I would say it's probably more than 1%... but is anyone here
familiar with sound waves, and with what human speech looks like
waveform-wise?
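To make the point concrete, here's a minimal sketch (not from the original
post, and using a synthetic signal rather than real audio) of the naive
approach people often imagine: split words at low-energy frames. With real
connected speech, the energy rarely drops to silence between words, which is
why this kind of splitter fails and why parsing the input stream is so hard.

import numpy as np

RATE = 8000          # samples per second (assumed)
FRAME = 200          # 25 ms analysis frames

def synth(duration, freq):
    """Crude voiced-sound stand-in: a sine burst with a little noise."""
    t = np.arange(int(duration * RATE)) / RATE
    return np.sin(2 * np.pi * freq * t) + 0.05 * np.random.randn(t.size)

# "Two words" run together with almost no pause, as in natural speech.
signal = np.concatenate([synth(0.4, 180), 0.2 * synth(0.05, 150), synth(0.5, 220)])

# Short-time energy per frame.
frames = signal[: signal.size - signal.size % FRAME].reshape(-1, FRAME)
energy = (frames ** 2).mean(axis=1)

# Naive rule: a frame is a "word boundary" if its energy is low.
threshold = 0.1 * energy.max()
boundaries = np.where(energy < threshold)[0]
print("low-energy frames:", boundaries)   # often empty for connected speech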
>
>On Thu, 29 Oct 1998, Fred Cisin (XenoSoft) wrote:
>
>> Even the task of parsing the input stream to separate the words from each
>> other has remained an elusive goal. That is a large part of why the
>> current "dictation" systems remain unsuitable for transcription, closed
>> captioning, aids for the deaf, etc. Even now, the best systems available
>> require intense interaction between the speaker and the program.
>
>Yup, it's a *very* difficult problem. Basically, there's not a
>straightforward mapping between sound waves and what we hear. Even once
>you do all of the relatively simple signal processing and word
>recognition, you're left with the basically impossible task of context
>analysis (which requires almost all of our brain to get right, and we
>still blow it about 1% of the time).
>
>-- Doug
>