I'm thinking this is an idea for the more advanced SDK devs out there. Imagine a modular module with text entry & alphabetical recognition linked to an internal speech synthesiser: you type in a word or two & press a 'talk' button...
Anyone think this would be possible? Anything is possible right?
Wanted :- Speech synthesis module.
From what I recall from my studies, speech synthesis is mostly a matter of taking some sound sources (called voiced or unvoiced) and passing them through formant filters (bandpass filters, actually).
Voiced sounds are responsible for vowel generation ([a], [o]), while unvoiced sounds are just filtered noise...
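The source-plus-formant-filter idea above can be sketched in a few lines of numpy. This is a toy, not a real synthesiser: the sample rate, formant frequencies and bandwidths are assumed textbook-style values I've picked for illustration, and the filter is a plain two-pole resonator standing in for a proper formant filter.

```python
import numpy as np

FS = 16000  # assumed sample rate (Hz)

def resonator(x, freq, bw):
    """Two-pole bandpass (formant-style) filter at `freq` Hz, bandwidth `bw` Hz."""
    r = np.exp(-np.pi * bw / FS)
    a1 = -2.0 * r * np.cos(2.0 * np.pi * freq / FS)
    a2 = r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] - a1 * (y[n - 1] if n >= 1 else 0.0) \
                    - a2 * (y[n - 2] if n >= 2 else 0.0)
    return y

def voiced_source(f0, dur):
    """Impulse train at the pitch frequency -- the 'voiced' excitation."""
    x = np.zeros(int(dur * FS))
    x[::int(FS / f0)] = 1.0
    return x

def unvoiced_source(dur, seed=0):
    """White noise -- the 'unvoiced' excitation."""
    return np.random.default_rng(seed).standard_normal(int(dur * FS))

# A rough [a]-like vowel: voiced source through three formant resonators
# (formant values are illustrative, not measured)
vowel_a = voiced_source(110.0, 0.5)
for freq, bw in [(700, 110), (1220, 120), (2600, 160)]:
    vowel_a = resonator(vowel_a, freq, bw)

# An [s]-like fricative: noise through a single high resonance
fric_s = resonator(unvoiced_source(0.3), 5000, 1000)
```

Feeding the same resonator chain either an impulse train or noise is exactly the voiced/unvoiced split described above.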
Now the problem would be to build up something which could be fed with text.
The only idea I have now would be to use a pattern generator...
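One minimal reading of "pattern generator": a lookup table mapping each letter to a source type and a set of formant frequencies, which a player module would then step through. A sketch, with entirely made-up table entries (a real text-to-speech front end needs letter-to-phoneme rules, not a per-letter table):

```python
# Toy letter-to-sound table -- source type plus formant frequencies in Hz.
# Values are illustrative placeholders, not a real phoneme inventory.
PATTERNS = {
    "a": ("voiced", (700, 1220, 2600)),
    "o": ("voiced", (450, 800, 2600)),
    "s": ("unvoiced", (5000,)),
}

def text_to_patterns(text):
    """Map input text to a sequence of (source type, formants) patterns,
    silently skipping characters the table doesn't know."""
    return [PATTERNS[c] for c in text.lower() if c in PATTERNS]

print(text_to_patterns("Sao"))
```

Each emitted pattern could then drive the voiced/unvoiced source and formant filters for a fixed duration per letter.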
Will have a look if I can still find something more accurate about this topic.
<font size=-1>[ This Message was edited by: FRA59-HELP on 2006-06-18 14:53 ]</font>
Here's an article with some good links at the bottom >>> http://emusician.com/tutorials/emusic_v ... ex.html