Yes, rest assured that your position is understood and appreciated. Whether you realize it or not, Nuance is in a position very similar to mine. That is, most of the people who make the purchasing decisions in the market my Xpress-It has to live in value simplicity of installation and setup even above the quality of the voice synthesis. That is almost certainly part of why Nuance retired Eloquence: it intimidated those who would be installing the software that used it. When I demonstrated Xpress-It at a hospital in the medical center a decade ago, the speech pathologists liked the output quality but felt they didn’t have time to install it on end-user systems. That sort of made me wonder if their patients were mainly semi-vegetative.
A quick check into the history of Xpress-It makes clear why this overall matter is so critical for me. While I am formally classified as “severely disabled,” I have gone further in several areas than most disabled people. My moderate success has been due to a handful of key factors, with effective verbal communication near the top of the heap in recent years. Therefore, nobody should be surprised if I don’t go away quietly when deprived of that particular asset.
Sadly, I doubt we will get any useful responses from the engineering people you reached out to, but I am grateful for your attempt.
From: Elias, Rachel [mailto:Rachel.Elias@nuance.com]
Sent: Wednesday, January 16, 2013 10:05
To: Scott Royall
Subject: RE: Vocalizer
Scott, I’m going to forward this message to a few people to see if they have any further comment from an engineering standpoint. I’m in sales so I can only sell what we have today. Rachel
From: Scott Royall [mailto:royall]
Sent: Tuesday, January 15, 2013 9:08 PM
To: Elias, Rachel
Subject: RE: Vocalizer
Well, in regard to my third point, lamenting the sole focus on SAPI because of its innate difficulty in accomplishing certain things, sometimes the best way to get some insight into how an application programming interface works is by looking at sample programs. Looking in the header files gives one a real clue of how an engine can be interacted with. Yep, Vocalizer does indeed have a function named ve_ttsSetOutDevice. Interesting. The convoluted SAPI documentation mentions nothing about that one. Still, that’s just one point.
More important to my situation is Microsoft’s legendary commitment to legacy support of its enterprise-level technology, including ODBC. As I mentioned previously, I have to start migrating to 64-bit applications with all due speed. My two remaining machines are already there, and I need to follow them. But 32-bit applications will probably be around for the rest of my life, and it just so happens that even the 64-bit version of the interface Windows uses to connect client applications to their databases (with ODBC as its user-facing interface) still includes a 32-bit driver for connecting applications to something called SQL Server. I understand that, as an account manager, you probably don’t know what SQL Server is, but suffice it to say that it may be the intermediate solution to my problem.
To review, my need to migrate to 64-bit ODBC is what is driving my problem. My assistive communication software, Xpress-It, is 32-bit and uses ODBC to talk to its database via SQL. Xpress-It doesn’t currently know about SQL Server, but there’s no reason why it shouldn’t. The important part for us is that SQL Server communicates with the client’s database driver through the TCP stack, the very same thing your computer uses to communicate with the Internet. At that level, the 32-bit/64-bit dichotomy stops being a huge problem because the differences largely get dealt with automatically. In short, as long as Microsoft maintains a 32-bit SQL Server driver, and if Xpress-It will accept SQL Server, the urgency of my problem goes away.
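To make the idea concrete, a 32-bit Xpress-It could reach a 64-bit SQL Server instance over TCP through an ordinary ODBC connection string along these lines. The server address, port, database name, and driver name here are illustrative placeholders, not values from my actual setup:

```
Driver={SQL Server};Server=tcp:localhost,1433;Database=XpressIt;Trusted_Connection=Yes;
```

Because the driver on the client side speaks to the server over a network socket, the bitness of the application and the bitness of the server never have to match.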
Although this isn’t what I consider an ideal solution, because SQL Server isn’t easily administered, it may be the only solution I get. Nuance clearly isn’t going to offer any further help, because you (they) don’t think it’s in your best interest. That galls me to no end, but I get the fact that you don’t see me as your true customer. The core of that particular problem is the structure of the assistive and augmentative communication market, where purchasing decisions are made not by the end users who would be depending on the purchased product, but by professionals without any time to individualize the program. That’s just the way it is, and the few people like me with a critical need for our AAC software, and the skills necessary to write it, are simply out in the cold on our own.
From: Scott Royall [mailto:royall]
Sent: Monday, January 14, 2013 22:32
To: ‘Elias, Rachel’
I said I was going to be honest, and it’s true that I have yet to compile any of the samples that come with the SDK, so I don’t know what they sound like. However, I have read enough of the enclosed documentation to understand that Vocalizer is not yet ready to replace Eloquence. Here are the reasons:
· Vocalizer is currently 32-bit, just like Eloquence, and you cannot mix 32- and 64-bit code within one application. That’s the core issue forcing me to sunset Eloquence. The good news here is that Vocalizer is known to be compatible with at least Visual Studio 2005, so 64-bit support might require nothing beyond a recompilation.
· Vocalizer relies on fixed voice packages, just like your competitors. Mind you, I certainly understand the reasoning for it; I said earlier today that your customers in what you call the “accessibility” market are predominantly software houses. In turn, their predominant customers are healthcare professionals who are responsible for deciding what assistive software is installed on their clients’ machines. Those professionals want things as simple as possible because they have no time for customization. And the end users rarely even realize that customization was ever a possibility. However, people like me who really depend on their assistive communication software realize that our electronic voices become an integral part of our identities. This is not mere vanity; it is important to be able to distinguish ourselves from others using the same synthesizer.
There’s also an eminently practical aspect to having adjustable voice parameters. Yes, I am an Amateur Radio operator, and yes, my own voice settings are tested for maximum intelligibility in a broad variety of conditions. Of course, you needn’t be a ham to value that flexibility; just try using a synthesizer in a noisy environment.
· I note that Vocalizer only supports the SAPI standard. Yes, that is the default standard these days, but it’s going to make tasks like switching audio output devices programmatically something of a nightmare.
I am very sorry to say that, as things appear right now, I basically have no upgrade path.