Vocalizer

Yes, rest assured that your position is understood and appreciated. Whether you realize it or not, Nuance is in a position very similar to mine. That is, most of the people who make the purchasing decisions in the market my Xpress-It has to live in value simplicity of installation and setup even above the quality of the voice synthesis. That is almost certainly part of why Nuance retired Eloquence: it intimidated those who would be installing the software that used it. When I demonstrated Xpress-It at a hospital in the medical center a decade ago, the speech pathologists liked the output quality but felt they didn’t have time to install it on end-user systems. That sort of made me wonder if their patients were mainly semi-vegetative.

A quick check into the history of Xpress-It makes clear why this overall matter is so critical for me. While I am formally classified as “severely disabled,” I have gone further in several areas than most disabled people. My moderate success has been due to a handful of key factors, with effective verbal communication near the top of the heap in recent years. Therefore, nobody should be surprised if I don’t go away quietly when deprived of that particular asset.

Sadly, I doubt we will get any useful responses from the engineering people you reached out to, but I am grateful for your attempt.

From: Elias, Rachel [mailto:Rachel.Elias@nuance.com]
Sent: Wednesday, January 16, 2013 10:05
To: Scott Royall
Subject: RE: Vocalizer

Scott, I’m going to forward this message to a few people to see if they have any further comment from an engineering standpoint. I’m in sales so I can only sell what we have today. Rachel

From: Scott Royall [mailto:royall@conchbbs.com]
Sent: Tuesday, January 15, 2013 9:08 PM
To: Elias, Rachel
Subject: RE: Vocalizer

Well, with regard to my third point, lamenting the sole focus on SAPI because of its innate difficulty in accomplishing certain things: sometimes the best way to get some insight into how an application programming interface works is by looking at sample programs. Looking in the header files gives one a real clue of how an engine can be interacted with. Yep, Vocalizer does indeed have a function named ve_ttsSetOutDevice. Interesting. The convoluted SAPI documentation mentioned nothing about that one. Still, that’s just one point.

More important to my situation is Microsoft’s legendary commitment to legacy support of its enterprise-level technology, including ODBC. As I mentioned previously, I have to start migrating to 64-bit applications with all due speed. My two remaining machines are already there, and I need to follow them. But 32-bit applications will probably be around for the rest of my life, and it just so happens that even the 64-bit version of the interface that Windows uses to connect client applications with their databases (and which presents ODBC as its user interface) still includes a 32-bit driver to connect applications with something called SQL Server. I understand that, as an account manager, you probably don’t know what SQL Server is, but suffice it to say that it may be the intermediate solution to my problem.

To review, my need to migrate to 64-bit ODBC is what is driving my problem. My assistive communication software, Xpress-It, is 32-bit, and uses ODBC to talk to its database via SQL. Xpress-It doesn’t currently know about SQL Server, but there’s no reason why it shouldn’t. The important part for us is that SQL Server communicates with the client’s database driver through the TCP stack, the very same thing that your computer uses to communicate with the Internet. At that level, the 32-bit/64-bit dichotomy stops being a huge problem because the differences largely get dealt with automatically. So, as long as Microsoft maintains a 32-bit SQL Server driver, and if Xpress-It will accept SQL Server, the urgency of my problem goes away.
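To make this concrete, here is roughly what a DSN-less ODBC connection string to SQL Server over TCP looks like; the driver name is the one Microsoft shipped with SQL Server 2012, and the server and database names are just placeholders, not anything from my actual setup:

```
Driver={SQL Server Native Client 11.0};Server=tcp:myserver,1433;Database=XpressItDB;Trusted_Connection=yes;
```

The identical string works from a 32-bit or a 64-bit client; each side loads the driver matching its own bitness, and the wire protocol in between doesn’t care.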

Although this isn’t what I consider an ideal solution, since SQL Server isn’t easily administered, it may be the only solution I get. Nuance clearly isn’t going to offer any further help, because you (they) don’t think it’s in your best interest. That galls me to no end, but I get the fact that you don’t see me as your true customer. The core of that particular problem is in the structure of the assistive and augmentative communication market, where purchasing decisions are not made by the end users who would be depending on the purchased product, but by professionals without any time to individualize the program. That’s just the way it is, and the few people like me with a critical need for our AAC software and the skills necessary to write it are simply out in the cold on our own.

From: Scott Royall [mailto:royall@conchbbs.com]
Sent: Monday, January 14, 2013 22:32
To: ‘Elias, Rachel’
Subject: Vocalizer

Rachel,

I said I was going to be honest, and it’s true that I have yet to compile any of the samples that come with the SDK, so I don’t know what they sound like. However, I have read enough of the enclosed documentation to understand that Vocalizer is not yet ready to replace Eloquence. Here are the reasons:

· Vocalizer is currently 32-bit, just like Eloquence, and you cannot mix 32- and 64-bit code within one application. That’s the core issue that is forcing me to sunset Eloquence. The good news here is that Vocalizer is known to be compatible with at least Visual Studio 2005, so going 64-bit might not require anything beyond a recompilation.

· Vocalizer relies on fixed voice packages, just like your competitors. Mind you, I certainly understand the reasoning for it; I said earlier today that your customers in what you call the “accessibility” market are predominantly software houses. In turn, their predominant customers are healthcare professionals who are responsible for deciding what assistive software is installed on their clients’ machines. Those professionals want things as simplified as possible because they have no time for customization. And the end users rarely even realize that customization was ever a possibility. However, people like me who really depend on their assistive communication software realize that our electronic voices become an integral part of our identities. This is not just vanity: it is important to be able to distinguish ourselves from others using the same synthesizer.

There’s also an eminently practical aspect to having adjustable voice parameters. Yes, I am an Amateur Radio operator, and yes, my own voice settings are tested for maximum intelligibility in a broad variety of conditions. Of course, you needn’t be a ham to value that flexibility; just try using a synthesizer in a noisy environment.

· I note that Vocalizer only uses the SAPI standard. Yes, that is the default standard these days, but it’s going to make tasks like switching audio output devices programmatically something of a nightmare.
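As a footnote to my first point: if Nuance rebuilt the SDK’s libraries for x64, the client-side work might amount to little more than selecting the 64-bit platform when rebuilding with a recent Visual Studio. Something along these lines, where the solution name is only a placeholder:

```
msbuild XpressIt.sln /p:Configuration=Release /p:Platform=x64
```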

I am very sorry to say that, as things appear right now, I basically have no upgrade path.



Nuance: Vocalizer Expressive

Rachel,

Thank you for the quick response. You should rest assured that I will investigate Vocalizer to the extent that the demonstration software permits. You should additionally expect my investigation to generate any number of questions, and I hope you will see it as in Nuance’s own best interest to redirect them to your software engineers.

It is fair to say that I no longer have business requirements in the sense that I’m no longer employed for a living. Instead, I have life requirements, because I literally live or die by my assistive communication software. Can you really understand that? Your department (or is it a division?) is truly in the business of selling engines. Sure, they’re software, but they’re still engines. That means your customers are almost never your end users. Instead, your customers are software companies that sell to speech pathologists and other healthcare professionals who have criteria different from end users. They’re at least as interested in simple administration as they are in actual voice quality. Even your end users are, well, mostly just neophyte computer users with no real idea what voice synthesis can be like. Rachel, even today, I occasionally have to give public talks, and the general reaction I always get to the quality of my voice is still “holy shit, what are you using?” There have been times when other disabled voice synthesis users have started crying when they learned I had the ability to customize my voices. Your regular customers still don’t recognize that voice synthesis comes down to both quality and giving users the capability of creating their own identity.

Yeah, I’m pretty unique among your customers in that I’m a software developer who is his own end user. That makes me something of an ideal customer for you since I represent both perspectives. In all frankness, I’m going to subject Vocalizer to some very brutal testing, but just remember that this is all real-world stuff that Eloquence does every day. The reality is that Vocalizer is going to fail some tests, and I hope Nuance will work with me to pass the re-testing. Is that fair enough?

Scott

From: Elias, Rachel [mailto:Rachel.Elias@nuance.com]
Sent: Monday, January 14, 2013 08:27
To: royall@conchbbs.com
Subject: Nuance: Vocalizer Expressive

Hi Scott –

The ETI Eloquence TTS software has been end of life for a very long time. Unfortunately, we do not have any software engineers working on this SDK, and it’s impossible to respond to your request. Have you tested our new embedded Vocalizer Expressive TTS engine? Many accessibility customers like it and have been using it in their products. The latest engine incorporates Nuance, Loquendo & SVOX technology into a single SDK. I will send you a 45-day eval. I really hope this helps your business requirements! Rachel


RACHEL ELIAS

Account Manager, Mobile and Embedded Solutions

Nuance Communications, Inc.

781 565 5293 Direct

617 968 8620 Mobile

866 732 9590 Fax

NUANCE.COM

Eloquence on 64 bits

Rachel (or whoever is actually reading this),

As I was going back through my archives to cull out Nuance addresses for this email, I was alarmed to realize that I have been lobbying Nuance on this subject for two years with no apparent progress. Good grief!

I’m no dummy; I know it’s a common practice in the IT industry for one company to buy out a competitor just to shutter its products. However, if the products aren’t at least nominally maintained, it becomes difficult to protect the intellectual property from being reverse-engineered by somebody else. I certainly don’t have the resources to do that, but someone could. I’ve always thought that Nuance never fully appreciated what a hot little property the Eloquence speech engine was. Sure, a couple of other speech engines do sound arguably better, but they achieve that by using very few voices set up by engineers. Those packages have no provision for user individualism. Eloquence does, and still it holds its own against the highbrow competition.

Yet, I can’t really say that Nuance has totally ignored Eloquence. Last I looked, you did still sell the SDK to OEMs as an embeddable package, and it is true that that particular market is still 32-bit. However, even that will only last another couple of years. It’s also true that you have a Solaris version, which is 64-bit, so a 64-bit x86 version should be child’s play.

Eloquence is one of those rare pieces of software that reaches a point of virtual perfection, and it basically has: I can’t think of any changes I would make to version 6.1. It has simply become a matter of keeping the SDK updated for current technology. If Nuance has done that, nobody told me.

Let me restate my situation for clarity. I am totally dependent on an application I developed, Xpress-It, for verbal communication. Xpress-It is built around the Eloquence engine and uses ODBC to communicate with its database. Your software engineers should immediately recognize the cliff I’m headed for, as I essentially rev hardware every two years. Both of my “active duty” laptops run Windows 7 x64. It does run 32-bit executables, but as long as Xpress-It must stay 32-bit to accommodate Eloquence, so must everything else that uses ODBC. That includes the Office suites. Hopefully, Nuance now recognizes the absolute seriousness of my dilemma. If I cannot convince Nuance to make an x64 SDK of at least Eloquence 6.1 available this year, I will be unable to speak, because other ODBC dependencies will force that upgrade. As the messages below show, I am already starting to experience that problem.
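The bitness split is easy to see on any 64-bit Windows machine: Microsoft ships two separate ODBC Administrator tools, one per architecture, and a 32-bit application can only use drivers and DSNs registered through the 32-bit one:

```
C:\Windows\System32\odbcad32.exe    (manages 64-bit ODBC drivers and DSNs)
C:\Windows\SysWOW64\odbcad32.exe    (manages 32-bit ODBC drivers and DSNs)
```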

Yep, buying up the IP of others does have its disadvantages. Sorry.

Scott