On January 14th, I will be in Luxembourg to present the results of a joint research project to reviewers of the European Commission.
We’ve not been vocal about it (and frankly, I’m not looking for excuses; that’s just plain laziness on my part), but Nu Echo has been an active consortium member of a European research project over the last two years: the SpeDial project, funded by the European Commission’s 7th Framework Programme (FP7). The consortium, led by Prof. Alex Potamianos, included commercial entities, namely VoiceWeb in Athens, Greece, and ourselves, as well as a number of academic research partners:
- the Athena Research and Innovation Center in Information, Communication and Knowledge Technologies, in Greece,
- the Telecommunication Systems Institute at the Technical University of Crete,
- the KTH Royal Institute of Technology in Stockholm, Sweden, and
- INESC-ID in Lisbon, Portugal.
The project, whose name stands for Spoken Dialogue Analytics, aimed to apply speech analytics technologies to the IVR world. To quote Prof. Potamianos, the project proposes “a process for spoken dialogue service development, enhancement and customization of deployed services, where data logs are analyzed and used to enhance the service in a semi-automated fashion”. Technologies employed in the project include age/gender detection, affective analysis of speech and text, and hotspot detection.
As part of this project, Nu Echo has significantly enhanced Atelier, its internal speech tuning environment. For instance, we added full support for the common SPDXml file format devised in the project, as well as a sophisticated dialogue path navigator that lets us interactively explore the paths actual callers took through a dialogue and pinpoint dialogue hotspots (think Google Analytics’ behavior flow, but for speech applications!). We also devised effective techniques and tools to automate the search for tuning opportunities. A preliminary version of these tools was presented at the SpeechTEK conference in New York last summer.
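To give a flavor of what hotspot detection involves, here is a deliberately simplified sketch (my own illustration, not Atelier’s actual implementation): treat each call log as a sequence of dialogue states with recognition outcomes, and flag the states where callers fail disproportionately often.

```python
from collections import Counter

# Hypothetical call logs: each call is a sequence of (dialogue state, outcome).
# "nomatch" and "noinput" outcomes count as recognition failures.
calls = [
    [("greeting", "ok"), ("get_account", "nomatch"), ("get_account", "ok"), ("confirm", "ok")],
    [("greeting", "ok"), ("get_account", "nomatch"), ("get_account", "nomatch"), ("operator", "ok")],
    [("greeting", "ok"), ("get_amount", "ok"), ("confirm", "ok")],
]

def find_hotspots(calls, threshold=0.3):
    """Return the dialogue states whose failure rate exceeds the threshold."""
    visits, failures = Counter(), Counter()
    for call in calls:
        for state, outcome in call:
            visits[state] += 1
            if outcome in ("nomatch", "noinput"):
                failures[state] += 1
    return {state: failures[state] / visits[state]
            for state in visits
            if failures[state] / visits[state] > threshold}

print(find_hotspots(calls))  # → {'get_account': 0.75}
```

A real tool would of course work from much richer logs (confidence scores, barge-ins, transfers, full dialogue paths rather than isolated states), but the core idea is the same: aggregate behavior across many calls to surface the few dialogue states worth a tuner’s attention.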
In the upcoming weeks, I will present in more detail the results of our work and how the tools we developed help us tame the complexity of speech tuning. Stay tuned!