While working on a grammar conversion project from Nuance GSL to SRGS ABNF, I stumbled upon a few grammars all having the same design problem: using optional parts to make a few words repeat a varying number of times. This is a pattern we’ve observed regularly on various projects.
Here is an example of such a grammar for recognizing sequences of 4 to 8 digits (I omitted the semantic tags for clarity):
#ABNF 1.0 ISO-8859-1;
mode voice;
language en-US;
root $digits4To8;

public $digits4To8 = $digit $digit $digit $digit [$digit] [$digit] [$digit] [$digit];

...
The original GSL grammar looked like this:
Digit4To8 ( Digit Digit Digit Digit ?Digit ?Digit ?Digit ?Digit )
The GSL syntax does not support the <N-M> repeat operator found in ABNF, which repeats an expansion from N to M times. That's why the grammar was written this way in the first place. In an ABNF grammar, it would instead have been written as:
#ABNF 1.0 ISO-8859-1;
mode voice;
language en-US;
root $digits4To8;

public $digits4To8 = $digit <4-8>;

...
In GSL, it would have been better to write the grammar as:
Digit4To8 ( Digit Digit Digit Digit ?Digit1To4 )
Digit1To4 ( Digit ?Digit1To3 )
Digit1To3 ( Digit ?Digit1To2 )
Digit1To2 ( Digit ?Digit )
Both grammars are equivalent, right? So what’s the problem?
Well, both grammars recognize the same language (the same set of sentences), but the first grammar behaves very differently: it is highly ambiguous. That means some sentences can be parsed in two or more different ways. See what you get when you interpret one such sentence in NuGram IDE:
The interpreter tells us (at the top-left of the window) that there are 6 different parses for the sentence. (I’ve seen grammars generating more than 100 parses for a given sentence!).
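That parse count is easy to verify with a bit of combinatorics: in the first grammar, a parse is just a choice of which optional slots matched a digit. Here is a quick sketch in Python (the function name and setup are mine, for illustration, not anything produced by NuGram IDE):

```python
from itertools import product

# The grammar is: 4 mandatory digits followed by 4 optional digits.
# A parse of an n-digit utterance is a choice of which optional
# slots are filled, so we enumerate all 2^4 skip/match choices.
def count_parses(n_digits, mandatory=4, optionals=4):
    count = 0
    for choice in product([0, 1], repeat=optionals):
        if mandatory + sum(choice) == n_digits:
            count += 1
    return count

# A 6-digit utterance fills 4 mandatory slots plus any 2 of the 4
# optional ones: C(4, 2) = 6 distinct parses.
print(count_parses(6))                           # 6
print([count_parses(n) for n in range(4, 9)])    # [1, 4, 6, 4, 1]
```

So only 4-digit and 8-digit utterances are unambiguous; every length in between has several parses, peaking at 6 for six digits.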
The problem with ambiguous grammars is that they can hurt both recognition accuracy and recognition performance. Suppose a grammar covers one sentence that is highly ambiguous and another that is not, but is phonetically close to the first. Since speech recognition engines limit their recognition search space, the many duplicate hypotheses for the ambiguous sentence can crowd the beam, so the unambiguous sentence may be pruned early in the recognition window even if it is the one that would come up with the best score at the end.
The other problem is recognition performance. Semantic tags are typically executed at the end of the recognition process, once the user has finished talking. If there are many identical hypotheses with the same score, the recognition engine has to execute the tags (interpreted ECMAScript code) for each of them, most of which are redundant and useless, causing longer delays in the speech application.
Determining whether a grammar is ambiguous is a very hard problem; in fact, it is undecidable in general. That means any tool that claims to decide ambiguity will inevitably be wrong on some grammars. But that doesn't mean there are no tools to help detect ambiguities. For instance, NuGram IDE will tell you if there are two or more different parses for a given sentence. And the sentence generator tool can also be configured to detect sentences that are ambiguous at the semantic level (sentences producing two or more different semantic values).
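To give an idea of what such a check does under the hood, here is a brute-force derivation counter for the two digit grammars above. This is a hypothetical illustration of the technique, not how NuGram IDE actually works: it counts the distinct ways each grammar can derive an n-digit utterance, and any count above one signals an ambiguity.

```python
# Each rule body is a list of (symbol, optional) pairs; "d" is the
# terminal digit, any other symbol names another rule.
GRAMMARS = {
    # Digit Digit Digit Digit ?Digit ?Digit ?Digit ?Digit
    "flat": {"root": [("d", False)] * 4 + [("d", True)] * 4},
    # The right-recursive rewrite: Digit4To8 / Digit1To4 / ... / Digit1To2
    "nested": {
        "root": [("d", False)] * 4 + [("r1", True)],
        "r1": [("d", False), ("r2", True)],
        "r2": [("d", False), ("r3", True)],
        "r3": [("d", False), ("d", True)],
    },
}

def count_derivations(rules, body, n):
    """Number of distinct ways `body` can consume exactly n digits."""
    if not body:
        return 1 if n == 0 else 0
    (sym, optional), rest = body[0], body[1:]
    # If the symbol is optional, one family of derivations skips it.
    total = count_derivations(rules, rest, n) if optional else 0
    if sym == "d":
        if n > 0:
            total += count_derivations(rules, rest, n - 1)
    else:
        # Try every split: the sub-rule consumes k digits, the rest n - k.
        for k in range(1, n + 1):
            ways = count_derivations(rules, rules[sym], k)
            if ways:
                total += ways * count_derivations(rules, rest, n - k)
    return total

for name, rules in GRAMMARS.items():
    counts = [count_derivations(rules, rules["root"], n) for n in range(4, 9)]
    print(name, counts)
# flat   [1, 4, 6, 4, 1]  -> ambiguous for 5- to 7-digit utterances
# nested [1, 1, 1, 1, 1]  -> exactly one parse per utterance
```

The nested rewrite gives every utterance a single derivation, which is exactly why it is the better way to express "4 to 8 digits" in a formalism without a repeat operator.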