Human language simpler than previously thought
And now for something completely different.
Our focus here at BotScene.Net is on humanoid robotics — and in particular, the fun and educational uses to which they can be put, such as duking it out in the ring.
But eventually, we will reach the point of full-sized humanoid robots that can reliably walk, see, climb stairs, sit down, get up, and so on. And then what? There will be no point to such machines if they are not able to “think” and act on their own — and that means, in particular, that we must be able to speak to them, and them to us.
So this new study from Cornell University is important. Most natural language processing (NLP) work has assumed a hierarchical structure for human language, and applied hierarchical parsing techniques just as you would for a programming language. But this doesn’t work very well. Such parsers will cheerfully parse deeply nested sentences no human would naturally comprehend, and utterly fall down on reasonably simple structures that don’t happen to fit the programmed grammar.
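To see the mismatch concretely, here is a toy sketch (my own illustration, not from the Cornell study): a tiny recursive-descent parser for a hypothetical grammar of center-embedded relative clauses. The grammar, noun list, and verb list are all made up for this example. People struggle with more than about two levels of embedding, but the hierarchical parser accepts any depth without complaint.

```python
# Toy recursive-descent parser (illustrative only) for the grammar:
#   S  -> NP V
#   NP -> "the" N  |  "the" N NP V    (object relative clause, no "that")
NOUNS = {"rat", "cat", "dog", "flea"}
VERBS = {"bit", "chased", "died", "annoyed"}

def parse_np(tokens, i):
    """Try to parse an NP starting at index i; yield every index just past it."""
    if i + 1 < len(tokens) and tokens[i] == "the" and tokens[i + 1] in NOUNS:
        yield i + 2                              # plain NP: "the N"
        for j in parse_np(tokens, i + 2):        # embedded relative: "the N NP V"
            if j < len(tokens) and tokens[j] in VERBS:
                yield j + 1

def parses(sentence):
    tokens = sentence.split()
    return any(j + 1 == len(tokens) and tokens[j] in VERBS
               for j in parse_np(tokens, 0))

# One level of embedding is easy for people; three levels is nearly
# incomprehensible -- but the parser happily accepts both.
print(parses("the rat died"))                                              # True
print(parses("the rat the cat the dog the flea annoyed chased bit died"))  # True
print(parses("the rat cat died"))                                          # False
```

The last sentence fails only because it falls outside the toy grammar, which is exactly the brittleness described above: anything the grammar writer didn’t anticipate gets rejected, no matter how natural it sounds.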
The Cornell cognitive scientists now claim that language is not hierarchical at all, but sequential: sounds combine into chunks of meaning, which are then strung together to make more complex meanings. Of course, I’m sure there is some hierarchy at some level, but it’s likely to be quite shallow.
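The sequential picture is easy to sketch in code. Below is a minimal, hypothetical chunker (again my own illustration, not the researchers’ model): it scans left to right, absorbing the longest stored multi-word chunk it recognizes at each position, and produces a flat string of chunks rather than a parse tree. The `CHUNKS` lexicon and its labels are invented for the example.

```python
# A flat, left-to-right chunker: a toy sketch of the "sequential" view.
# CHUNKS is a hypothetical lexicon of stored multi-word units.
CHUNKS = {
    ("pick", "up"): "ACTION",
    ("the", "red", "block"): "OBJECT",
    ("please",): "POLITE",
}
MAX_LEN = max(len(c) for c in CHUNKS)

def chunk(words):
    """Greedily absorb the longest known chunk at each position, left to right."""
    out, i = [], 0
    while i < len(words):
        for n in range(min(MAX_LEN, len(words) - i), 0, -1):
            key = tuple(words[i:i + n])
            if key in CHUNKS:
                out.append((CHUNKS[key], key))
                i += n
                break
        else:
            out.append(("WORD", (words[i],)))   # unknown word passes through as-is
            i += 1
    return out

print(chunk("please pick up the red block".split()))
# A flat sequence of chunks: POLITE, ACTION, OBJECT -- no tree anywhere.
```

Note the contrast with the parser above: there is no recursion and no grammar, just a lexicon and a single pass over the input, which is also why this style of processing degrades gracefully on input it has never seen.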
All this is good news for natural language researchers. And sooner or later, this will translate into smarter NLP code you can use in your own robot.