Machines That Learn Language More Like Kids Do

Kids have a unique advantage that helps them master language faster than adults do. Drawing on this fact, researchers have built a system they say mimics the way children learn a language.

Captured in a computer model, the approach is expected to significantly improve human-machine interaction. It might also deepen our understanding of how exactly a child's brain manages to acquire language skills faster than an adult brain.

Learning directly from humans

Ordinarily you might expect machines to be modeled on the adult brain, but that isn't the best fit for language acquisition. Children have a better way of learning a language than grownups: they acquire it by listening to people, observing the environment around them, and then connecting the dots. That is what helps them establish a language's word order, mastering key things like how subjects and verbs relate and where they sit in a sentence.

In computing, machines have traditionally acquired their language skills through syntactic and semantic parsers. On top of that, these systems depend on being trained from sentences annotated by humans.
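To make that setup concrete, here is a minimal toy sketch (not the researchers' code; every name and the tiny grammar are hypothetical) of fully supervised training: each training sentence comes paired with a human-annotated logical form, and the "parser" can only learn by memorizing mappings from those annotations.

```python
# Toy illustration of conventional, fully supervised semantic parsing:
# every sentence in the training set carries a human-written logical form.
ANNOTATED_DATA = [
    ("the dog runs", "run(dog)"),
    ("the cat sleeps", "sleep(cat)"),
]

def train(annotated_pairs):
    """Learn a word -> symbol lexicon from annotated (sentence, meaning) pairs."""
    lexicon = {}
    for sentence, logical_form in annotated_pairs:
        predicate, argument = logical_form.rstrip(")").split("(")
        words = sentence.split()
        lexicon[words[-1]] = predicate  # naively map the verb to the predicate
        lexicon[words[1]] = argument    # and the noun to the argument
    return lexicon

def parse(sentence, lexicon):
    """Map a new sentence of the same shape to a logical form."""
    words = sentence.split()
    return f"{lexicon[words[-1]]}({lexicon[words[1]]})"

lexicon = train(ANNOTATED_DATA)
print(parse("the dog sleeps", lexicon))  # -> sleep(dog)
```

The point of the sketch is the dependency it exposes: nothing is learned here except what a human has already spelled out in the annotations, which is exactly the bottleneck the article goes on to describe.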

And for things to work, the meaning carried by the sentence structure needs to be crystal clear, which puts the conventional way of training machines at a disadvantage: in the real world, people communicate using partial sentences, jumbled language, and run-on thoughts.

Why is this important?

Parsers, and the broader need for machines to understand language and communicate, are becoming incredibly important. The technology now affects web searches, security (through speech recognition systems), natural-language database querying, and interactive voice assistants such as Siri and Alexa. It is also expected to have an even bigger influence on home robotics.

The trouble is that gathering data through annotation is time-consuming, especially for less common languages. On top of that, even for common languages, annotations vary with the annotator's geographic background, and they may not reflect the way people naturally speak.

The new parser

According to a paper MIT researchers presented this week at the Empirical Methods in Natural Language Processing conference, the new parser is able to learn through observation, closely mimicking how children go about their language-acquisition process.

They say this approach allows the parser to gather data on its own, learning from both the data it collects and the structure of the language, all by watching captioned videos. When the system is later presented with a new sentence, it can draw on what it has learned to work out the meaning by looking at the structure of the language as a whole, without being tied to the training videos.
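The weakly supervised idea described above can be sketched in miniature (again, hypothetical names and a toy world, not the authors' implementation): instead of receiving an annotated meaning for each caption, the learner proposes candidate meanings and keeps only those consistent with what it observed in the video.

```python
# Toy sketch of weak supervision from captioned video: the only training
# signal is whether a candidate meaning matches events observed in the clip.
from itertools import product

def candidate_meanings(predicates, entities):
    """Enumerate every predicate(entity) meaning a caption might have."""
    return [f"{p}({e})" for p, e in product(predicates, entities)]

def learn_from_clip(observed_events, predicates, entities):
    """Keep only the candidate meanings consistent with the video's events."""
    return [m for m in candidate_meanings(predicates, entities)
            if m in observed_events]

# The clip shows a dog running; the caption "the dog runs" arrives with
# no annotation, so the learner filters candidates against the video.
survivors = learn_from_clip(
    observed_events={"run(dog)"},
    predicates=["run", "sleep"],
    entities=["dog", "cat"],
)
print(survivors)  # -> ['run(dog)']
```

The design choice this illustrates is the article's central one: the video itself supplies the supervision, so no human has to write down the meaning of each sentence.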

What this means for AI's future

This somewhat "weakly supervised" approach to training machines gives AI technology another boost: models would require less data to train, and they could grow much the way human language intelligence grows.

In terms of applications, this means Alexa, Siri, and other voice-recognition technologies could be in for their next big breakthrough. Another field that stands to benefit is social robotics, where agents would be able to communicate more naturally with the human workers around them.