Scientists teach computers to learn like humans

New York: Scientists have developed an algorithm that captures our learning abilities, enabling computers to recognise and draw simple visual concepts that are mostly indistinguishable from those created by humans.

The work marks a significant advance in the field – one that dramatically shortens the time it takes computers to ‘learn’ new concepts and broadens their application to more creative tasks.

“Our results show that by reverse engineering how people think about a problem, we can develop better algorithms,” said Brenden Lake from New York University.

“Moreover, this work points to promising methods to narrow the gap for other machine learning tasks,” said Lake.
When humans are exposed to a new concept – such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet – they often need only a few examples to understand its make-up and recognise new instances.

While machines can now replicate some pattern-recognition tasks previously done only by humans – ATMs reading the numbers written on a check, for instance – machines typically need to be given hundreds or thousands of examples to perform with similar accuracy.

“It has been very difficult to build machines that require as little data as humans when learning a new concept,” said Ruslan Salakhutdinov from the University of Toronto.

“Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science,” said Salakhutdinov.

The researchers developed a ‘Bayesian Program Learning’ (BPL) framework, in which concepts are represented as simple computer programmes.

For instance, the letter ‘A’ is represented by computer code – resembling the work of a computer programmer – that generates examples of that letter when the code is run.

Yet no programmer is required during the learning process: the algorithm programmes itself by constructing code to produce the letter it sees.

Also, unlike standard computer programmes that produce the same output every time they run, these probabilistic programmes produce different outputs at each execution.

This allows them to capture the way instances of a concept vary, such as the differences between how two people draw the letter ‘A.’
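To illustrate the idea, the following minimal Python sketch treats a hypothetical ‘letter program’ as a procedure that redraws the same concept with slightly different strokes each time it runs. It is not the researchers’ actual BPL code; the stroke coordinates and noise level are purely illustrative assumptions.

```python
import random

# Illustrative sketch only (not the published BPL model): a "letter program"
# that produces a new, slightly different exemplar every time it is executed.

# Idealised control points for three strokes of a capital 'A':
# two slanted sides and a horizontal crossbar (assumed coordinates).
LETTER_A_STROKES = [
    [(0.0, 0.0), (0.5, 1.0)],    # left diagonal
    [(0.5, 1.0), (1.0, 0.0)],    # right diagonal
    [(0.25, 0.5), (0.75, 0.5)],  # crossbar
]

def jitter(point, noise=0.05):
    """Perturb a control point, mimicking natural variation in handwriting."""
    x, y = point
    return (x + random.gauss(0, noise), y + random.gauss(0, noise))

def draw_letter_a():
    """Run the 'program' for the letter A; each call yields a new exemplar."""
    return [[jitter(p) for p in stroke] for stroke in LETTER_A_STROKES]

# Two executions of the same probabilistic program give two different,
# but still recognisable, versions of the letter, much as two people's
# handwriting differs.
print(draw_letter_a())
print(draw_letter_a())
```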

While standard pattern recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns “generative models” of processes in the world, making learning a matter of ‘model building’ or ‘explaining’ the data provided to the algorithm.

In the case of writing and recognising letters, BPL is designed to capture both the causal and compositional properties of real-world processes, allowing the algorithm to use data more efficiently.
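A rough sense of what ‘explaining’ the data means can be given with another hypothetical sketch, again not the published implementation: each concept is a small generative model built from stroke parts, and recognition asks which concept’s model best accounts for an observed drawing. The concept templates, noise assumption and scoring rule below are illustrative assumptions.

```python
# Illustrative sketch (not the authors' BPL code): recognition as asking
# which concept's generative model best explains an observed drawing.

CONCEPTS = {
    "A": [[(0.0, 0.0), (0.5, 1.0)],
          [(0.5, 1.0), (1.0, 0.0)],
          [(0.25, 0.5), (0.75, 0.5)]],
    "L": [[(0.0, 1.0), (0.0, 0.0)],
          [(0.0, 0.0), (0.8, 0.0)]],
}

def stroke_distance(observed, template):
    """Sum of squared distances between corresponding control points."""
    return sum((ox - tx) ** 2 + (oy - ty) ** 2
               for (ox, oy), (tx, ty) in zip(observed, template))

def log_likelihood(observed_strokes, template_strokes, noise=0.05):
    """Score how well a concept's strokes explain an observed drawing,
    assuming Gaussian pen noise around each control point."""
    if len(observed_strokes) != len(template_strokes):
        return float("-inf")  # wrong number of parts: a poor explanation
    return sum(-stroke_distance(obs, tem) / (2 * noise ** 2)
               for obs, tem in zip(observed_strokes, template_strokes))

def classify(observed_strokes):
    """Pick the concept whose generative model best explains the data."""
    return max(CONCEPTS, key=lambda c: log_likelihood(observed_strokes, CONCEPTS[c]))

# A slightly wobbly 'A' is still best explained by the 'A' program.
noisy_a = [[(0.02, -0.01), (0.48, 1.03)],
           [(0.51, 0.98), (1.02, 0.01)],
           [(0.26, 0.49), (0.73, 0.52)]]
print(classify(noisy_a))  # -> "A"
```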

The research appears in the journal Science.

PTI