Google’s DeepMind AI has taught itself to walk, and it’s as entertaining as it is fascinating
The technology wasn’t told how to walk, just to get from A to B – and the results are slightly odd.
Google’s artificial intelligence company has created a program capable of teaching itself how to walk and jump without ever being shown how.
DeepMind’s website says it is on a “scientific mission” to push boundaries in AI. Its latest creation was able to make an avatar overcome a series of obstacles simply by giving it an incentive to get from one point to another.
The AI was given no prior information on how it should walk, and what it came up with is rather unusual…
A paper on the work, posted to the arXiv preprint server hosted by Cornell University Library, states that the technology uses the reinforcement learning paradigm, which allows “complex behaviours to be learned directly from simple reward signals”.
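The underlying idea is easier to see at a tiny scale. The sketch below is not DeepMind’s system, which trains neural-network controllers in a physics simulator at far greater scale; it is a minimal, self-contained Python toy illustrating the same principle: an agent that is only told how much reward it earned for reaching “B”, never how to get there, and that gradually improves its behaviour from that signal alone. All names and parameter values are illustrative assumptions.

```python
import random

# A toy "get from A to B" task: positions 0..9 along a corridor, 9 is the goal ("B").
# The only feedback the agent ever receives is a reward of 1 for reaching the goal.
N_STATES = 10
ACTIONS = [-1, +1]              # step left or step right
EPISODES = 500
MAX_STEPS = 200                 # safety cap per episode
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q[state][action_index]: the agent's current estimate of long-term reward.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; reward is 1 only at the goal, 0 everywhere else."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def greedy(qvals):
    """Pick the best-valued action, breaking ties at random so early behaviour stays exploratory."""
    best = max(qvals)
    return random.choice([i for i, q in enumerate(qvals) if q == best])

for _ in range(EPISODES):
    state = 0                   # start at "A"
    for _ in range(MAX_STEPS):
        # Epsilon-greedy: mostly exploit what has worked so far, occasionally explore.
        if random.random() < EPSILON:
            a_idx = random.randrange(len(ACTIONS))
        else:
            a_idx = greedy(Q[state])
        next_state, reward, done = step(state, ACTIONS[a_idx])
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[next_state])
        Q[state][a_idx] += ALPHA * (reward + GAMMA * best_next - Q[state][a_idx])
        state = next_state
        if done:
            break

# After training, the greedy policy at every position simply heads right, from A to B.
print([("right" if q[1] > q[0] else "left") for q in Q[:-1]])
```

After enough episodes the learned behaviour is simply to walk right from A to B. DeepMind’s agents face the same kind of sparse incentive, only with far richer bodies and terrain, which is why the gaits they discover can look so strange.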
The avatars traversing the simulated environments included a humanoid, a pair of legs and a spider-like four-legged creature tasked with leaping across gaps.
“Our experiments suggest that training on diverse terrain can indeed lead to the development of non-trivial locomotion skills such as jumping, crouching, and turning for which designing a sensible reward is not easy,” they wrote.
“In that sense, choosing a seemingly more complex environment may actually make learning easier.”