DeepMind has explored chess and Go to develop its strongest programmes and most sophisticated techniques, which is evident from the number of times its systems have beaten human Go players. It went a step further and developed AlphaGo Zero (AGZ), a program that taught itself entirely without human intervention and managed to defeat DeepMind’s own earlier version. It learnt through reinforcement learning from games of self-play. For a while, this self-taught virtual Go player was the last we heard from DeepMind.
What’s AlphaGo Zero?
AGZ, a virtual Go player from DeepMind, reached superhuman strength at the game in just 72 hours, without learning from any human games. It then beat the version of the original AlphaGo that had defeated human champion Lee Sedol, arguably making it the best Go player in the world. To be precise, it was AGZ’s generalised successor, AlphaZero, that went on to beat the leading conventional chess program, Stockfish, with a scoreboard recording 28 wins, 0 losses and 72 draws.
DeepMind’s AGZ gained instant popularity because it learnt quickly and efficiently, and did so from scratch. It didn’t use any human data, which is usually provided as a baseline to train such AI. Instead, it started from random play and, through self-play alone, discovered the moves that were most effective at winning.
What Happened Next?
After AGZ’s remarkable performance, the team stalled any further research on it before it could reach its full potential. According to reports, Demis Hassabis, CEO and co-founder of DeepMind, admitted during a talk at Google’s Go North conference in Toronto that the company shut down the experiment before it could determine the upper limits of AlphaGo Zero’s intelligence.
Though he hinted that DeepMind may spin up AlphaGo Zero again in future to find out how much further it could go, and what other alien moves it could discover, all further development seems to have been halted. The system could have been used to help Go players learn new moves and strategies, or been directed at other tasks such as protein folding, reducing energy consumption or searching for revolutionary new materials, but the company did not invest in it as much as it could have. They needed the computers for something else, he said.
What Is DeepMind Doing Now?
Stalling development on AGZ marked the end of the multi-layered project. The DeepMind team then committed itself to other problems that weren’t expected to yield any immediate results. The team’s underlying goal is to build artificial intelligence systems that are smarter than AlphaGo Zero, optimise energy better, and achieve far-reaching results.
Arguing that not every result has to be breathtaking, experts said that improving battery life by 30 percent, or tweaking Google Assistant to mimic humans convincingly, was pretty impressive work by DeepMind. Though these developments might not have created a buzz on the scale of beating a Go champion, they are nonetheless noteworthy.
The team at DeepMind believes that not all the problems in AI have been solved yet, and that better solutions are needed in the long run. Some experts have cheekily commented that the future of humanity does not depend on another chess-playing programme, and that there are plenty of real-world problems that need urgent attention.
What Has DeepMind Been Up To?
Hassabis recently tweeted about DeepMind’s latest research into how the brain constructs images in the mind’s eye. In a blog post titled Neural scene representation and rendering, he talks about developments in visual recognition systems. With this research, DeepMind has introduced the Generative Query Network (GQN), a framework within which machines learn to perceive their surroundings by training only on data they gather themselves as they move around scenes. The blog post notes that a GQN learns by trying to make sense of its observations of the world around it.
I’ve long been fascinated by how the brain constructs images in the mind’s eye. Our new @ScienceMagazine paper intros GQN: a model capable of recreating a 3D representation of a scene from a handful of 2D snapshots + rendering it from any new camera angle https://t.co/o5WC8qLo4N
— Demis Hassabis (@demishassabis) June 14, 2018
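At a high level, a GQN has two parts: a representation network that folds several observed snapshots and their camera poses into one scene representation, and a generator that renders the scene from a new, unseen viewpoint. The real GQN uses convolutional encoders and a recurrent generative model trained end to end; the NumPy sketch below only illustrates that data flow, with random linear maps standing in for the trained networks, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for GQN's two trained networks (assumption: the real GQN
# uses convolutional encoders and a recurrent generator, not linear maps).
W_enc = rng.normal(size=(7 + 2, 10))   # (image features + camera pose) -> view embedding
W_gen = rng.normal(size=(10 + 2, 7))   # (scene representation + query pose) -> image features

def represent(images, poses):
    """Sum per-viewpoint embeddings into a single, order-independent scene representation."""
    r = np.zeros(10)
    for img, pose in zip(images, poses):
        r += np.concatenate([img, pose]) @ W_enc
    return r

def render(representation, query_pose):
    """Predict the view from a camera pose that was never observed."""
    return np.concatenate([representation, query_pose]) @ W_gen

images = rng.normal(size=(3, 7))   # features of 3 observed 2D snapshots
poses = rng.normal(size=(3, 2))    # camera poses for those snapshots

r = represent(images, poses)
prediction = render(r, np.array([0.5, -0.3]))  # render from a new angle
```

Because the representation is a sum over the observed views, it does not matter in which order the machine collected its snapshots of the scene, which matches the idea of an agent gathering its own observations as it moves around.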
Researchers are also teaching computers relational reasoning, a cognitive capability at the core of human intelligence. It is the ability to consider relationships between different mental representations, objects, ideas and so on, which is crucial to human cognitive development and vital to solving problems. They are working on modifying ML methods so that they can learn about the physical relationships between static objects and the behaviour of moving objects over time. They demonstrated the first capability using CLEVR, a data set of simple objects.
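One well-known way to build this pairwise-relationship idea into a network is DeepMind’s Relation Network, which applies a small shared function to every pair of object representations and sums the results before a final readout. The article does not spell out the architecture, so the sketch below is a minimal illustrative version in NumPy, with random weights standing in for the learned functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "learned" weights for the pairwise function g and the readout f.
# In a trained relational-reasoning model these would be small MLPs.
W_g = rng.normal(size=(8, 16))   # concatenated object pair (4 + 4 dims) -> relation vector
W_f = rng.normal(size=(16, 3))   # aggregated relation vector -> 3 output scores

def relu(x):
    return np.maximum(x, 0.0)

def relation_network(objects):
    """Apply a shared pairwise function g to every object pair, sum, then read out with f.

    objects: array of shape (n_objects, 4), e.g. per-object features from a
    CLEVR-style scene (position, colour, size and so on).
    """
    pair_sum = np.zeros(16)
    for o_i in objects:
        for o_j in objects:
            pair = np.concatenate([o_i, o_j])   # consider the pair (o_i, o_j)
            pair_sum += relu(pair @ W_g)        # g(o_i, o_j)
    return pair_sum @ W_f                       # f(sum over all pairs)

objects = rng.normal(size=(5, 4))
out = relation_network(objects)
```

Because the relations are summed over all pairs, the output is the same no matter what order the objects are listed in, which is what makes this kind of module well suited to reasoning about sets of objects in a scene.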
In another effort, researchers are working on showing how a similarly modified machine learning system can learn to predict the behaviour of simple objects in two dimensions.
While these advances may not be eye-popping breakthroughs right now, they are part of a larger development that may soon be witnessed by all. The company deems them important to overall progress in the AI and ML space. Without new ideas, AI systems would remain incapable of holding real conversations, solving difficult problems or winning more games, and that is exactly what the company is focusing on.