If there is one cliché that prevails in most fictional portrayals of Artificial Intelligence (AI), it is that machines completely lack imagination: a quality reserved for the human brain and unattainable for silicon ones, no matter how nearly limitless their processing power. Or is it? If the progress of technology has shown us anything, it is that our own imagination falls short when it comes to predicting the future: this very quality, imagination, is already within the reach of machines thanks to a new type of algorithm called the Generative Adversarial Network (GAN).
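The adversarial idea behind a GAN can be sketched in a few lines of code: one network invents samples from random noise, a second network tries to tell them apart from real data, and each improves by competing with the other. The snippet below is a minimal, illustrative sketch assuming PyTorch; the tiny dimensions and the Gaussian "real" data are placeholders, not the setup used in systems that generate photographs.

```python
# Minimal sketch of the adversarial idea behind a GAN (assumes PyTorch).
# The generator invents samples from random noise; the discriminator tries to
# distinguish them from "real" data. Sizes and data here are toy placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # illustrative dimensions only

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # "Real" samples: points from a simple Gaussian stand in for real photos.
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real data 1 and generated data 0.
    opt_d.zero_grad()
    d_loss = loss(discriminator(real), torch.ones(64, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call its fakes real.
    opt_g.zero_grad()
    g_loss = loss(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

In a real image-generating system, both networks would be deep convolutional models trained on photographs rather than two-dimensional points, but the competitive training loop is the same in spirit.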
It was a night in 2014 when computer scientist Ian Goodfellow, then a doctoral student in machine learning at the University of Montreal, Canada, met some of his classmates at a bar to celebrate a graduation. During the evening, a discussion arose about how to teach machines to invent representations of real objects, without copying existing ones, so that the result would look like a real photograph.
AI systems are experts at handling huge volumes of data to solve problems, and they can even learn without human supervision. Yet something as seemingly simple as producing, on their own, a plausible image of, say, a human face is a fiendishly complicated task for them.
Some neuroscientists point out that the excellence of the human brain lies in our unmatched ability to process patterns: from a very young age we can identify images of a face, no matter how different they are from one another, because we know what makes a face a face. In recent years, the deep learning algorithms used in neural networks (computing systems inspired by the human brain) have given machines an uncanny ability to recognize patterns, whether they are words in a conversation or the surroundings in which an autonomous vehicle drives.
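To make the idea of machine pattern recognition concrete, the sketch below (again assuming PyTorch, with purely synthetic data) trains a tiny neural network to separate two clusters of points; production systems apply the same principle, at vastly larger scale, to faces, speech, or road scenes.

```python
# Minimal sketch of pattern recognition with a neural network (assumes PyTorch).
# A tiny classifier learns to separate two synthetic "patterns" (clusters of points).
import torch
import torch.nn as nn

# Two toy patterns: points around (-2, -2) labeled 0, points around (2, 2) labeled 1.
x = torch.cat([torch.randn(200, 2) - 2.0, torch.randn(200, 2) + 2.0])
y = torch.cat([torch.zeros(200, dtype=torch.long), torch.ones(200, dtype=torch.long)])

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

# Repeatedly nudge the network's weights to reduce its classification error.
for epoch in range(200):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

accuracy = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")
```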