As early as the 4th century BC, Archytas was credited with building a self-propelled flying device called “the pigeon”. At first, the dream of artificial intelligence was driven by mechanics. So-called automata, self-operating machines designed to perform a specific task or function, were the predecessors of modern AI. In the 15th century, Leonardo da Vinci designed a number of automata, including a humanoid robot in the form of a clockwork knight. These machines were able to perform simple tasks and movements, such as waving and moving their arms. Already then, people started to imagine what it would be like to invent human-like machines.
Another idea that spread widely was that of the homunculus: a small, humanoid creature said to be created through alchemical processes. Alchemy can be seen as the predecessor of modern chemistry, though at the time it was mixed with a great deal of religion and superstition. The concept of the homunculus dates back to the 16th century and the work of the Swiss alchemist and physician Paracelsus. Although this was not the idea of creating a mind through technical achievement, it shows the appeal of the idea of creating (and thus having power over) an artificially intelligent entity.
The modern term “artificial intelligence” was coined by John McCarthy in 1955. He organised the Dartmouth Summer Research Project on Artificial Intelligence, during which researchers from the fields of cybernetics, automata theory, and complex information processing came together to clarify and develop ideas about thinking machines. But the real breakthrough happened thanks to neural networks.
One can argue that the history of neural networks starts with the computational model of a neuron proposed by Warren McCulloch and Walter Pitts in 1943. McCulloch was a professor of psychiatry and a neuroscientist who studied the chemical and electrical properties of neurons. Pitts was a genius logician who, at the age of 12, read the Principia Mathematica (the first of whose three volumes is a 680-page book on the foundations of mathematics). He then wrote its author Bertrand Russell a critique and was consequently invited to study in Cambridge, which he declined. At the age of 15 he ran away from home and attended lectures at the University of Chicago without ever being officially enrolled. I highly recommend reading his full Wikipedia article.
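To give you a feel for how simple this model is, here is a toy sketch of a McCulloch-Pitts neuron in Python (my own illustration, not their original formalism): the neuron sums its binary inputs and fires if the sum reaches a fixed threshold.

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """Return 1 (fire) if enough inputs are active, else 0 (stay silent)."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs and a threshold of 2, the neuron behaves like logical AND:
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # 1 (fires)
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # 0 (silent)
```

Despite its simplicity, McCulloch and Pitts showed that networks of such threshold units can compute any logical function, which is why the model is seen as a starting point for neural networks.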
From then on there were three main approaches to computational neuroscience: (1) simulation of neurons with digital computers, (2) emulation of neurons and neural networks with analog circuits, and (3) mathematical models that explain information processing based on neural mechanisms. (If you don’t know the difference between digital and analog computation, just think of digital computation as operating on the distinct values 0 and 1, while analog computation operates on a continuous, messy spectrum between 0 and 1 and is therefore less precise.) The whole field was driven by technological advancement as well as by discoveries about neural mechanisms in the nervous system.
Much of our understanding of how neural networks work comes from research on the visual system. One problem that AI researchers didn’t anticipate is the difficulty of recognizing objects. How can we recognise a new instance of a cup if we have never seen it before? And what’s even more difficult for an AI is being given an image and having to recognise the cup from an array of pixels. In order to know which pixels belong to the cup, the object has to be recognised; but to recognise the cup, the AI needs to know which pixels belong to it. Which came first, the chicken or the egg?
The solution lies in recognising basic features and then combining them into more and more complex patterns. One breakthrough came when Hubel & Wiesel discovered the visual properties that neurons in the visual cortex were sensitive to. While experimenting with a rather unhappy-looking cat, they found by accident that the neuron whose activity they were measuring was sensitive to the movement of a thin line (the video explains the full story). In other parts of the visual cortex there are neurons that are well connected to the neurons sensitive to these primitive features. They react to simple geometric patterns like corners and circles and their motion. Continuing this system, groups of neurons (layers in artificial neural networks) represent increasingly complex patterns, up to the point where researchers found a neuron activated by only one person – the famous Jennifer Aniston. If you are into neuroscience, I can recommend having a look at the classic paper Invariant visual representation by single neurons in the human brain by Quiroga et al. If you are more interested in how exactly neural networks work, I recommend this video by 3Blue1Brown.
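This hierarchy can be sketched as a toy example (entirely my own illustration, not a real vision model): two “neurons” that each detect a primitive line feed into a third “neuron” that fires only when both are active – a corner detector built from line detectors.

```python
# A 3x3 patch of pixels (1 = dark, 0 = light) showing a corner shape.
patch = [
    [1, 1, 1],
    [1, 0, 0],
    [1, 0, 0],
]

def horizontal_line(p):
    """First-layer 'neuron': fires if the top row is a solid line."""
    return all(v == 1 for v in p[0])

def vertical_line(p):
    """First-layer 'neuron': fires if the left column is a solid line."""
    return all(row[0] == 1 for row in p)

def corner(p):
    """Second-layer 'neuron': fires only when both line detectors fire."""
    return horizontal_line(p) and vertical_line(p)

print(corner(patch))  # True – a corner is two lines meeting
```

Real networks learn such detectors from data rather than having them hand-written, but the principle is the same: each layer combines simpler features from the layer below.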
And now to the actual subject of this blog: ChatGPT. If you don’t live under a rock, you have probably heard about AI-generated art. But recently the company OpenAI released a very sophisticated chatbot called ChatGPT. Built on this understanding of neural networks, combined with supervised training and huge computational resources, it gives people free access, for the first time, to an AI that can compete with humans. I will give you a little demonstration of what it’s capable of.
What I asked: please write a funny poem about a student psychology congress in switzerland called "First Winter Congress" that is located in zurich (I had to specify Zurich because in the first version it assumed the congress was in a ski resort).
First Winter Congress, in Zurich we convene
A gathering of students, all bright and keen
We came to listen to lectures and talks
And to network with others and go for walks
But as we arrived, we were met with dismay
The weather was cold, the sky gray
The Congress was held in the heart of the city
But the streets were empty, it was oh so pretty
We couldn't ski or sled or even build a snowman
We were stuck inside, as cold as a pan
But the Congress must go on, they said with a smile
So we huddled inside and talked for a while
We learned about psychology and the mind
And left feeling warm and inclined
To come back next year, to this snowy land
For the Second Winter Congress, by popular demand.
So what do you think? Have you already played around yourself with this language model? I will gladly listen to your thoughts on this technological revolution when we meet at the congress!
Written by: Maxim Gärtner
This blog was also written with the help of ChatGPT, and some passages were taken directly from the language model’s output.
© 2022 First Winter Congress of the International Students Initiative