AI has been around for a half century but only recently exploded into our consciousness with the arrival of some fabulous hardware and software.
One of the pivotal pioneers was the Canadian scientist Geoffrey Hinton, often known as the “Godfather of Deep Learning.” He took the idea of a “neural network” to a new level. This network design is loosely modeled on our biological networks of neurons and synapses and the way they process information.
Rather than a physical network like the one we might have in our homes or offices, this network is actually a mathematical algorithm. You feed in something (say, an image of a cat), and it breaks the image down, studying chunks of it for patterns that let it judge whether it is looking at a cat or not. It might conclude there is an 85% probability that this is an image of a cat.
How does it do that “thinking”? Essentially, the algorithm has to be trained. You send through, say, 100,000 images of various cats, and it analyzes each one. When it consistently and confidently says “cat,” you lock in the “values” that make up the algorithm.
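To make the idea concrete, here is a minimal sketch in Python. It is not a full neural network, just a single artificial “neuron”; the two input features and all the numbers are made up for illustration. The training loop nudges the “values” (weights) after each example until the output drifts toward the correct answer, which is the essence of how these algorithms adjust themselves:

```python
import math

# Toy training set: imagine each "image" has been boiled down to two
# made-up features (say, ear pointiness and whisker score).
# Label 1 = cat, 0 = not a cat. All numbers here are invented.
data = [
    ((0.9, 0.80), 1), ((0.8, 0.90), 1), ((0.7, 0.95), 1),
    ((0.1, 0.20), 0), ((0.2, 0.10), 0), ((0.05, 0.30), 0),
]

# The "values" that training will adjust: two weights and a bias.
w1, w2, b = 0.0, 0.0, 0.0

def predict(x1, x2):
    # Weighted sum squashed into a 0..1 probability (sigmoid).
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# Training: after every example, nudge each value a little in the
# direction that shrinks the error (gradient descent).
for _ in range(2000):
    for (x1, x2), label in data:
        err = predict(x1, x2) - label
        w1 -= 0.5 * err * x1
        w2 -= 0.5 * err * x2
        b  -= 0.5 * err

# A cat-like input now yields a probability close to 1,
# much like the "85% probability it's a cat" described above.
print(predict(0.85, 0.9))
```

Once trained, the final weights are “locked in,” and the same `predict` function can be run on new images it has never seen.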
Now here’s where it gets tricky. The algorithms teach themselves, modifying their own values in order to consistently come up with “cat.” This makes it hard for us to tinker with them, since they end up in a state that they themselves created. Might there come a time when we lose control of this AI? This is the scenario that Elon Musk worries about: “…until people see robots going down the street killing people, they don’t know how to react.”
18-02 The Players
John Bulloch » 18 Artificial Intelligence