The enlightened minds of mathematicians, cryptographers, engineers, physicists, inventors and others have shaped the computer and the Internet into what we know today. Some of them also caught a glimpse of the future and envisioned the technology we are using now or are about to see. Keeping an eye on the visionaries helps us prepare for the future.
Alan Mathison Turing is broadly acknowledged as the father of artificial intelligence – the human-like intelligence exhibited by machines and software. He was born 102 years ago, on the 23rd of June. On the occasion of his anniversary, we have selected some essential facts and trivia about artificial intelligence. Enjoy reading!
According to Turing, a computer can be considered to “think” if, in a conversation between a human and a machine, the human cannot tell whether they are talking to a person or a computer. An intelligent machine would also be able to perceive its environment and take actions that maximize its chances of success.
Turing believed that, instead of building a complex program to mimic the adult mind, it would be better to create a simple one to simulate a child’s mind and then educate it.
Widely used on the Internet, the CAPTCHA test is based on a reversed form of the Turing test. The goal of both the Turing test and the CAPTCHA is to distinguish between a human and a computer.
Note: A Turing test consists of blind, five-minute text conversations between human judges on one side and computers or humans on the other. If at least 30 percent of the human judges cannot tell the machine from a human, the computer can be said to possess artificial intelligence.
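To make the 30 percent criterion concrete, here is a minimal Python sketch that tallies a judging panel’s verdicts. It is purely illustrative and not part of Turing’s paper or any official protocol:

```python
def passes_turing_test(judge_verdicts, threshold=0.30):
    """judge_verdicts: list of booleans, True when a judge believed
    the machine was human after a five-minute text conversation."""
    fooled = sum(judge_verdicts)
    return fooled / len(judge_verdicts) >= threshold

# Example: 10 of 30 judges are fooled -> about 33%, above the 30 percent bar.
verdicts = [True] * 10 + [False] * 20
print(passes_turing_test(verdicts))  # True
```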
Eugene Goostman – First software to pass the Turing test
Eugene Goostman, a computer program pretending to be a 13-year-old Ukrainian boy, convinced enough judges that it was human to pass the Turing test this June, reportedly becoming the first program to do so, as reported by The Independent.
It may be hard to imagine, but a form of artificial intelligence is making decisions for you on your computer or smartphone while you are reading this text. For instance, Bitdefender communicates with a data center where artificial intelligence uses complex mathematical algorithms to process huge amounts of data and separate malicious files from clean ones.
These technologies make use of machine learning techniques such as decision trees, neural networks, and Boltzmann machines; they analyze enormous volumes of data, evaluate file characteristics to separate malicious software and behavior from clean ones, and make associations and comparisons without human intervention. On top of that, artificial intelligence supervises other artificial intelligence implementations to make sure that everything works as planned. Welcome to the future!
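To give a rough sense of how one of those techniques works, the sketch below trains a tiny decision tree to tell clean files from malicious ones. The feature names and sample data are invented for illustration and are not Bitdefender’s actual models or data:

```python
# Illustrative sketch only: invented file features, not a real detection pipeline.
from sklearn.tree import DecisionTreeClassifier

# Each row describes one file: [size_kb, is_packed, suspicious_api_calls]
samples = [
    [120, 0, 0],   # known clean file
    [450, 0, 1],   # known clean file
    [300, 1, 9],   # known malicious file
    [80,  1, 12],  # known malicious file
]
labels = ["clean", "clean", "malicious", "malicious"]

# Train a small decision tree on the labeled examples.
model = DecisionTreeClassifier(max_depth=3)
model.fit(samples, labels)

# Classify a previously unseen file without human intervention.
new_file = [[200, 1, 7]]
print(model.predict(new_file))
```

In a real product, a model like this would be trained on far larger sample sets and combined with many other detection layers.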
Debates about artificial intelligence are as fervent today as they were in Turing's time. Beyond the obvious benefits, physicist Stephen Hawking also warns of the risks of such complex technology.
“Recent landmarks such as self-driving cars, a computer winning at “Jeopardy!,” and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation,” the physicist says in a recent Business Insider article.
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Autonomous robots of the future “are likely to behave in anti-social and harmful ways unless they are very carefully designed,” scientist Steve Omohundro writes in a paper in the Journal of Experimental & Theoretical Artificial Intelligence.
“When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it!’ But imagine this outcome from the chess robot’s point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess. This has very low utility and so expected utility maximization will cause the creation of the instrumental subgoal of preventing itself from being unplugged. If the system believes the roboticist will persist in trying to unplug it, it will be motivated to develop the subgoal of permanently stopping the roboticist. Because nothing in the simple chess utility function gives a negative weight to murder, the seemingly harmless chess robot will become a killer out of the drive for self-protection.”
Image credit: 1. Wikipedia, 2. BBC