As we all become increasingly familiar with the pros and cons of cognitive AI, it is important to remember that whilst there are real, applicable use cases, there is also a dark side to the technology.
So dark, in fact, that the man widely seen as the godfather of Artificial Intelligence (AI) has quit his job, warning about the growing dangers of developments in the field and saying that some of the risks posed by AI chatbots were “quite scary”.
Geoffrey Hinton has left Google after more than a decade at the US search giant, citing fears about the rapid development of generative AI.
Hinton, a part-time professor at the University of Toronto who is widely viewed as the godfather of modern artificial intelligence, said he quit to speak freely about the dangers of AI.
The 75-year-old British scientist told the New York Times that he partly regretted his life’s work, as he warned about misinformation flooding the public sphere and AI usurping more human jobs than predicted.
“I console myself with the normal excuse: if I hadn’t done it, somebody else would have,” said Hinton. He added that it was “hard to see how you can prevent the bad actors from using it for bad things”.
His comments follow a rush of groundbreaking AI launches over the past six months, such as Microsoft-backed OpenAI’s ChatGPT in November last year and Google’s own chatbot Bard in March.
The accelerating pace of development and public deployment has raised growing concern among some AI researchers and tech ethicists.
In March, Elon Musk and more than 1,000 tech researchers and executives called for a six-month “pause” on the development of advanced AI systems to halt what they call a “dangerous” arms race.
Hinton said he was concerned the race between Google and Microsoft to launch AI-driven products would push forward the development of AI without appropriate guardrails and regulations in place. He added: “I don’t think they should scale this up more until they have understood whether they can control it.”
Hinton also voiced concerns that AI could surpass human intelligence, a prospect he now believed was coming faster than he had expected.
In artificial intelligence, neural networks are systems loosely modelled on the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. Training such multi-layered networks is known as deep learning.
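To make the idea concrete, here is a minimal toy sketch (not how production chatbots are built, and all names here are illustrative): a two-layer neural network that learns the XOR function by repeatedly adjusting its weights from examples, which is the essence of deep learning at a vastly smaller scale.

```python
import numpy as np

# Toy deep-learning demo: a tiny two-layer network learns XOR,
# a function no single-layer model can represent.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The weights are the network's "knowledge"; learning adjusts them.
W1 = rng.normal(0, 0.5, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, (8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error back through the layers
    # (cross-entropy loss with a sigmoid output gives gradient p - y).
    grad_z2 = (p - y) / len(X)
    gW2 = h.T @ grad_z2
    gb2 = grad_z2.sum(axis=0, keepdims=True)
    grad_h = grad_z2 @ W2.T * (1 - h ** 2)   # tanh derivative
    gW1 = X.T @ grad_h
    gb1 = grad_h.sum(axis=0, keepdims=True)
    # Update the weights: this is the "learning from experience".
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(preds)  # should reproduce the XOR truth table outputs
```

Systems such as GPT-4 work on the same principle but with billions of weights and far more layers, which is where the "deep" in deep learning comes from.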
“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.
I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have. We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.
And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
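Hinton's point about copies sharing knowledge mirrors how data-parallel training works. The toy sketch below (an illustrative simplification, with made-up names and a simple linear model standing in for a chatbot) shows several "copies" each computing an update from its own data, then pooling those updates into one shared set of weights, so every copy instantly knows what any copy learned.

```python
import numpy as np

# Toy illustration of knowledge sharing between digital copies:
# each copy computes a gradient on its own slice of data, the
# gradients are averaged, and one update is applied to the single
# shared set of weights that all copies use.
rng = np.random.default_rng(1)

true_w = np.array([2.0, -3.0])   # the relationship the copies try to learn
weights = np.zeros(2)            # the one shared model

def local_gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

n_copies, lr = 4, 0.1
for _ in range(200):
    grads = []
    for _ in range(n_copies):
        X = rng.normal(size=(16, 2))   # each copy sees different data
        y = X @ true_w
        grads.append(local_gradient(weights, X, y))
    # Knowledge sharing: one averaged update reaches every copy at once.
    weights -= lr * np.mean(grads, axis=0)

print(weights)  # converges towards true_w
```

A human analogue would require each of the 10,000 people to learn the lesson individually; here, the shared weights mean a single update is enough.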
Dr Hinton joins a growing number of experts who have expressed concerns about AI – both the speed at which it is developing and the direction in which it is going.
The post Beware the dark side to the growth of AI to GPT4 appeared first on Payments Cards & Mobile.