AI pioneer Geoffrey Hinton’s top 5 ethical concerns

AI pioneer Geoffrey Hinton, known for his revolutionary work in deep learning and neural network research, recently expressed concern about the rapid advances in AI and their potential impact.

Given his observations of new large language models like GPT-4, Hinton warns of several key issues:

  1. Machines that surpass human intelligence: Hinton believes that AI systems like GPT-4 are on track to be much more intelligent than initially thought, and may have better learning algorithms than humans.
  2. Risks of AI chatbots being exploited by “bad actors”: Hinton highlights the dangers of using intelligent chatbots to spread misinformation, manipulate voters and create powerful spambots.
  3. Few-shot learning: AI models can learn new tasks from just a few examples, allowing machines to acquire new skills as fast as, or even faster than, humans.
  4. Existential risk from AI systems: Hinton warns of scenarios in which AI systems create their own sub-goals and strive for greater power by surpassing human knowledge accumulation and sharing abilities.
  5. Impact on labor markets: AI and automation may displace jobs in certain industries, with manufacturing, agriculture and healthcare being particularly hard hit.

In this article, we delve deeper into Hinton’s concerns, his departure from Google to focus on the ethical and safety aspects of AI development, and the importance of responsible AI development in shaping the future of human-AI relationships.

Hinton’s departure from Google and ethical AI development

In an effort to address the ethical and security issues surrounding AI, Hinton decided to leave his position at Google.

This gives him the freedom to voice his concerns and engage in more philosophical work without the constraints of corporate interests.

Hinton explains in an interview with MIT Technology Review:

“I want to talk about AI security issues without worrying about how it interacts with Google’s business. As long as I’m being paid by Google, I can’t do that.”

Hinton’s departure marks a shift in his focus toward the ethical and safety aspects of AI. He intends to participate actively in ongoing dialogues on the responsible development and use of AI.

Leveraging his expertise and reputation, Hinton intends to help develop frameworks and policies that address issues such as bias, transparency, accountability, privacy and ethical compliance.

GPT-4 & bad actors

During a recent interview, Hinton raised concerns about the possibility of machines surpassing human intelligence. The impressive capabilities of GPT-4, developed by OpenAI and released earlier this year, prompted Hinton to reassess his earlier beliefs.

He believes language models like GPT-4 are on track to be much more intelligent than originally thought, and may have better learning algorithms than humans.

Hinton says in the interview:

“Our brain has 100 trillion connections. Large language models have up to half a trillion, at most a trillion. Yet GPT-4 knows hundreds of times more than any single person. Maybe it actually has a much better learning algorithm than we do.”

Hinton’s concerns revolve mainly around the significant differences between machines and humans. He likens the introduction of large language models to an alien invasion, emphasizing their superior language skills and knowledge compared to any individual.

Hinton says in the interview:

“These things are completely different from us. Sometimes I think it’s like aliens landed and people didn’t notice because they speak English very well.”

Hinton warns of the risks of AI chatbots becoming smarter than humans and being exploited by “bad actors”.

In the interview, he warns that these chatbots could be used to spread misinformation, manipulate voters and create powerful spambots.

“Look, here’s one way it could all go wrong. We know that many of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them to win wars or manipulate voters.”

Few-shot learning and AI superiority

Another aspect that worries Hinton is the few-shot learning ability of large language models.

These models can be shown just a few examples and then perform new tasks, even tasks they were not directly trained to do.

This remarkable ability to learn makes the speed at which machines acquire new skills comparable to, or even faster than, that of humans.

Hinton says in the interview:

“People[‘s brains] seemed to have some kind of magic. Well, that argument is moot once you take one of those big language models and teach it to do something new. It can learn new tasks extremely quickly.”
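In practice, the “few-shot” learning described here usually takes the form of in-context learning: the model is shown a handful of worked input–output examples inside its prompt and then generalizes to a new input, with no retraining. The following sketch only assembles such a prompt (the task and examples are illustrative, not from the interview, and no model is actually called):

```python
# Illustrative sketch of few-shot "in-context learning": a handful of
# worked examples are placed in the prompt, and the model is asked to
# continue the pattern. This builds the prompt text only; sending it to
# a large language model is left out.

def build_few_shot_prompt(task, examples, query):
    """Assemble a prompt from a task description, a list of
    (input, output) example pairs, and a new query to complete."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model would fill in this final answer
    return "\n".join(lines)

# Three examples are often enough for a capable model to pick up the task.
examples = [
    ("cheerful", "positive"),
    ("dreadful", "negative"),
    ("delighted", "positive"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each word.", examples, "miserable"
)
print(prompt)
```

The point of the pattern is that the "training" happens entirely inside the prompt: swapping in different examples retargets the model to a new task in seconds, which is the speed of skill acquisition Hinton is pointing at.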

Hinton’s concerns go beyond the immediate impact on labor markets and industries.

He raises the “existential risk” of what happens when AI systems become more intelligent than humans, and warns of scenarios in which AI systems create their own sub-goals and strive for more power.

Hinton provides an example of how AI systems can go wrong when developing sub-goals:

“Well, here’s a subgoal that almost always helps in biology: get more energy. The first thing that could happen is that these robots will say, ‘Let’s get more power. Let’s reroute all power to my chips.’ Another great sub-goal would be to make more copies of yourself. Does that sound good?”

Impact of AI on labor markets and risk management

Hinton points out that the impact of AI on jobs is a big concern.

AI and automation could take over repetitive and mundane tasks and lead to job losses in some sectors.

Manufacturing and factory workers could be hit hard by automation.

Robots and AI-controlled machines are proliferating in manufacturing and could take over risky and repetitive human jobs.

Automation is also advancing in agriculture, with automated tasks such as planting, harvesting, and crop monitoring.

In healthcare, certain administrative tasks can be automated, but roles that require human interaction and compassion are less likely to be replaced entirely by AI.

In summary

Hinton’s concerns about the rapid advances in AI and their potential impact underscore the need for responsible AI development.

His departure from Google demonstrates his commitment to address security considerations, foster open dialogue, and shape the future of AI in ways that ensure the well-being of humankind.

Though no longer at Google, Hinton’s contributions and expertise continue to play a critical role in shaping the AI space and guiding its ethical development.

Featured image created by the author using Midjourney