Ray Kurzweil rejects calls to halt AI research

Ray Kurzweil, a noted futurist and director of engineering at Google, published a rebuttal to the letter calling for a pause in AI research, arguing that the proposal is impractical and would deprive humankind of medical breakthroughs and other innovations that profoundly benefit society.

International letter on disrupting AI development

An open letter signed by scientists and celebrities from around the world (posted on FutureOfLife.org) called for a complete pause in the development of AI more powerful than GPT-4, OpenAI's latest model.

In addition to halting further development of AI, the signatories also called for the development of security protocols overseen by independent third-party experts.

Some of the points the authors of the open letter make:

  • AI carries a great risk
  • AI development should not continue until beneficial applications of the technology have been enumerated and justified
  • AI should only proceed when “we” (the thousands who signed the letter) are satisfied that the AI risks are manageable
  • AI developers are challenged to work with policymakers to develop AI governance systems composed of regulators.
  • The development of watermarking technologies to identify AI-created content and control the spread of the technology.
  • A system for assigning liability for damage caused by AI
  • Creating institutions to deal with the disruptions caused by AI technology

The letter appears to come from the perspective that AI technology is centralized and can be stopped by the few organizations in control of the technology. But AI is not exclusively in the hands of governments, research institutes and companies.

AI at this point is an open source and decentralized technology developed by thousands of individuals in a global collaboration.

Ray Kurzweil: Futurist, author and technical lead at Google

Ray Kurzweil has been developing software and machines with a focus on artificial intelligence since the 1960s. He has written many popular books on the subject and is famous for making predictions about the future that tend to be correct.

Out of 147 predictions he made about life in 2009, only three predictions, 2% in total, were wrong.

Among his predictions in the 1990s was that many physical media, such as books, would fall in popularity as they went digital. At a time in the 1990s when computers were big and bulky, he predicted that by 2009 computers would be small enough to carry, which turned out to be true (How my predictions are doing – 2010 PDF).

Ray Kurzweil’s recent predictions focus on all the good that AI will bring, particularly medical and scientific breakthroughs.

Kurzweil also focuses on the ethics of AI.

In 2017, he was one of the participants (along with OpenAI CEO Sam Altman) in authoring an open letter known as the Asilomar AI Principles, also published on the Future of Life website, which set out guidelines for the safe and ethical development of artificial intelligence.

Among the principles he helped shape:

  • “The goal of AI research should not be to create undirected intelligence, but useful intelligence.
  • Investments in AI should be accompanied by research funds to ensure their beneficial use
  • There should be a constructive and healthy exchange between AI researchers and policy makers.
  • Advanced AI could represent a profound change in the history of life on Earth and should be planned and managed with due care and resources.
  • Superintelligence should only be developed in the service of widespread ethical ideals and for the benefit of all mankind and not of any state or organization.”

Kurzweil’s response to the open letter asking for a pause in AI development stems from a lifetime of innovative technology and all the positive impact it can have on humanity and nature.

His answer focused on three main points:

  • The call for a pause is too vague to be practical
  • All nations would have to agree to the pause, or its goals will be thwarted from the start
  • A pause in development ignores the benefits, such as identifying cures for diseases

Too vague to be practical

His first point is that the letter's call to pause AI “more powerful than GPT-4” is too vague, because it assumes GPT-4 is the only meaningful benchmark for AI capability.

Kurzweil wrote:

“Regarding the open letter to ‘pause’ research on AI ‘more powerful than GPT-4’, this criterion is too vague to be practical.”

Nations will decide against the pause

His second point is that the demands outlined in the letter can only work if all researchers worldwide participate voluntarily.

Any nation that refuses to participate will gain an advantage, which is likely to happen.

He writes:

“And the proposal faces a serious coordination problem: those who agree to a pause may fall far behind companies or nations that disagree.”

This point makes it clear that the goal of a full pause is not achievable: nations will not surrender an advantage, and AI is democratized and open source, in the hands of individuals around the world.

AI brings significant benefits to humanity

There have been editorials dismissing AI as having very little benefit to society, arguing that increasing labor productivity is not enough to justify the feared risks.

Kurzweil’s final point is that the open letter calling for a pause in AI development completely ignores all the good that AI can do.

He explains:

“The advancement of AI in critical areas such as medicine and health, education, the pursuit of renewable energy sources to replace fossil fuels, and numerous other areas offers tremendous benefits.

… more nuance is needed if we are to unlock the profound benefits of AI for health and productivity while avoiding the real dangers.”

Dangers, fear of the unknown and benefits for humanity

Kurzweil makes good points about how AI can benefit society. His argument that there is no way to actually stop AI development is reasonable.

His explanation of AI emphasizes the profound benefits to humanity inherent in AI.

Could it be that OpenAI’s implementation of AI as a chatbot trivializes AI and overshadows its usefulness to humanity, while at the same time frightening people who don’t understand how generative AI works?

Featured image from Shutterstock/Iurii Motov