An insight into the world of human training

At a time when AI is increasingly shaping the customer experience, understanding how these technologies work has never been more important, especially for marketers.

A recent Bloomberg report has shed light on the human workforce that trains Google’s Bard chatbot and highlighted the essential role thousands of contractors play in shaping this AI tool’s responses.

This in-depth report uncovers the realities of AI development and its significant implications for the people who use these tools.

The quality, accuracy, and trustworthiness of AI-driven interactions can impact brand reputation, customer trust, and ultimately your bottom line.

As we delve into the human processes behind the Bard AI chatbot, we gain valuable insights into the challenges and opportunities ahead for companies using AI in their marketing strategies.

A look inside the AI training ground

Google’s Bard is known for its quick and confident answers to a wide range of questions.

However, anonymous contract workers told Bloomberg that behind these AI capabilities lies the work of frustrated humans.

These contractors, employed by firms such as Appen Ltd. and Accenture Plc, work under strict deadlines to ensure the chatbot’s responses are reliable, accurate, and unbiased.

Working under pressure

These contractors, some making as little as $14 an hour, have come under increasing pressure over the past year as Google and OpenAI compete in an AI arms race.

Tasks have become more complex and workloads have grown, often without contractors having specific expertise in the areas they are asked to review.

An unnamed contractor said:

“As it stands now, people are scared, stressed, underpaid and don’t know what’s going on. And this culture of fear is not helping to achieve the quality and teamwork that you expect from all of us.”

The role of contractors in training AI

The role of the contractors is to review the AI’s responses, identifying errors and eliminating potential biases. They work with complicated instructions and tight deadlines, sometimes as short as three minutes per task.

According to documents provided to Bloomberg, contractors are often asked to decide whether the AI model’s responses contain verifiable evidence. They analyze the responses for factors such as specificity, timeliness of information, and coherence.

An example in the Bloomberg report discusses how an evaluator might use evidence to determine the right dosage for a blood pressure drug called lisinopril.

Contractors must ensure that responses do not contain harmful, offensive, or overly sexual content. They must also guard against inaccurate or misleading information.

Highlighting the human factor behind the AI

While AI chatbots like Bard are considered breakthrough technological advances, the truth is that their effectiveness depends on the work of human contractors.

Laura Edelson, a computer scientist at New York University, tells Bloomberg:

“One should remember that these systems are not the work of sorcerers – they are the work of thousands of people and their poorly paid labor.”

Although the contractors play an important role, their work is largely hidden from view, and they have little direct communication with Google.

Concerns about the quality of AI products

Contractors raise concerns about their working conditions, which they say could affect the quality of AI products.

Contractors are an essential part of training AI, as Ed Stackhouse, a contract worker at Appen, noted in a letter to Congress.

Stackhouse warned that the speed required for content inspection could result in Bard becoming a “flawed” and “dangerous” product.

Google responded to these concerns by stating that it makes extensive efforts to responsibly develop its AI products and employs rigorous testing, training, and feedback processes to ensure factuality and reduce bias.

While the company says it does not rely solely on human raters to improve its AI, the report notes that minor inaccuracies can still creep in and potentially mislead users.

Alex Hanna, research director at the Distributed AI Research Institute and former Google AI ethicist, said:

“It is still worrying that the chatbot misunderstands important facts.”

A call for change

Despite growing concerns about working conditions and the quality of AI products, it is clear that human contractors are an essential part of AI development.

The challenge is to ensure they are adequately remunerated and have the necessary resources to carry out their duties.

Emily Bender, Professor of Computational Linguistics at the University of Washington, underscored this point, saying:

“The work of these contract workers at Google and other technology platforms is a story of labor exploitation.”

As the AI revolution continues, the role of human contractors in shaping and refining these technologies will continue to be critical.

Their voices and concerns need to be heard and considered to ensure the continued development of reliable, accurate, and ethical AI products.


Featured image: Maurice NORBERT/Shutterstock
