07/12/2024

OpenAI Promised to Make its AI Safe

Employees say it ‘failed’ its first test

Last summer, artificial intelligence (AI) powerhouse OpenAI promised the White House it would rigorously safety test new versions of its groundbreaking technology to make sure the AI wouldn't inflict damage — like teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks.

But this spring, some members of OpenAI’s safety team felt pressured to speed through a new testing protocol, designed to prevent the technology from causing catastrophic harm, to meet a May launch date set by OpenAI’s leaders, according to three people familiar with the matter who spoke on the condition of anonymity for fear of retaliation.

Even before testing began on the model, GPT-4 Omni, OpenAI invited employees to celebrate the product, which would power ChatGPT, with a party at one of the company’s San Francisco offices.

Read the complete article from The Washington Post.