The world's leading AI companies have "unacceptable" levels of risk management, and a "striking lack of commitment to many areas of safety," according to two new studies published Thursday.
The risks of even today's AI—by the admission of many top companies themselves—could include AI helping bad actors carry out cyberattacks or create bioweapons. Future AI models, top scientists worry, could escape human control altogether.
The studies were carried out by the nonprofits SaferAI and the Future of Life Institute (FLI). Each was the second of its kind, in what the groups hope will be a running series that incentivizes top AI companies to improve their practices.