
Sam Altman is hiring someone to worry about the risks of AI


OpenAI is hiring a Head of Preparedness. In other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position by acknowledging that the rapid improvement of AI models poses “some real challenges.” The post goes on to specifically call out the potential impact on people’s mental health and the dangers of AI-powered cyber weapons.

The job listing says the person in the role would be responsible for:

“Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”

Altman also says that, looking forward, this person would be responsible for executing the company’s “preparedness framework,” securing AI models ahead of the release of “biological capabilities,” and even setting guardrails for self-improving systems. He also states that it will be a “stressful job,” which seems like an understatement.


