OpenAI Committee to Lead AI Safety and Security Efforts

OpenAI forms an independent committee to oversee AI safety and security processes, with increased transparency and government collaboration.

Microsoft-backed OpenAI announced on Monday that its newly formed Safety and Security Committee will independently oversee the safety and security processes governing the development and deployment of the company's AI models. The announcement coincides with the first public release of the committee's recommendations.

OpenAI, the company behind the widely known ChatGPT, established the committee in May to assess and strengthen its existing safety protocols. The launch of ChatGPT in late 2022 ignited widespread interest in AI, along with debate over ethical concerns and potential bias in AI systems.

In line with the committee's suggestions, OpenAI is exploring the creation of an "Information Sharing and Analysis Center" for the AI sector, aimed at sharing cybersecurity and threat intelligence within the industry.

The committee will be chaired by Zico Kolter, a professor and director of Carnegie Mellon University's machine learning department and a member of OpenAI's board. Additionally, OpenAI plans to increase transparency about the capabilities and risks of its AI models.

Last month, OpenAI signed a significant agreement with the U.S. government to support research, testing, and evaluation of its AI technologies.
