Artificial Intelligence Leaders Partner with Cloud Security Alliance to Launch the AI Safety Initiative

A program of responsible, safe, and forward-looking research, best practices, education, professional credentialing, and organizational certification for generative AI is underway.

SEATTLE: The Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining standards, certifications, and best practices to help ensure a secure cloud computing environment, today announced the launch of the AI Safety Initiative in partnership with Amazon, Anthropic, Google, Microsoft, and OpenAI. The group is joined by a broad coalition of experts from the Cybersecurity & Infrastructure Security Agency (CISA), other government bodies, academia, and a wide swath of industries, making it the largest initiative by participation in CSA’s 14-year history.

The AI Safety Initiative is dedicated to crafting and openly sharing reliable guidelines for AI safety and security, concentrating initially on generative AI. It aims to equip customers of all sizes with the tools, templates, and know-how to deploy AI in a safe, ethical, and compliant manner. By complementing government regulations with agile industry standards, the initiative bridges the gap between policy and practice. It is actively developing practical safeguards for today’s generative AI, structured to help organizations prepare for the far more powerful AI systems of the future. Its goal is to reduce risks and amplify the positive impact of AI across all sectors.

“Generative AI is reshaping our world, offering immense promise but also immense risks. Uniting to share knowledge and best practices is crucial. The collaborative spirit of leaders crossing competitive boundaries to educate and implement best practices has enabled us to build the best recommendations for the industry,” said Caleb Sima, industry veteran and Chair of the Cloud Security Alliance AI Safety Initiative.

The AI Safety Initiative has begun meetings of its core research working groups:

  • AI Technology and Risk Working Group
  • AI Governance & Compliance Working Group
  • AI Controls Working Group
  • AI Organizational Responsibilities Working Group
