Today, we’re publishing information on Google’s AI Red Team for the first time.
Last month, we introduced the Secure AI Framework (SAIF), designed to help address risks to AI systems and drive responsible security standards for the technology.
To build on this momentum, today we're publishing a new report exploring one critical capability we deploy to support SAIF: red teaming. We believe red teaming will play a decisive role in preparing every organization for attacks on AI systems, and we look forward to working together to help everyone use AI securely. The report examines our work to stand up a dedicated AI Red Team and covers three important areas: 1) what red teaming means in the context of AI systems and why it matters; 2) what types of attacks AI red teams simulate; and 3) the lessons we've learned that we can share with others.