
To securely build AI on Google Cloud, follow these best practices [infographic]

March 28, 2024
Anton Chuvakin

Security Advisor, Office of the CISO, Google Cloud


Cloud providers play a vital role in the rising use and deployment of AI systems. Generative AI foundation models generally improve when they're trained on larger sets of data, and cloud providers let organizations store and process more data, as well as serve apps at scale that use that data.

As with any emerging technology, it’s crucial to address the unique threats that target AI alongside the risks facing any cloud app. We offer our recommendations in Best Practices for Securely Deploying AI on Google Cloud, a new research report published today.

At the heart of our analysis is how we drive requirements and recommendations for securing AI workloads on Google Cloud. We explain the security capabilities that Google provides, and the steps that we advise customers to take. We review essential security domains, address model-specific concerns including prompt injection, and outline proactive measures for future-proofing AI governance and resilience.
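As a generic illustration of the kind of model-specific concern mentioned above (this sketch is not taken from the report, and the function names are hypothetical), one basic layer of defense against prompt injection is to keep trusted instructions clearly separated from untrusted input and to screen for obvious override attempts before building a prompt:

```python
import re

# Illustrative only: naive patterns that often appear in prompt-injection
# attempts. A denylist alone is easy to bypass; layered defenses such as
# output filtering, least-privilege tool access, and human review are
# what frameworks like SAIF emphasize.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (the |your )?system prompt", re.I),
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the untrusted input matches a known-bad pattern."""
    return any(p.search(user_input) for p in SUSPICIOUS_PATTERNS)

def build_prompt(system_instructions: str, user_input: str) -> str:
    """Keep trusted instructions and untrusted data clearly separated."""
    if flag_prompt_injection(user_input):
        raise ValueError("possible prompt injection detected")
    # Delimit untrusted content so the model can distinguish it from
    # the trusted instructions above it.
    return (
        f"{system_instructions}\n\n"
        f"<untrusted_input>\n{user_input}\n</untrusted_input>"
    )
```

This is a sketch, not a complete defense: pattern matching catches only the crudest attacks, which is why the report's layered, framework-driven approach matters.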

“As the world focuses on the potential of AI — and governments and industry work on a regulatory approach to ensure AI is safe and secure — we believe that AI represents an inflection point for digital security,” we wrote in the report.

We’ve rooted our practical guidance in the Secure AI Framework (SAIF) and our understanding of how to secure AI systems. As part of the report, we’ve provided a checklist to help security and business leaders more readily achieve their AI security goals.

Make a list, check it twice

Building secure AI technology is similar to building other secure software products. Both should follow best practices designed to help developers achieve their goals faster, more efficiently, and with fewer mistakes that can turn into risks further down the development chain.

For each subject category in the graphic below, we offer best practices for model development, application security, infrastructure, and data management.

Learn more

In this report, we provide you with more detail on how to build AI securely on Google Cloud while mitigating the risks inherent in the AI development process. You can read the full report here.

You can also check out some of our AI-focused security sessions, including our security spotlight session at Google Cloud Next in April.

[Infographic: AI security best practices — http://storage.googleapis.com/gweb-cloudblog-publish/images/FINAL_Ai_Security-Best-Practices_Infograph.max-1700x1700.png]