
Game-changing assets: Making concept art with Google Cloud's generative AI

May 16, 2024
Krishna Chytanya Ayyagari

Gen AI Field Solutions Architect


Developing games is unique in that it requires a wide variety of media assets, such as 2D images, 3D models, audio, and video, to come together in a development environment. However, small game teams, such as those just getting started or “indie” teams, rarely have enough people to create that variety and volume of assets. The resulting shortage can become a bottleneck that throttles the entire game development team.

In this blog, we demonstrate how easily game developers can deploy generative AI services on Google Cloud, showcase the tooling available in Model Garden on Vertex AI (including partner integrations like Hugging Face and Civitai), and highlight its potential for scaling game-asset creation.

Solution

Google Cloud offers a diverse range of generative AI models, accessible to users for various use cases. This solution focuses on how game development teams can harness the capabilities of Model Garden on Vertex AI, which incorporates partner integrations such as Hugging Face and Civitai. 

Many artists run these models on their local machines, for example Stable Diffusion through a local instance of the AUTOMATIC1111 web UI. However, given the cost of high-end GPUs, not everyone has access to the hardware required to do so. Running these models in the cloud therefore provides access to the necessary compute without an upfront investment in high-end hardware.

Our primary objective is to explore how these tools can streamline and scale game-asset creation.

Concept or pre-production assets

Assets are the visual and audio elements that make up a game's world. They have a significant impact on the player's experience, contributing to the creation of a realistic and immersive environment. There are many different types of game assets, including:

  • 2D and 3D models

  • Textures

  • Animations

  • Sounds and music

Here's the typical life journey of a 3D game asset, such as a character:

  • Concept art: Initial design of the asset

  • 3D modeling: Creation of a three-dimensional model of the asset

  • Texturing: Adding color and detail to the model in alignment with the game's style

  • Animation: Bringing movement to the asset (if applicable)

  • Sound effects: Adding audio elements to enhance the asset

  • Import to game engine: Integration of the asset into the game engine that powers the gameplay

Generative AI can streamline the asset-creation process by generating initial designs, 3D models, and high-quality textures tailored to the game's style. In this way, game artists can quickly provide assets that unblock the rest of the game team in the short term, while freeing themselves to focus on long-term goals like art direction and finalized assets.

Read on to learn how to accomplish the first step of game asset creation – generating concept art – on Google Cloud using Vertex AI and Model Garden with Stable Diffusion. We'll cover how to access and download popular LoRA (Low-Rank Adaptation) adapters from Hugging Face or Civitai, and serve them alongside the stabilityai/stable-diffusion-xl-base-1.0 model (from Model Garden) on Vertex AI for online prediction. The resulting concept art images will be stored in a Google Cloud Storage bucket for easy access and further refinement by artists.

Infrastructure setup

1. Prerequisites:
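At a minimum you'll need a Google Cloud project with billing enabled and the relevant APIs turned on. As a hedged sketch, enabling the core services from a notebook cell might look like this (the exact API list is an assumption; adjust it to your project):

```python
# Run in a Colab Enterprise cell. The service list below is an assumption;
# add or remove APIs to match what your project actually needs.
!gcloud services enable aiplatform.googleapis.com compute.googleapis.com storage.googleapis.com
```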

2. Storage and authentication:

Create a Cloud Storage bucket for the generated images and a service account with access to Vertex AI and that bucket. We'll use this service account with our Python notebooks for model creation and storage management.
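A rough sketch of this step from a notebook cell follows; the service account name, bucket name, region, and role set are assumptions, so adjust them to your project:

```python
# Hypothetical names -- replace with your own project, service account, and bucket.
PROJECT_ID = "your-project-id"

# Service account the notebooks will run as for model deployment and storage access.
!gcloud iam service-accounts create genai-concept-art --project={PROJECT_ID} --display-name="Concept art notebooks"

# Assumed role set: Vertex AI user plus object admin on Cloud Storage.
!gcloud projects add-iam-policy-binding {PROJECT_ID} --member="serviceAccount:genai-concept-art@{PROJECT_ID}.iam.gserviceaccount.com" --role="roles/aiplatform.user"
!gcloud projects add-iam-policy-binding {PROJECT_ID} --member="serviceAccount:genai-concept-art@{PROJECT_ID}.iam.gserviceaccount.com" --role="roles/storage.objectAdmin"

# Bucket that will hold the generated concept art.
!gcloud storage buckets create gs://{PROJECT_ID}-concept-art --location=us-central1
```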

3. Colab Enterprise setup:

4. Running your notebooks:

  • Connecting notebooks: Once you've uploaded the notebooks, make sure they are connected to the runtime you created in step 3 above so that they have access to the resources needed for execution.

  • Cloud NAT: If your runtime environment requires internet access to download packages, you can create a Cloud NAT following these instructions.
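For example, a minimal Cloud NAT setup for a runtime on the default network might look like the cell below (the router and NAT names and the region are assumptions):

```python
# Hypothetical router and NAT gateway names -- adjust the region and network to your runtime.
REGION = "us-central1"

# Cloud Router that the NAT gateway attaches to.
!gcloud compute routers create genai-router --network=default --region={REGION}

# NAT gateway so a private runtime can reach package registries, Hugging Face, and Civitai.
!gcloud compute routers nats create genai-nat --router=genai-router --region={REGION} --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```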

This completes the infrastructure setup. You're ready to run your Jupyter notebooks to deploy a LoRA model with stabilityai/stable-diffusion-xl-base-1.0 on a Vertex AI prediction endpoint.

Execution

Upon successful execution of all the above steps, you should see three Jupyter notebook files in Colab Enterprise, as follows:

[Image: the three Jupyter notebooks listed in Colab Enterprise]

1. Create_mg_pytorch_sdxl_lora.ipynb

  • This notebook contains steps to download popular LoRA (Low-Rank Adaptation) adapters from either huggingface.co or civitai.com. It then serves the adapter alongside the stabilityai/stable-diffusion-xl-base-1.0 model on Vertex AI for online prediction.

  • In this notebook, set the following variables to begin:

    • HUGGINGFACE_MODE: If enabled, the LoRA will be downloaded from Hugging Face. Otherwise, it will be downloaded from Civitai.
[Image: notebook variables, including HUGGINGFACE_MODE]
  • Upon successful execution, this notebook will print "Model ID" and "Endpoint ID." Save these values for use in the following notebooks.

  • If HUGGINGFACE_MODE is unchecked or disabled, ensure you update the Civitai variables within the notebook.

[Image: Civitai variables in the notebook]
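While the notebook handles this for you, the deployment conceptually boils down to downloading a LoRA adapter, staging it, and serving it next to the SDXL base model on a Vertex AI endpoint. Here's a simplified sketch using the Vertex AI SDK and huggingface_hub; the serving container image URI, environment variable names, LoRA repo, and machine/accelerator choices are assumptions rather than the notebook's exact values:

```python
from google.cloud import aiplatform, storage
from huggingface_hub import hf_hub_download

PROJECT_ID = "your-project-id"                  # placeholder
BUCKET_NAME = f"{PROJECT_ID}-concept-art"       # placeholder bucket from the setup step

aiplatform.init(project=PROJECT_ID, location="us-central1",
                staging_bucket=f"gs://{BUCKET_NAME}")

# Download a LoRA adapter from Hugging Face (repo and filename are placeholders)
# and stage it in Cloud Storage so the serving container can load it.
lora_path = hf_hub_download(repo_id="some-user/some-sdxl-lora",
                            filename="adapter.safetensors")
storage.Client(project=PROJECT_ID).bucket(BUCKET_NAME) \
    .blob("loras/adapter.safetensors").upload_from_filename(lora_path)

# Register SDXL base + LoRA behind a serving container. SERVE_IMAGE and the
# environment variable names are assumptions; the notebook uses the Model Garden
# PyTorch/diffusers serving container and its documented settings.
SERVE_IMAGE = "us-docker.pkg.dev/your-registry/pytorch-diffusers-serve:latest"
model = aiplatform.Model.upload(
    display_name="sdxl-base-with-lora",
    serving_container_image_uri=SERVE_IMAGE,
    serving_container_environment_variables={
        "MODEL_ID": "stabilityai/stable-diffusion-xl-base-1.0",
        "LORA_ID": f"gs://{BUCKET_NAME}/loras/adapter.safetensors",
    },
)

# Deploy to an online prediction endpoint on a GPU-backed machine.
endpoint = model.deploy(machine_type="g2-standard-8",
                        accelerator_type="NVIDIA_L4",
                        accelerator_count=1)

print("Model ID:", model.resource_name)
print("Endpoint ID:", endpoint.resource_name)
```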

2. GenerateGameAssets.ipynb

  • This notebook contains code to convert text to images. Set the following variables to begin:

    • ENDPOINT_ID: Obtained from successful execution of "1.Create_mg_pytorch_sdxl_lora.ipynb".
[Image: notebook variables, including ENDPOINT_ID]
  • Update the prompts in the notebook as needed.
[Image: prompts section of the notebook]
  • Upon successful execution, you should see the following results:

    • Concept art images will be uploaded to your configured GCS storage bucket.

    • Images will be displayed for reference.

[Image: generated concept art results]
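Under the hood, text-to-image generation is an online prediction call against the endpoint, followed by writing the decoded images to Cloud Storage. A minimal sketch follows; the request and response field names (such as "prompt" and base64-encoded predictions) are assumptions about the serving container's schema, and the IDs and bucket are placeholders:

```python
import base64
from google.cloud import aiplatform, storage

PROJECT_ID = "your-project-id"              # placeholder
ENDPOINT_ID = "1234567890"                  # from notebook 1
BUCKET_NAME = f"{PROJECT_ID}-concept-art"   # placeholder

aiplatform.init(project=PROJECT_ID, location="us-central1")
endpoint = aiplatform.Endpoint(ENDPOINT_ID)

# Prompt for a piece of concept art; tune these the same way you would in the notebook.
instances = [{
    "prompt": "concept art of an ancient forest guardian, painterly, dramatic lighting",
    "negative_prompt": "blurry, low quality",
    "num_inference_steps": 30,
}]
response = endpoint.predict(instances=instances)

# Assumes the container returns base64-encoded PNGs in response.predictions;
# check your serving container's actual response schema.
bucket = storage.Client(project=PROJECT_ID).bucket(BUCKET_NAME)
for i, prediction in enumerate(response.predictions):
    image_bytes = base64.b64decode(prediction)
    blob_name = f"concept-art/guardian_{i:02d}.png"
    bucket.blob(blob_name).upload_from_string(image_bytes, content_type="image/png")
    print(f"Uploaded gs://{BUCKET_NAME}/{blob_name}")
```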

3. CleanupCloudResources.ipynb

  • Execute this notebook to clean up resources, including the endpoint and model.

  • Before executing, set the following variables:

    • MODEL_ID and ENDPOINT_ID: Obtained from successful execution of "1.Create_mg_pytorch_sdxl_lora.ipynb".
[Image: notebook variables, including MODEL_ID and ENDPOINT_ID]
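If you ever need to tear these resources down outside the notebook, the same cleanup can be done with a few Vertex AI SDK calls (the IDs below are placeholders):

```python
from google.cloud import aiplatform

PROJECT_ID = "your-project-id"   # placeholder
MODEL_ID = "1234567890"          # from notebook 1
ENDPOINT_ID = "0987654321"       # from notebook 1

aiplatform.init(project=PROJECT_ID, location="us-central1")

# Undeploy everything from the endpoint, then delete the endpoint and the model
# so you stop paying for the serving GPU.
endpoint = aiplatform.Endpoint(ENDPOINT_ID)
endpoint.undeploy_all()
endpoint.delete()

aiplatform.Model(MODEL_ID).delete()
```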

Congratulations! You've successfully deployed the stabilityai/stable-diffusion-xl-base-1.0 model from Model Garden on Vertex AI, generated concept art for your games, and responsibly deleted models and endpoints to manage costs.

Final thoughts

Integrating Stable Diffusion-generated images into a game requires careful planning:

  • Legal rights: Ensure you have the necessary permissions to use generated images. Always consult a legal professional if you have any questions about image usage rights.

  • Customization: Edit and refine the images to match your game's style and technical needs.

  • Optimization: Optimize images for in-game performance and smooth integration into your game engine.

  • Testing: Thoroughly test for quality and performance after incorporating the assets.

  • Ethics and compliance: Prioritize ethical considerations and legal compliance throughout the entire process.

  • Documentation and feedback: Maintain detailed records, backups, and be responsive to player feedback after your game's release.

References

  1. Explore AI models in Model Garden: https://cloud.google.com/vertex-ai/docs/start/explore-models

  2. Your guide to generative AI support in Vertex AI: https://cloud.google.com/blog/products/ai-machine-learning/vertex-ai-model-garden-and-generative-ai-studio
