Code generation

code-bison is the name of the model that supports code generation. It's a foundation model that generates code from a natural language description. The types of content that code-bison can create include functions, web pages, and unit tests. code-bison is supported by the code generation Codey APIs, which are part of the PaLM API family.

To explore this model in the console, see the code-bison model card in the Model Garden.

Use cases

Some common use cases for code generation are the following (a short example follows the list):

  • Unit tests: Use the prompt to request a unit test for a function.

  • Write a function: Pass a problem to the model to get a function that solves that problem.

  • Create a class: Use a prompt to describe the purpose of a class and have code that defines the class returned.
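
For illustration, here's a minimal sketch of those use cases as natural language prompts, using the Vertex AI SDK for Python (shown in full later on this page). It assumes the SDK is installed and that Application Default Credentials with a default project are configured; the prompt wording is only an example, not required phrasing.

from vertexai.preview.language_models import CodeGenerationModel

model = CodeGenerationModel.from_pretrained("code-bison@001")

prompts = [
    # Unit test: request a test for an existing function.
    "Write a Python unit test for a function is_leap_year(year) using unittest.",
    # Write a function: describe the problem to solve.
    "Write a function that returns the nth Fibonacci number.",
    # Create a class: describe the purpose of the class.
    "Create a Python class that represents a bank account with deposit and withdraw methods.",
]

for prompt in prompts:
    response = model.predict(prefix=prompt, max_output_tokens=256)
    print(response.text)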

HTTP request

POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/code-bison:predict

Model versions

To use the latest model version, specify the model name without a version number, for example, code-bison.

To use a stable model version, specify the model version number, for example, code-bison@001. Each stable version is available for six months after the release date of the subsequent stable version.

The following table contains the available stable model versions:

code-bison model Release date
code-bison@001 June 29, 2023

For more information, see Model versions and lifecycle.
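
For example, with the Vertex AI SDK for Python, the version is selected by the model name that's passed to from_pretrained (a minimal sketch):

from vertexai.preview.language_models import CodeGenerationModel

# Latest model version: model name without a version suffix.
latest_model = CodeGenerationModel.from_pretrained("code-bison")

# Stable model version: model name pinned with an @ version suffix.
stable_model = CodeGenerationModel.from_pretrained("code-bison@001")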

Request body

{
  "instances": [
    { "prefix": string }
  ],
  "parameters": {
    "temperature": number,
    "maxOutputTokens": integer,
    "candidateCount": integer,
    "stopSequences": [ string ]
  }
}

The following are the parameters for the code generation model named code-bison. The code-bison model is one of the Codey models. You can use these parameters to help optimize your code generation prompt. For more information, see Code models overview and Create prompts for code generation.

prefix (required)

For code models, prefix represents the beginning of a piece of meaningful programming code or a natural language prompt that describes code to be generated.

Acceptable values: a valid text string

temperature

The temperature is used for sampling during response generation. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest-probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

Acceptable values: 0.0–1.0 (default: 0.2)

maxOutputTokens

Maximum number of tokens that can be generated in the response. A token is approximately four characters; 100 tokens correspond to roughly 60–80 words. Specify a lower value for shorter responses and a higher value for longer responses.

Acceptable values: 1–2048 (default: 1024)

candidateCount (optional)

The number of response variations to return. The candidateCount parameter is not supported when you use the Vertex AI SDK.

Acceptable values: 1–4 (default: 1)

stopSequences (optional)

Specifies a list of strings that tells the model to stop generating text if one of the strings is encountered in the response. If a string appears multiple times in the response, then the response truncates where it's first encountered. The strings are case-sensitive.

For example, if the following is the returned response when stopSequences isn't specified:

public static string reverse(string myString)

Then the returned response with stopSequences set to ["Str", "reverse"] is:

public static string

Acceptable values: a list of strings
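
The truncation behavior of stopSequences can be sketched in plain Python. This is a local illustration of the documented rule, not the service implementation:

def apply_stop_sequences(text: str, stop_sequences: list[str]) -> str:
    """Truncate text at the earliest case-sensitive match of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        index = text.find(stop)  # case-sensitive substring search
        if index != -1:
            cut = min(cut, index)
    return text[:cut]

response = "public static string reverse(string myString)"
# "reverse" is the earliest match; "Str" also matches (inside "myString"),
# but later, so the response truncates at "reverse".
print(apply_stop_sequences(response, ["Str", "reverse"]))
# Output: public static string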

Sample request

REST

To test a code generation prompt by using the Vertex AI API, send a POST request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your project ID.
  • For other fields, see the Request body table.

    HTTP method and URL:

    POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/code-bison:predict

    Request JSON body:

    {
      "instances": [
        { "prefix": "PREFIX" }
      ],
      "parameters": {
        "temperature": TEMPERATURE,
        "maxOutputTokens": MAX_OUTPUT_TOKENS,
        "candidateCount": CANDIDATE_COUNT
      }
    }
    

    To send your request, choose one of these options:

    curl

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "http://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/code-bison:predict"

    PowerShell

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "http://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/code-bison:predict" | Select-Object -Expand Content

    You should receive a JSON response similar to the sample response.

Vertex AI SDK for Python

To learn how to install the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.

from vertexai.preview.language_models import CodeGenerationModel


def generate_a_function(temperature: float = 0.5) -> object:
    """Example of using Code Generation to write a function."""

    # TODO developer - override these parameters as needed:
    parameters = {
        "temperature": temperature,  # Temperature controls the degree of randomness in token selection.
        "max_output_tokens": 256,  # Token limit determines the maximum amount of text output.
    }

    code_generation_model = CodeGenerationModel.from_pretrained("code-bison@001")
    response = code_generation_model.predict(
        prefix="Write a function that checks if a year is a leap year.", **parameters
    )

    print(f"Response from Model: {response.text}")

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';
const aiplatform = require('@google-cloud/aiplatform');

// Imports the Google Cloud Prediction service client
const {PredictionServiceClient} = aiplatform.v1;

// Import the helper module for converting arbitrary protobuf.Value objects.
const {helpers} = aiplatform;

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};
const publisher = 'google';
const model = 'code-bison@001';

// Instantiates a client
const predictionServiceClient = new PredictionServiceClient(clientOptions);

async function callPredict() {
  // Configure the parent resource
  const endpoint = `projects/${project}/locations/${location}/publishers/${publisher}/models/${model}`;

  const prompt = {
    prefix: 'Write a function that checks if a year is a leap year.',
  };
  const instanceValue = helpers.toValue(prompt);
  const instances = [instanceValue];

  const parameter = {
    temperature: 0.5,
    maxOutputTokens: 256,
  };
  const parameters = helpers.toValue(parameter);

  const request = {
    endpoint,
    instances,
    parameters,
  };

  // Predict request
  const [response] = await predictionServiceClient.predict(request);
  console.log('Get code generation response');
  const predictions = response.predictions;
  console.log('\tPredictions :');
  for (const prediction of predictions) {
    console.log(`\t\tPrediction : ${JSON.stringify(prediction)}`);
  }
}

callPredict();

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.aiplatform.v1beta1.EndpointName;
import com.google.cloud.aiplatform.v1beta1.PredictResponse;
import com.google.cloud.aiplatform.v1beta1.PredictionServiceClient;
import com.google.cloud.aiplatform.v1beta1.PredictionServiceSettings;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Value;
import com.google.protobuf.util.JsonFormat;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class PredictCodeGenerationFunctionSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace this variable before running the sample.
    String project = "YOUR_PROJECT_ID";

    // Learn how to create prompts to work with a code model to generate code:
    // https://cloud.google.com/vertex-ai/docs/generative-ai/code/code-generation-prompts
    String instance = "{ \"prefix\": \"Write a function that checks if a year is a leap year.\"}";
    String parameters = "{\n" + "  \"temperature\": 0.5,\n" + "  \"maxOutputTokens\": 256\n" + "}";
    String location = "us-central1";
    String publisher = "google";
    String model = "code-bison@001";

    predictFunction(instance, parameters, project, location, publisher, model);
  }

  // Use Code Generation to generate a code function
  public static void predictFunction(
      String instance,
      String parameters,
      String project,
      String location,
      String publisher,
      String model)
      throws IOException {
    final String endpoint = String.format("%s-aiplatform.googleapis.com:443", location);
    PredictionServiceSettings predictionServiceSettings =
        PredictionServiceSettings.newBuilder().setEndpoint(endpoint).build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (PredictionServiceClient predictionServiceClient =
        PredictionServiceClient.create(predictionServiceSettings)) {
      final EndpointName endpointName =
          EndpointName.ofProjectLocationPublisherModelName(project, location, publisher, model);

      Value instanceValue = stringToValue(instance);
      List<Value> instances = new ArrayList<>();
      instances.add(instanceValue);

      Value parameterValue = stringToValue(parameters);

      PredictResponse predictResponse =
          predictionServiceClient.predict(endpointName, instances, parameterValue);
      System.out.println("Predict Response");
      System.out.println(predictResponse);
    }
  }

  // Convert a Json string to a protobuf.Value
  static Value stringToValue(String value) throws InvalidProtocolBufferException {
    Value.Builder builder = Value.newBuilder();
    JsonFormat.parser().merge(value, builder);
    return builder.build();
  }
}

Response body

{
  "predictions": [
    {
      "content": string,
      "score": float,
      "citationMetadata": {
        "citations": [
          {
            "startIndex": integer,
            "endIndex": integer,
            "url": string,
            "title": string,
            "license": string,
            "publicationDate": string
          }
        ]
      },
      "safetyAttributes":{
        "categories": [],
        "blocked": false,
        "scores": []
      },
      "score": float
    }
  ]
}
Response element Description
blocked A boolean flag associated with a safety attribute that indicates if the model's input or output was blocked.
categories A list of the safety attribute category names that are associated with the generated content. The order of the scores in the scores parameter matches the order of the categories. For example, the first score in the scores parameter indicates the likelihood that the response violates the first category in the categories list.
citationMetadata An element that contains an array of citations.
citations An array of citations. Each citation contains its metadata.
content The result generated by the model using the input text.
endIndex An integer that specifies where a citation ends in the content.
license The license associated with a citation.
publicationDate The date a citation was published. Its valid formats are YYYY, YYYY-MM, and YYYY-MM-DD.
safetyAttributes An array of safety attributes. The array contains one safety attribute for each response candidate.
score A float value that's less than zero. The higher the value for score, the greater confidence the model has in its response.
scores An array of float values. Each value is a score that indicates the likelihood that the response violates the safety category it's checked against. The lower the value, the safer the model considers the response. The order of the scores in the array corresponds to the order of the safety attributes in the categories response element.
startIndex An integer that specifies where a citation starts in the content.
title The title of a citation source. Examples of source titles might be that of a news article or a book.
url The URL of a citation source. Examples of a URL source might be a news website or a GitHub repository.

Sample response

{
  "predictions": [
    {
      "citationMetadata": {
        "citations": []
      },
      "safetyAttributes": {
        "scores": [],
        "categories": [],
        "blocked": false
      },
      "content": "CONTENT",
      "score": -1.1161688566207886
    }
  ]
}
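
As an illustration, a response like the one above can be consumed with a few lines of Python. The field names follow the Response body table; the JSON string here is just the sample payload:

import json

raw = """
{
  "predictions": [
    {
      "citationMetadata": {"citations": []},
      "safetyAttributes": {"scores": [], "categories": [], "blocked": false},
      "content": "CONTENT",
      "score": -1.1161688566207886
    }
  ]
}
"""

response = json.loads(raw)
for prediction in response["predictions"]:
    if not prediction["safetyAttributes"]["blocked"]:
        print(prediction["content"], prediction["score"])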

Stream response from Generative AI models

The parameters are the same for streaming and non-streaming requests to the APIs.

To view sample code requests and responses using the REST API, see Examples using the streaming REST API.

To view sample code requests and responses using the Vertex AI SDK for Python, see Examples using Vertex AI SDK for Python for streaming.
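
For example, with the Vertex AI SDK for Python, a streaming call takes the same parameters as a non-streaming predict call. This sketch assumes an SDK version that provides CodeGenerationModel.predict_streaming:

from vertexai.preview.language_models import CodeGenerationModel

code_generation_model = CodeGenerationModel.from_pretrained("code-bison@001")

# Same parameters as the non-streaming call; the response arrives as an
# iterable of partial results instead of a single response object.
for chunk in code_generation_model.predict_streaming(
    prefix="Write a function that checks if a year is a leap year.",
    temperature=0.2,
    max_output_tokens=256,
):
    print(chunk.text, end="")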