Image understanding

You can add images to Gemini requests to perform tasks that involve understanding the contents of the included images. This page shows you how to add images to your requests to Gemini in Vertex AI by using the Google Cloud console and the Vertex AI API.

Supported models

The following models support image understanding:

  • Gemini 1.5 Flash: Maximum images per prompt: 3,000. For details, go to the Gemini 1.5 Flash model card.
  • Gemini 1.5 Pro: Maximum images per prompt: 3,000. For details, go to the Gemini 1.5 Pro model card.
  • Gemini 1.0 Pro Vision: Maximum images per prompt: 16. For details, go to the Gemini 1.0 Pro Vision model card.

For a list of languages supported by Gemini models, see Google models. To learn more about how to design multimodal prompts, see Design multimodal prompts. If you're looking for a way to use Gemini directly from your mobile and web apps, see the Vertex AI in Firebase SDKs for Android, Swift, web, and Flutter apps.

Add images to a request

You can add a single image or multiple images in your request to Gemini.

Single image

The sample code in each of the following tabs shows a different way to identify what's in an image. These samples work with all Gemini multimodal models.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the stream parameter in generate_content.

  response = model.generate_content(contents=[...], stream=True)
  

For a non-streaming response, remove the parameter, or set the parameter to False.
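
For example, here's a minimal sketch (not part of the official sample) that consumes a streaming response chunk by chunk; it assumes the same model and image_file objects that the sample code below creates:

  # Hypothetical streaming variant of the sample below.
  responses = model.generate_content(
      [image_file, "what is this image?"], stream=True
  )
  for chunk in responses:
      # Each chunk is a partial response; print its text as it arrives.
      print(chunk.text, end="")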

Sample code

import vertexai

from vertexai.generative_models import GenerativeModel, Part

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-002")

image_file = Part.from_uri(
    "gs://cloud-samples-data/generative-ai/image/scones.jpg", "image/jpeg"
)

# Query the model
response = model.generate_content([image_file, "what is this image?"])
print(response.text)
# Example response:
# That's a lovely overhead flatlay photograph of blueberry scones.
# The image features:
# * **Several blueberry scones:** These are the main focus,
# arranged on parchment paper with some blueberry juice stains.
# ...

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Java SDK for Gemini reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the generateContentStream method.

  public ResponseStream<GenerateContentResponse> generateContentStream(Content content)
  

For a non-streaming response, use the generateContent method.

  public GenerateContentResponse generateContent(Content content)
  

Sample code

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.ContentMaker;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.PartMaker;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.util.Base64;

public class MultimodalQuery {

  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";
    String dataImageBase64 = "your-base64-encoded-image";

    String output = multimodalQuery(projectId, location, modelName, dataImageBase64);
    System.out.println(output);
  }


  // Ask the model to recognise the brand associated with the logo image.
  public static String multimodalQuery(String projectId, String location, String modelName,
      String dataImageBase64) throws Exception {
    // Initialize client that will be used to send requests. This client only needs
    // to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      String output;
      byte[] imageBytes = Base64.getDecoder().decode(dataImageBase64);

      GenerativeModel model = new GenerativeModel(modelName, vertexAI);
      GenerateContentResponse response = model.generateContent(
          ContentMaker.fromMultiModalData(
              "What is this image?",
              PartMaker.fromMimeTypeAndData("image/png", imageBytes)
          ));

      output = ResponseHandler.getText(response);
      return output;
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Generative AI quickstart using the Node.js SDK. For more information, see the Node.js SDK for Gemini reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the generateContentStream method.

  const streamingResp = await generativeModel.generateContentStream(request);
  

For a non-streaming response, use the generateContent method.

  const streamingResp = await generativeModel.generateContent(request);
  

Sample code

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function createNonStreamingMultipartContent(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001',
  image = 'gs://generativeai-downloads/images/scones.jpg',
  mimeType = 'image/jpeg'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeVisionModel = vertexAI.getGenerativeModel({
    model: model,
  });

  // For images, the SDK supports both Google Cloud Storage URI and base64 strings
  const filePart = {
    fileData: {
      fileUri: image,
      mimeType: mimeType,
    },
  };

  const textPart = {
    text: 'what is shown in this image?',
  };

  const request = {
    contents: [{role: 'user', parts: [filePart, textPart]}],
  };

  console.log('Prompt Text:');
  console.log(request.contents[0].parts[1].text);

  console.log('Non-Streaming Response Text:');

  // Generate a response
  const response = await generativeVisionModel.generateContent(request);

  // Select the text from the response
  const fullTextResponse =
    response.response.candidates[0].content.parts[0].text;

  console.log(fullTextResponse);
}

Go

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Go SDK for Gemini reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the GenerateContentStream method.

  iter := model.GenerateContentStream(ctx, genai.Text("Tell me a story about a lumberjack and his giant ox. Keep it very short."))
  

For a non-streaming response, use the GenerateContent method.

  resp, err := model.GenerateContent(ctx, genai.Text("What is the average size of a swallow?"))
  

Sample code

import (
	"context"
	"encoding/json"
	"fmt"
	"io"

	"cloud.google.com/go/vertexai/genai"
)

func tryGemini(w io.Writer, projectID string, location string, modelName string) error {
	// location := "us-central1"
	// modelName := "gemini-1.5-flash-001"

	ctx := context.Background()
	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("error creating client: %w", err)
	}
	gemini := client.GenerativeModel(modelName)

	img := genai.FileData{
		MIMEType: "image/jpeg",
		FileURI:  "gs://generativeai-downloads/images/scones.jpg",
	}
	prompt := genai.Text("What is in this image?")

	resp, err := gemini.GenerateContent(ctx, img, prompt)
	if err != nil {
		return fmt.Errorf("error generating content: %w", err)
	}
	rb, err := json.MarshalIndent(resp, "", "  ")
	if err != nil {
		return fmt.Errorf("json.MarshalIndent: %w", err)
	}
	fmt.Fprintln(w, string(rb))
	return nil
}

C#

Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI C# reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the StreamGenerateContent method.

  public virtual PredictionServiceClient.StreamGenerateContentStream StreamGenerateContent(GenerateContentRequest request)
  

For a non-streaming response, use the GenerateContentAsync method.

  public virtual Task<GenerateContentResponse> GenerateContentAsync(GenerateContentRequest request)
  

For more information on how the server can stream responses, see Streaming RPCs.

Sample code


using Google.Api.Gax.Grpc;
using Google.Cloud.AIPlatform.V1;
using System.Text;
using System.Threading.Tasks;

public class GeminiQuickstart
{
    public async Task<string> GenerateContent(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.5-flash-001"
    )
    {
        // Create client
        var predictionServiceClient = new PredictionServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();

        // Initialize content request
        var generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            GenerationConfig = new GenerationConfig
            {
                Temperature = 0.4f,
                TopP = 1,
                TopK = 32,
                MaxOutputTokens = 2048
            },
            Contents =
            {
                new Content
                {
                    Role = "USER",
                    Parts =
                    {
                        new Part { Text = "What's in this photo?" },
                        new Part { FileData = new() { MimeType = "image/png", FileUri = "gs://generativeai-downloads/images/scones.jpg" } }
                    }
                }
            }
        };

        // Make the request, returning a streaming response
        using PredictionServiceClient.StreamGenerateContentStream response = predictionServiceClient.StreamGenerateContent(generateContentRequest);

        StringBuilder fullText = new();

        // Read streaming responses from server until complete
        AsyncResponseStream<GenerateContentResponse> responseStream = response.GetResponseStream();
        await foreach (GenerateContentResponse responseItem in responseStream)
        {
            fullText.Append(responseItem.Candidates[0].Content.Parts[0].Text);
        }

        return fullText.ToString();
    }
}

REST

After you set up your environment, you can use REST to test a multimodal prompt. The following sample sends a request to the publisher model endpoint.

You can include images that are stored in Cloud Storage or use base64-encoded image data.

Image in Cloud Storage

Before using any of the request data, make the following replacements:

  • LOCATION: The region to process the request. Enter a supported region. For the full list of supported regions, see Available locations.


    • us-central1
    • us-west4
    • northamerica-northeast1
    • us-east4
    • us-west1
    • asia-northeast3
    • asia-southeast1
    • asia-northeast1
  • PROJECT_ID: Your project ID.
  • FILE_URI: The URI or URL of the file to include in the prompt. Acceptable values include the following:
    • Cloud Storage bucket URI: The object must either be publicly readable or reside in the same Google Cloud project that's sending the request. For gemini-1.5-pro and gemini-1.5-flash, the size limit is 2 GB. For gemini-1.0-pro-vision, the size limit is 20 MB.
    • HTTP URL: The file URL must be publicly readable. You can specify one video file, one audio file, and up to 10 image files per request. Audio files, video files, and documents can't exceed 15 MB.
    • YouTube video URL: The YouTube video must either be owned by the account that you used to sign in to the Google Cloud console or be public. Only one YouTube video URL is supported per request.

    When specifying a fileUri, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileUri is not supported.

    If you don't have an image file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/generative-ai/image/scones.jpg with a mime type of image/jpeg. To view this image, open the sample image file.

  • MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following:


    • application/pdf
    • audio/mpeg
    • audio/mp3
    • audio/wav
    • image/png
    • image/jpeg
    • image/webp
    • text/plain
    • video/mov
    • video/mpeg
    • video/mp4
    • video/mpg
    • video/avi
    • video/wmv
    • video/mpegps
    • video/flv
  • TEXT: The text instructions to include in the prompt. For example, What is shown in this image?

To send your request, choose one of these options:

curl

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

cat > request.json << 'EOF'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "fileData": {
          "fileUri": "FILE_URI",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT"
      }
    ]
  }
}
EOF

Then execute the following command to send your REST request:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent"

PowerShell

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

@'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "fileData": {
          "fileUri": "FILE_URI",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT"
      }
    ]
  }
}
'@  | Out-File -FilePath request.json -Encoding utf8

Then execute the following command to send your REST request:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent" | Select-Object -Expand Content

You should receive a successful JSON response.

Base64 image data

Before using any of the request data, make the following replacements:

  • LOCATION: The region to process the request. Enter a supported region. For the full list of supported regions, see Available locations.


    • us-central1
    • us-west4
    • northamerica-northeast1
    • us-east4
    • us-west1
    • asia-northeast3
    • asia-southeast1
    • asia-northeast1
  • PROJECT_ID: Your project ID.
  • B64_BASE_IMAGE: The base64 encoding of the image, PDF, or video to include inline in the prompt. When including media inline, you must also specify the media type (mimeType) of the data. (A minimal encoding sketch follows this list.)
  • MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following:


    • application/pdf
    • audio/mpeg
    • audio/mp3
    • audio/wav
    • image/png
    • image/jpeg
    • image/webp
    • text/plain
    • video/mov
    • video/mpeg
    • video/mp4
    • video/mpg
    • video/avi
    • video/wmv
    • video/mpegps
    • video/flv
  • TEXT: The text instructions to include in the prompt. For example, What is shown in this image?
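
As a minimal sketch (using a hypothetical local file named scones.jpg), you can produce the B64_BASE_IMAGE value with Python's standard library and paste the result into request.json:

  import base64

  # Hypothetical local file; replace with the path to your own image.
  with open("scones.jpg", "rb") as f:
      b64_image = base64.b64encode(f.read()).decode("utf-8")

  # Use this string as the value of B64_BASE_IMAGE in request.json.
  print(b64_image)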

To send your request, choose one of these options:

curl

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

cat > request.json << 'EOF'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "inlineData": {
          "data": "B64_BASE_IMAGE",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT"
      }
    ]
  }
}
EOF

Then execute the following command to send your REST request:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent"

PowerShell

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

@'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "inlineData": {
          "data": "B64_BASE_IMAGE",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT"
      }
    ]
  }
}
'@  | Out-File -FilePath request.json -Encoding utf8

Then execute the following command to send your REST request:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent" | Select-Object -Expand Content

You should receive a successful JSON response.

Note the following in the URL for this sample:
  • Use the generateContent method to request that the response is returned after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the streamGenerateContent method.
  • The multimodal model ID is located at the end of the URL before the method (for example, gemini-1.5-flash or gemini-1.0-pro-vision). This sample may support other models as well.

Console

To send a multimodal prompt by using the Google Cloud console, do the following:

  1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.

    Go to Vertex AI Studio

  2. Click Open freeform.

  3. Optional: Configure the model and parameters:

    • Model: Select a model.
    • Region: Select the region that you want to use.
    • Temperature: Use the slider or textbox to enter a value for temperature.

      The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

      If the model returns a response that's too generic or too short, or if the model gives a fallback response, try increasing the temperature.

    • Output token limit: Use the slider or textbox to enter a value for the max output limit.

      Maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words.

      Specify a lower value for shorter responses and a higher value for potentially longer responses.

    • Add stop sequence: Optional. Enter a stop sequence, which is a series of characters (including spaces). If the model encounters a stop sequence, response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.

  4. Optional: To configure advanced parameters, click Advanced and configure as follows:


    • Top-K: Use the slider or textbox to enter a value for top-K (not supported for Gemini 1.5).

      Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

      For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P with the final token selected using temperature sampling.

      Specify a lower value for less random responses and a higher value for more random responses.

    • Top-P: Use the slider or textbox to enter a value for top-P. Tokens are selected from most probable to the least until the sum of their probabilities equals the value of top-P. For the least variable results, set top-P to 0.
    • Max responses: Use the slider or textbox to enter a value for the number of responses to generate.
    • Streaming responses: Enable to print responses as they're generated.
    • Safety filter threshold: Select the threshold of how likely you are to see responses that could be harmful.
    • Enable Grounding: Grounding isn't supported for multimodal prompts.

  5. Click Insert Media, and select a source for your file.

    Upload

    Select the file that you want to upload and click Open.

    By URL

    Enter the URL of the file that you want to use and click Insert.

    Cloud Storage

    Select the bucket and then the file from the bucket that you want to import and click Select.

    Google Drive

    1. Choose an account and give consent to Vertex AI Studio to access your account the first time you select this option. You can upload multiple files that have a total size of up to 10 MB. A single file can't exceed 7 MB.
    2. Click the file that you want to add.
    3. Click Select.

      The file thumbnail displays in the Prompt pane. The total number of tokens also displays. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.

  6. Enter your text prompt in the Prompt pane.

  7. Optional: To view the Token ID to text and Token IDs, click the tokens count in the Prompt pane.

  8. Click Submit.

  9. Optional: To save your prompt to My prompts, click Save.

  10. Optional: To get the Python code or a curl command for your prompt, click Get code.

Multiple images

Each of the following tabs shows you a different way to include multiple images in a prompt request. Each sample takes in two sets of the following inputs:

  • An image of a popular city landmark
  • The media type of the image
  • Text indicating the city and landmark in the image

The sample also takes in a third image and media type, but no text. The sample returns a text response indicating the city and landmark in the third image.

These image samples work with all Gemini multimodal models.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the stream parameter in generate_content.

  response = model.generate_content(contents=[...], stream=True)
  

For a non-streaming response, remove the parameter, or set the parameter to False.

Sample code

import vertexai

from vertexai.generative_models import GenerativeModel, Part

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

# Load images from Cloud Storage URI
image_file1 = Part.from_uri(
    "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png",
    mime_type="image/png",
)
image_file2 = Part.from_uri(
    "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark2.png",
    mime_type="image/png",
)
image_file3 = Part.from_uri(
    "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark3.png",
    mime_type="image/png",
)

model = GenerativeModel("gemini-1.5-flash-002")
response = model.generate_content(
    [
        image_file1,
        "city: Rome, Landmark: the Colosseum",
        image_file2,
        "city: Beijing, Landmark: Forbidden City",
        image_file3,
    ]
)
print(response.text)
# Example response:
# city: Rio de Janeiro, Landmark: Christ the Redeemer

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Java SDK for Gemini reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the generateContentStream method.

  public ResponseStream<GenerateContentResponse> generateContentStream(Content content)
  

For a non-streaming response, use the generateContent method.

  public GenerateContentResponse generateContent(Content content)
  

Sample code

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.Content;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.ContentMaker;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.PartMaker;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class MultimodalMultiImage {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    multimodalMultiImage(projectId, location, modelName);
  }

  // Generates content from multiple input images.
  public static void multimodalMultiImage(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests. This client only needs
    // to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      GenerativeModel model = new GenerativeModel(modelName, vertexAI);

      Content content = ContentMaker.fromMultiModalData(
          PartMaker.fromMimeTypeAndData("image/png", readImageFile(
              "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark1.png")),
          "city: Rome, Landmark: the Colosseum",
          PartMaker.fromMimeTypeAndData("image/png", readImageFile(
              "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark2.png")),
          "city: Beijing, Landmark: Forbidden City",
          PartMaker.fromMimeTypeAndData("image/png", readImageFile(
              "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark3.png"))
      );

      GenerateContentResponse response = model.generateContent(content);

      String output = ResponseHandler.getText(response);
      System.out.println(output);
    }
  }

  // Reads the image data from the given URL.
  public static byte[] readImageFile(String url) throws IOException {
    URL urlObj = new URL(url);
    HttpURLConnection connection = (HttpURLConnection) urlObj.openConnection();
    connection.setRequestMethod("GET");

    int responseCode = connection.getResponseCode();

    if (responseCode == HttpURLConnection.HTTP_OK) {
      InputStream inputStream = connection.getInputStream();
      ByteArrayOutputStream outputStream = new ByteArrayOutputStream();

      byte[] buffer = new byte[1024];
      int bytesRead;
      while ((bytesRead = inputStream.read(buffer)) != -1) {
        outputStream.write(buffer, 0, bytesRead);
      }

      return outputStream.toByteArray();
    } else {
      throw new RuntimeException("Error fetching file: " + responseCode);
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Generative AI quickstart using the Node.js SDK. For more information, see the Node.js SDK for Gemini reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the generateContentStream method.

  const streamingResp = await generativeModel.generateContentStream(request);
  

For a non-streaming response, use the generateContent method.

  const streamingResp = await generativeModel.generateContent(request);
  

Sample code

const {VertexAI} = require('@google-cloud/vertexai');
const axios = require('axios');

async function getBase64(url) {
  const image = await axios.get(url, {responseType: 'arraybuffer'});
  return Buffer.from(image.data).toString('base64');
}

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function sendMultiModalPromptWithImage(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001'
) {
  // For images, the SDK supports base64 strings
  const landmarkImage1 = await getBase64(
    'https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark1.png'
  );
  const landmarkImage2 = await getBase64(
    'https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark2.png'
  );
  const landmarkImage3 = await getBase64(
    'https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark3.png'
  );

  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  const generativeVisionModel = vertexAI.getGenerativeModel({
    model: model,
  });

  // Pass multimodal prompt
  const request = {
    contents: [
      {
        role: 'user',
        parts: [
          {
            inlineData: {
              data: landmarkImage1,
              mimeType: 'image/png',
            },
          },
          {
            text: 'city: Rome, Landmark: the Colosseum',
          },

          {
            inlineData: {
              data: landmarkImage2,
              mimeType: 'image/png',
            },
          },
          {
            text: 'city: Beijing, Landmark: Forbidden City',
          },
          {
            inlineData: {
              data: landmarkImage3,
              mimeType: 'image/png',
            },
          },
        ],
      },
    ],
  };

  // Create the response
  const response = await generativeVisionModel.generateContent(request);
  // Wait for the response to complete
  const aggregatedResponse = await response.response;
  // Select the text from the response
  const fullTextResponse =
    aggregatedResponse.candidates[0].content.parts[0].text;

  console.log(fullTextResponse);
}

Go

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Go SDK for Gemini reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the GenerateContentStream method.

  iter := model.GenerateContentStream(ctx, genai.Text("Tell me a story about a lumberjack and his giant ox. Keep it very short."))
  

For a non-streaming response, use the GenerateContent method.

  resp, err := model.GenerateContent(ctx, genai.Text("What is the average size of a swallow?"))
  

Sample code

import (
	"context"
	"fmt"
	"io"
	"mime"
	"path/filepath"

	"cloud.google.com/go/vertexai/genai"
)

// generateMultimodalContent shows how to generate a text from a multimodal prompt using the Gemini model,
// writing the response to the provided io.Writer.
func generateMultimodalContent(w io.Writer, projectID, location, modelName string) error {
	// location := "us-central1"
	// modelName := "gemini-1.5-flash-001"
	ctx := context.Background()

	// create prompt image parts
	colosseum := genai.FileData{
		MIMEType: mime.TypeByExtension(filepath.Ext("landmark1.png")),
		FileURI:  "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png",
	}
	forbiddenCity := genai.FileData{
		MIMEType: mime.TypeByExtension(filepath.Ext("landmark2.png")),
		FileURI:  "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark2.png",
	}
	newImage := genai.FileData{
		MIMEType: mime.TypeByExtension(filepath.Ext("landmark3.png")),
		FileURI:  "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark3.png",
	}
	// create a multimodal (multipart) prompt
	prompt := []genai.Part{
		colosseum,
		genai.Text("city: Rome, Landmark: the Colosseum "),
		forbiddenCity,
		genai.Text("city: Beijing, Landmark: the Forbidden City "),
		newImage,
	}

	// generate the response
	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("unable to create client: %w", err)
	}
	defer client.Close()

	model := client.GenerativeModel(modelName)

	res, err := model.GenerateContent(ctx, prompt...)
	if err != nil {
		return fmt.Errorf("unable to generate contents: %w", err)
	}

	fmt.Fprintf(w, "generated response: %s\n", res.Candidates[0].Content.Parts[0])
	return nil
}

C#

Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI C# reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the StreamGenerateContent method.

  public virtual PredictionServiceClient.StreamGenerateContentStream StreamGenerateContent(GenerateContentRequest request)
  

For a non-streaming response, use the GenerateContentAsync method.

  public virtual Task<GenerateContentResponse> GenerateContentAsync(GenerateContentRequest request)
  

For more information on how the server can stream responses, see Streaming RPCs.

Sample code


using Google.Api.Gax.Grpc;
using Google.Cloud.AIPlatform.V1;
using Google.Protobuf;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class MultimodalMultiImage
{
    public async Task<string> GenerateContent(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.5-flash-001"
    )
    {
        var predictionServiceClient = new PredictionServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();

        ByteString colosseum = await ReadImageFileAsync(
            "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark1.png");

        ByteString forbiddenCity = await ReadImageFileAsync(
            "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark2.png");

        ByteString christRedeemer = await ReadImageFileAsync(
            "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark3.png");

        var generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            Contents =
            {
                new Content
                {
                    Role = "USER",
                    Parts =
                    {
                        new Part { InlineData = new() { MimeType = "image/png", Data = colosseum }},
                        new Part { Text = "city: Rome, Landmark: the Colosseum" },
                        new Part { InlineData = new() { MimeType = "image/png", Data = forbiddenCity }},
                        new Part { Text = "city: Beijing, Landmark: Forbidden City"},
                        new Part { InlineData = new() { MimeType = "image/png", Data = christRedeemer }}
                    }
                }
            }
        };

        using PredictionServiceClient.StreamGenerateContentStream response = predictionServiceClient.StreamGenerateContent(generateContentRequest);

        StringBuilder fullText = new();

        AsyncResponseStream<GenerateContentResponse> responseStream = response.GetResponseStream();
        await foreach (GenerateContentResponse responseItem in responseStream)
        {
            fullText.Append(responseItem.Candidates[0].Content.Parts[0].Text);
        }
        return fullText.ToString();
    }

    private static async Task<ByteString> ReadImageFileAsync(string url)
    {
        using HttpClient client = new();
        using var response = await client.GetAsync(url);
        byte[] imageBytes = await response.Content.ReadAsByteArrayAsync();
        return ByteString.CopyFrom(imageBytes);
    }
}

REST

After you set up your environment, you can use REST to test a multimodal prompt. The following sample sends a request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

  • LOCATION: The region to process the request. Enter a supported region. For the full list of supported regions, see Available locations.


    • us-central1
    • us-west4
    • northamerica-northeast1
    • us-east4
    • us-west1
    • asia-northeast3
    • asia-southeast1
    • asia-northeast1
  • PROJECT_ID: Your project ID.
  • FILE_URI1: The URI or URL of the file to include in the prompt. Acceptable values include the following:
    • Cloud Storage bucket URI: The object must either be publicly readable or reside in the same Google Cloud project that's sending the request. For gemini-1.5-pro and gemini-1.5-flash, the size limit is 2 GB. For gemini-1.0-pro-vision, the size limit is 20 MB.
    • HTTP URL: The file URL must be publicly readable. You can specify one video file, one audio file, and up to 10 image files per request. Audio files, video files, and documents can't exceed 15 MB.
    • YouTube video URL: The YouTube video must either be owned by the account that you used to sign in to the Google Cloud console or be public. Only one YouTube video URL is supported per request.

    When specifying a fileUri, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileUri is not supported.

    If you don't have an image file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png with a mime type of image/png. To view this image, open the sample image file.

  • MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following:


    • application/pdf
    • audio/mpeg
    • audio/mp3
    • audio/wav
    • image/png
    • image/jpeg
    • image/webp
    • text/plain
    • video/mov
    • video/mpeg
    • video/mp4
    • video/mpg
    • video/avi
    • video/wmv
    • video/mpegps
    • video/flv
    For simplicity, this sample uses the same media type for all three input images.
  • TEXT1: The text instructions to include in the prompt. For example, city: Rome, Landmark: the Colosseum
  • FILE_URI2: The URI or URL of the file to include in the prompt. Acceptable values include the following:
    • Cloud Storage bucket URI: The object must either be publicly readable or reside in the same Google Cloud project that's sending the request. For gemini-1.5-pro and gemini-1.5-flash, the size limit is 2 GB. For gemini-1.0-pro-vision, the size limit is 20 MB.
    • HTTP URL: The file URL must be publicly readable. You can specify one video file, one audio file, and up to 10 image files per request. Audio files, video files, and documents can't exceed 15 MB.
    • YouTube video URL: The YouTube video must either be owned by the account that you used to sign in to the Google Cloud console or be public. Only one YouTube video URL is supported per request.

    When specifying a fileUri, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileUri is not supported.

    If you don't have an image file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/vertex-ai/llm/prompts/landmark2.png with a mime type of image/png. To view this image, open the sample image file.

  • TEXT2: The text instructions to include in the prompt. For example, city: Beijing, Landmark: Forbidden City
  • FILE_URI3: The URI or URL of the file to include in the prompt. Acceptable values include the following:
    • Cloud Storage bucket URI: The object must either be publicly readable or reside in the same Google Cloud project that's sending the request. For gemini-1.5-pro and gemini-1.5-flash, the size limit is 2 GB. For gemini-1.0-pro-vision, the size limit is 20 MB.
    • HTTP URL: The file URL must be publicly readable. You can specify one video file, one audio file, and up to 10 image files per request. Audio files, video files, and documents can't exceed 15 MB.
    • YouTube video URL: The YouTube video must either be owned by the account that you used to sign in to the Google Cloud console or be public. Only one YouTube video URL is supported per request.

    When specifying a fileUri, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileUri is not supported.

    If you don't have an image file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/vertex-ai/llm/prompts/landmark3.png with a mime type of image/png. To view this image, open the sample image file.

To send your request, choose one of these options:

curl

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

cat > request.json << 'EOF'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "fileData": {
          "fileUri": "FILE_URI1",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT1"
      },
      {
        "fileData": {
          "fileUri": "FILE_URI2",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT2"
      },
      {
        "fileData": {
          "fileUri": "FILE_URI3",
          "mimeType": "MIME_TYPE"
        }
      }
    ]
  }
}
EOF

Then execute the following command to send your REST request:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent"

PowerShell

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

@'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "fileData": {
          "fileUri": "FILE_URI1",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT1"
      },
      {
        "fileData": {
          "fileUri": "FILE_URI2",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT2"
      },
      {
        "fileData": {
          "fileUri": "FILE_URI3",
          "mimeType": "MIME_TYPE"
        }
      }
    ]
  }
}
'@  | Out-File -FilePath request.json -Encoding utf8

Then execute the following command to send your REST request:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent" | Select-Object -Expand Content

You should receive a successful JSON response.

Note the following in the URL for this sample:
  • Use the generateContent method to request that the response is returned after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the streamGenerateContent method.
  • The multimodal model ID is located at the end of the URL before the method (for example, gemini-1.5-flash or gemini-1.0-pro-vision). This sample may support other models as well.

Console

To send a multimodal prompt by using the Google Cloud console, do the following:

  1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.

    Go to Vertex AI Studio

  2. Click Open freeform.

  3. Optional: Configure the model and parameters:

    • Model: Select a model.
    • Region: Select the region that you want to use.
    • Temperature: Use the slider or textbox to enter a value for temperature.

      The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

      If the model returns a response that's too generic or too short, or if the model gives a fallback response, try increasing the temperature.

    • Output token limit: Use the slider or textbox to enter a value for the max output limit.

      Maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words.

      Specify a lower value for shorter responses and a higher value for potentially longer responses.

    • Add stop sequence: Optional. Enter a stop sequence, which is a series of characters (including spaces). If the model encounters a stop sequence, response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.

  4. Optional: To configure advanced parameters, click Advanced and configure as follows:


    • Top-K: Use the slider or textbox to enter a value for top-K (not supported for Gemini 1.5).

      Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

      For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P with the final token selected using temperature sampling.

      Specify a lower value for less random responses and a higher value for more random responses.

    • Top-P: Use the slider or textbox to enter a value for top-P. Tokens are selected from most probable to the least until the sum of their probabilities equals the value of top-P. For the least variable results, set top-P to 0.
    • Max responses: Use the slider or textbox to enter a value for the number of responses to generate.
    • Streaming responses: Enable to print responses as they're generated.
    • Safety filter threshold: Select the threshold of how likely you are to see responses that could be harmful.
    • Enable Grounding: Grounding isn't supported for multimodal prompts.

  5. Click Insert Media, and select a source for your file.

    Upload

    Select the file that you want to upload and click Open.

    By URL

    Enter the URL of the file that you want to use and click Insert.

    Cloud Storage

    Select the bucket and then the file from the bucket that you want to import and click Select.

    Google Drive

    1. Choose an account and give consent to Vertex AI Studio to access your account the first time you select this option. You can upload multiple files that have a total size of up to 10 MB. A single file can't exceed 7 MB.
    2. Click the file that you want to add.
    3. Click Select.

      The file thumbnail displays in the Prompt pane. The total number of tokens also displays. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.

  6. Enter your text prompt in the Prompt pane.

  7. Optional: To view the Token ID to text and Token IDs, click the tokens count in the Prompt pane.

  8. Click Submit.

  9. Optional: To save your prompt to My prompts, click Save.

  10. Optional: To get the Python code or a curl command for your prompt, click Get code.

Set optional model parameters

Each model has a set of optional parameters that you can set. For more information, see Content generation parameters.
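
For example, here's a minimal Vertex AI SDK for Python sketch that sets a few optional parameters on an image prompt; the values shown are illustrative, not recommendations:

  from vertexai.generative_models import GenerationConfig, GenerativeModel, Part

  # Assumes vertexai.init(project=PROJECT_ID, location="us-central1") was already called.
  model = GenerativeModel("gemini-1.5-flash-002")
  image_file = Part.from_uri(
      "gs://cloud-samples-data/generative-ai/image/scones.jpg", "image/jpeg"
  )
  response = model.generate_content(
      [image_file, "what is this image?"],
      generation_config=GenerationConfig(
          temperature=0.4,
          top_p=1.0,
          max_output_tokens=2048,
      ),
  )
  print(response.text)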

Image requirements

Gemini multimodal models support the following image MIME types:

  • PNG - image/png
  • JPEG - image/jpeg
  • WebP - image/webp

There isn't a specific limit to the number of pixels in an image. However, larger images are scaled down and padded to fit a maximum resolution of 3072 x 3072 while preserving their original aspect ratio.

Here's the maximum number of image files allowed in a prompt request:

  • Gemini 1.0 Pro Vision: 16 images
  • Gemini 1.5 Flash and Gemini 1.5 Pro: 3,000 images

Here's how tokens are calculated for images (a rough estimation sketch follows this list):

  • Gemini 1.0 Pro Vision: Each image accounts for 258 tokens.
  • Gemini 1.5 Flash and Gemini 1.5 Pro:
    • If both dimensions of an image are less than or equal to 384 pixels, then 258 tokens are used.
    • If one dimension of an image is greater than 384 pixels, then the image is cropped into tiles. Each tile size defaults to the smallest dimension (width or height) divided by 1.5. If necessary, each tile is adjusted so that it's not smaller than 256 pixels and not greater than 768 pixels. Each tile is then resized to 768x768 and uses 258 tokens.
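
To make the tiling rule concrete, here's a rough estimation sketch; the tile-count formula (tiles across times tiles down) is an assumption, since the description above doesn't state it explicitly:

  import math

  def estimate_image_tokens(width: int, height: int) -> int:
      """Rough per-image token estimate for Gemini 1.5 models, per the rules above."""
      if width <= 384 and height <= 384:
          return 258
      tile_size = min(width, height) / 1.5
      tile_size = max(256, min(768, tile_size))  # keep tiles between 256 and 768 pixels
      tiles = math.ceil(width / tile_size) * math.ceil(height / tile_size)
      return tiles * 258

  print(estimate_image_tokens(300, 300))    # 258 (both dimensions <= 384 pixels)
  print(estimate_image_tokens(1024, 1024))  # 1032 (2 x 2 tiles at 258 tokens each)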

Best practices

When using images, use the following best practices and information for the best results:

  • If you want to detect text in an image, use prompts with a single image to produce better results than prompts with multiple images.
  • If your prompt contains a single image, place the image before the text prompt in your request.
  • If your prompt contains multiple images, and you want to refer to them later in your prompt or have the model refer to them in the model response, it can help to give each image an index before the image. Use a, b, c or image 1, image 2, image 3 as your index. The following is an example of using indexed images in a prompt:
    image 1 
    image 2 
    image 3 
    
    Write a blogpost about my day using image 1 and image 2. Then, give me ideas
    for tomorrow based on image 3.
  • Use images with higher resolution; they yield better results.
  • Include a few examples in the prompt.
  • Rotate images to their proper orientation before adding them to the prompt.
  • Avoid blurry images.

Limitations

While Gemini multimodal models are powerful in many multimodal use cases, it's important to understand the limitations of the models:

  • Content moderation: The models refuse to provide answers on images that violate our safety policies.
  • Spatial reasoning: The models aren't precise at locating text or objects in images. They might only return approximate counts of objects.
  • Medical uses: The models aren't suitable for interpreting medical images (for example, x-rays and CT scans) or providing medical advice.
  • People recognition: The models aren't meant to be used to identify people who aren't celebrities in images.
  • Accuracy: The models might hallucinate or make mistakes when interpreting low-quality, rotated, or extremely low-resolution images. The models might also hallucinate when interpreting handwritten text in images.

What's next