Gemini can generate and process images conversationally. You can prompt Gemini with text, images, or a combination of both to achieve various image-related tasks, such as image generation and editing. All generated images include a SynthID watermark.
Image generation may not be available in all regions and countries. Review our Gemini models page for more information.
Image generation (text-to-image)
The following code demonstrates how to generate an image based on a descriptive prompt. You must include responseModalities: ["TEXT", "IMAGE"] in your configuration. Image-only output is not supported with these models.
Python
from google import genai
from google.genai import types
from PIL import Image
from io import BytesIO

client = genai.Client()

contents = ('Hi, can you create a 3d rendered image of a pig '
            'with wings and a top hat flying over a happy '
            'futuristic scifi city with lots of greenery?')

response = client.models.generate_content(
    model="gemini-2.0-flash-preview-image-generation",
    contents=contents,
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE']
    )
)

for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    elif part.inline_data is not None:
        image = Image.open(BytesIO(part.inline_data.data))
        image.save('gemini-native-image.png')
        image.show()
JavaScript
import { GoogleGenAI, Modality } from "@google/genai";
import * as fs from "node:fs";

async function main() {
  const ai = new GoogleGenAI({});

  const contents =
    "Hi, can you create a 3d rendered image of a pig " +
    "with wings and a top hat flying over a happy " +
    "futuristic scifi city with lots of greenery?";

  // Set responseModalities to include "Image" so the model can generate an image
  const response = await ai.models.generateContent({
    model: "gemini-2.0-flash-preview-image-generation",
    contents: contents,
    config: {
      responseModalities: [Modality.TEXT, Modality.IMAGE],
    },
  });
  for (const part of response.candidates[0].content.parts) {
    // Based on the part type, either show the text or save the image
    if (part.text) {
      console.log(part.text);
    } else if (part.inlineData) {
      const imageData = part.inlineData.data;
      const buffer = Buffer.from(imageData, "base64");
      fs.writeFileSync("gemini-native-image.png", buffer);
      console.log("Image saved as gemini-native-image.png");
    }
  }
}

main();
Go
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "google.golang.org/genai"
)

func main() {
    ctx := context.Background()
    client, err := genai.NewClient(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }

    config := &genai.GenerateContentConfig{
        ResponseModalities: []string{"TEXT", "IMAGE"},
    }

    result, err := client.Models.GenerateContent(
        ctx,
        "gemini-2.0-flash-preview-image-generation",
        genai.Text("Hi, can you create a 3d rendered image of a pig "+
            "with wings and a top hat flying over a happy "+
            "futuristic scifi city with lots of greenery?"),
        config,
    )
    if err != nil {
        log.Fatal(err)
    }

    for _, part := range result.Candidates[0].Content.Parts {
        if part.Text != "" {
            fmt.Println(part.Text)
        } else if part.InlineData != nil {
            imageBytes := part.InlineData.Data
            outputFilename := "gemini_generated_image.png"
            _ = os.WriteFile(outputFilename, imageBytes, 0644)
        }
    }
}
REST
curl -s -X POST \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-preview-image-generation:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [{
      "parts": [
        {"text": "Hi, can you create a 3d rendered image of a pig with wings and a top hat flying over a happy futuristic scifi city with lots of greenery?"}
      ]
    }],
    "generationConfig": {"responseModalities": ["TEXT", "IMAGE"]}
  }' \
  | grep -o '"data": "[^"]*"' \
  | cut -d'"' -f4 \
  | base64 --decode > gemini-native-image.png

Image editing (text-and-image-to-image)
To perform image editing, add an image as input. The following examples demonstrate uploading base64-encoded images. For multiple images and larger payloads, check the image input section; a Files API sketch follows the examples below.
Python
from google import genai
from google.genai import types
from PIL import Image
from io import BytesIO

image = Image.open('/path/to/image.png')

client = genai.Client()

text_input = ('Hi, This is a picture of me. '
              'Can you add a llama next to me?')

response = client.models.generate_content(
    model="gemini-2.0-flash-preview-image-generation",
    contents=[text_input, image],
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE']
    )
)
for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    elif part.inline_data is not None:
        image = Image.open(BytesIO(part.inline_data.data))
        image.show()
JavaScript
import { GoogleGenAI, Modality } from "@google/genai";
import * as fs from "node:fs";

async function main() {
  const ai = new GoogleGenAI({});

  // Load the image from the local file system
  const imagePath = "path/to/image.png";
  const imageData = fs.readFileSync(imagePath);
  const base64Image = imageData.toString("base64");

  // Prepare the content parts
  const contents = [
    { text: "Can you add a llama next to the image?" },
    {
      inlineData: {
        mimeType: "image/png",
        data: base64Image,
      },
    },
  ];

  // Set responseModalities to include "Image" so the model can generate an image
  const response = await ai.models.generateContent({
    model: "gemini-2.0-flash-preview-image-generation",
    contents: contents,
    config: {
      responseModalities: [Modality.TEXT, Modality.IMAGE],
    },
  });
  for (const part of response.candidates[0].content.parts) {
    // Based on the part type, either show the text or save the image
    if (part.text) {
      console.log(part.text);
    } else if (part.inlineData) {
      const imageData = part.inlineData.data;
      const buffer = Buffer.from(imageData, "base64");
      fs.writeFileSync("gemini-native-image.png", buffer);
      console.log("Image saved as gemini-native-image.png");
    }
  }
}

main();
Go
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "google.golang.org/genai"
)

func main() {
    ctx := context.Background()
    client, err := genai.NewClient(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }

    imagePath := "/path/to/image.png"
    imgData, err := os.ReadFile(imagePath)
    if err != nil {
        log.Fatal(err)
    }

    parts := []*genai.Part{
        genai.NewPartFromText("Hi, This is a picture of me. Can you add a llama next to me?"),
        &genai.Part{
            InlineData: &genai.Blob{
                MIMEType: "image/png",
                Data:     imgData,
            },
        },
    }

    contents := []*genai.Content{
        genai.NewContentFromParts(parts, genai.RoleUser),
    }

    config := &genai.GenerateContentConfig{
        ResponseModalities: []string{"TEXT", "IMAGE"},
    }

    result, err := client.Models.GenerateContent(
        ctx,
        "gemini-2.0-flash-preview-image-generation",
        contents,
        config,
    )
    if err != nil {
        log.Fatal(err)
    }

    for _, part := range result.Candidates[0].Content.Parts {
        if part.Text != "" {
            fmt.Println(part.Text)
        } else if part.InlineData != nil {
            imageBytes := part.InlineData.Data
            outputFilename := "gemini_generated_image.png"
            _ = os.WriteFile(outputFilename, imageBytes, 0644)
        }
    }
}
REST
IMG_PATH=/path/to/your/image1.jpeg

if [[ "$(base64 --version 2>&1)" = *"FreeBSD"* ]]; then
  B64FLAGS="--input"
else
  B64FLAGS="-w0"
fi

IMG_BASE64=$(base64 "$B64FLAGS" "$IMG_PATH" 2>&1)

curl -X POST \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-preview-image-generation:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H 'Content-Type: application/json' \
  -d "{
    \"contents\": [{
      \"parts\": [
        {\"text\": \"Hi, This is a picture of me. Can you add a llama next to me?\"},
        {
          \"inline_data\": {
            \"mime_type\": \"image/jpeg\",
            \"data\": \"$IMG_BASE64\"
          }
        }
      ]
    }],
    \"generationConfig\": {\"responseModalities\": [\"TEXT\", \"IMAGE\"]}
  }" \
  | grep -o '"data": "[^"]*"' \
  | cut -d'"' -f4 \
  | base64 --decode > gemini-edited-image.png
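
The examples above inline the image as base64 data, which works for small payloads but runs into request-size limits. As a minimal sketch of the Files API alternative mentioned at the start of this section, the following assumes the Python SDK's client.files.upload helper works with this model; the path and prompt are illustrative.
Python
from google import genai
from google.genai import types

client = genai.Client()

# Upload the image once instead of inlining base64 data in every request.
uploaded_file = client.files.upload(file='/path/to/image.png')

response = client.models.generate_content(
    model="gemini-2.0-flash-preview-image-generation",
    contents=["Can you add a llama next to me?", uploaded_file],
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE']
    )
)

for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    elif part.inline_data is not None:
        with open('gemini-edited-image.png', 'wb') as f:
            f.write(part.inline_data.data)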
Other image generation modes
Gemini supports other image interaction modes based on prompt structure and context, including:
- Text to image(s) and text (interleaved): Outputs images with related text.
  - Example prompt: "Generate an illustrated recipe for a paella."
- Image(s) and text to image(s) and text (interleaved): Uses input images and text to create new related images and text.
  - Example prompt: (With an image of a furnished room) "What other color sofas would work in my space? Can you update the image?"
- Multi-turn image editing (chat): Keep generating and editing images conversationally; see the sketch after this list.
  - Example prompts: [Upload an image of a blue car.] "Turn this car into a convertible." "Now change the color to yellow."
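
As a minimal sketch of the multi-turn mode, the following assumes the Python SDK's chat helper (client.chats.create / send_message) carries prior turns, including generated images, as context; file paths and prompts are illustrative.
Python
from google import genai
from google.genai import types
from PIL import Image
from io import BytesIO

client = genai.Client()

# A chat session keeps earlier turns (including generated images) in context.
chat = client.chats.create(
    model="gemini-2.0-flash-preview-image-generation",
    config=types.GenerateContentConfig(response_modalities=['TEXT', 'IMAGE'])
)

def save_image(response, filename):
    # Save the first image part of a turn, if the model produced one.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            Image.open(BytesIO(part.inline_data.data)).save(filename)
            break

response = chat.send_message(
    ["Turn this car into a convertible.", Image.open('/path/to/blue-car.png')]
)
save_image(response, 'car-convertible.png')

response = chat.send_message("Now change the color to yellow.")
save_image(response, 'car-yellow.png')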
Limitations
- For best performance, use the following languages: EN, es-MX, ja-JP, zh-CN, hi-IN.
- Image generation does not support audio or video inputs.
- Image generation may not always trigger:
  - The model may output text only. Try asking for image outputs explicitly (e.g. "generate an image", "provide images as you go along", "update the image").
  - The model may stop generating partway through. Try again or try a different prompt.
- When generating text for an image, Gemini works best if you first generate the text and then ask for an image with the text (see the sketch after this list).
- There are some regions/countries where image generation is not available. See Models for more information.
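
As a minimal sketch of that text-first workflow, the following reuses the same chat helper shown earlier; the prompts are illustrative, and the text extraction assumes at least one text part comes back in the first turn.
Python
from google import genai
from google.genai import types

client = genai.Client()
chat = client.chats.create(
    model="gemini-2.0-flash-preview-image-generation",
    config=types.GenerateContentConfig(response_modalities=['TEXT', 'IMAGE'])
)

# Step 1: generate the text first.
response = chat.send_message("Write a one-line slogan for a lemonade stand.")
slogan = next(
    part.text for part in response.candidates[0].content.parts
    if part.text is not None
)

# Step 2: then ask for an image that renders that text.
response = chat.send_message(
    f"Now generate an image of a hand-painted sign with the slogan: {slogan}"
)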
When to use Imagen
In addition to using Gemini's built-in image generation capabilities, you can also access Imagen, our specialized image generation model, through the Gemini API.
Choose Gemini when:
- You need contextually relevant images that leverage world knowledge and reasoning.
- Seamlessly blending text and images is important.
- You want accurate visuals embedded within long text sequences.
- You want to edit images conversationally while maintaining context.
Choose Imagen when:
- Image quality, photorealism, artistic detail, or specific styles (e.g., impressionism, anime) are top priorities.
- You're performing specialized editing tasks like product background updates or image upscaling.
- You're infusing branding or style, or generating logos and product designs.
Imagen 4 should be your go-to model when you start generating images with Imagen. Choose Imagen 4 Ultra for advanced use cases or when you need the best image quality. Note that Imagen 4 Ultra can only generate one image at a time.
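
As a minimal sketch of calling Imagen through the same Python SDK, the following assumes the generate_images method and an Imagen 4 model ID; the exact ID string is an assumption, so check the models page for the current name.
Python
from google import genai
from google.genai import types

client = genai.Client()

# The model ID below is assumed; verify the current Imagen 4 name on the models page.
response = client.models.generate_images(
    model='imagen-4.0-generate-001',
    prompt='A photorealistic close-up of a lighthouse at dawn, impressionist style',
    config=types.GenerateImagesConfig(number_of_images=2)
)

for i, generated_image in enumerate(response.generated_images):
    with open(f'imagen-image-{i}.png', 'wb') as f:
        f.write(generated_image.image.image_bytes)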
What's next
- Check out the Veo guide to learn how to generate videos with the Gemini API.
- To learn more about Gemini models, see Gemini models and Experimental models.