Running Gemini Nano in Google Chrome doesn't require a network connection: the model runs entirely on-device.
Requirements
A "desktop platform" with
- Recent operating system (OS) version
- 22+ GB on the volume that contains your Chrome profile.
- GPU
- 4 GB Video RAM

Download Google Chrome for Developers
To run Gemini Nano you need a pre-release version of Google Chrome: download Google Chrome for Developers from the Dev channel (or the Canary channel), version 128.0.6545.0 or later.
Check the version by typing chrome://version into the URL bar and pressing Enter.
Enable Feature Flags & Check For Updates
Enable two feature flags:
- Prompt API — To send natural language instructions to an instance of Gemini Nano in Chrome.
- On-device model — To bypass performance checks that might get in the way of downloading Gemini Nano on your device.
On-device model Flag
Open a new tab in Chrome, go to chrome://flags/#optimization-guide-on-device-model, and select Enabled BypassPerfRequirement. This bypasses the performance checks and facilitates a smooth download of Gemini Nano on your machine.

Relaunch Google Chrome for Developers.
Prompt API Flag
Open a new tab in Chrome, go to chrome://flags/#prompt-api-for-gemini-nano, and set it to Enabled.

If you do not see "Optimization Guide On Device Model" listed, you may need to wait 1–2 days before it shows up (this was the case for me).
Relaunch Google Chrome for Developers.
Check For Updates
At this point, it's good to check for updates. As noted above, this is an experimental feature and may change at short notice.
Go to chrome://components and click "Check for Update" on "Optimization Guide On Device Model".

The version should be greater than or equal to 2024.5.21.1031.
If you do not see "Optimization Guide On Device Model" listed, you may need to wait a few minutes or some hours (this was the case for me).
Once the model has downloaded go to the next step: Run Gemini Nano in Google Chrome.
Run Gemini Nano in Google Chrome
To verify that everything is working correctly, open the browser console in DevTools (Ctrl + Shift + J on Windows/Linux, or ⌘ + Option + J on macOS) and run the following code:

```javascript
(await ai.languageModel.capabilities()).available;
```
If this returns "readily", then you are all set.
If it fails, we need to force Chrome to recognize that we want to use this API. From the same console, run:

```javascript
await ai.languageModel.create();
```

This call will likely fail, but that is expected.
Relaunch Google Chrome for Developers.
Then go through the Check For Updates section again.
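The check/retry dance above can be wrapped in a single helper, shown here as a sketch only: `ensureNano` is a hypothetical name, and the `ai.languageModel` shape is the experimental one used throughout this guide.

```javascript
// Sketch: return a ready session, or trigger the model download and
// return null so the caller can re-check chrome://components later.
// `ensureNano` is a hypothetical helper, not part of Chrome's API.
async function ensureNano(ai) {
  const { available } = await ai.languageModel.capabilities();
  if (available === "readily") {
    return ai.languageModel.create();
  }
  try {
    // Asking for a session nudges Chrome to start downloading the model;
    // the call itself is expected to fail at this stage.
    await ai.languageModel.create();
  } catch (e) {
    // Expected while the model is not yet available.
  }
  return null;
}
```

A `null` return here means "relaunch, re-check chrome://components, and try again", matching the manual steps above.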
Use Gemini Nano With UI
At this point, you are ready to try the built-in version of Gemini Nano on Chrome for developers!
You can find an intuitive UI using the Chrome Dev Playground.

Use Gemini Nano APIs
Try out the API by simply using it in the browser console.
Start by checking whether a session can be created, based on the availability of the model and the characteristics of the device.
In the browser console, run:
```javascript
const { available, defaultTemperature, defaultTopK, maxTopK } = await ai.languageModel.capabilities();

if (available !== "no") {
  const session = await ai.languageModel.create();
  // Prompt the model and wait for the whole result to come back.
  const result = await session.prompt("Tell me a German joke");
  console.log(result);
}
```
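The capabilities you destructure above can also feed back into session creation. The sketch below assumes `create()` accepts `temperature` and `topK` overrides in this preview, which is how those defaults are meant to be used; `createTunedSession` is a hypothetical helper name.

```javascript
// Sketch: create a session tuned with explicit sampling parameters.
// Assumption: create() accepts { temperature, topK } in this preview.
async function createTunedSession(ai) {
  const caps = await ai.languageModel.capabilities();
  if (caps.available === "no") return null;
  return ai.languageModel.create({
    temperature: caps.defaultTemperature,
    // Stay within the model's advertised limit.
    topK: Math.min(caps.defaultTopK * 2, caps.maxTopK),
  });
}
```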
Built-in AI models offer certain benefits over online models:
- Virtually zero cost
- Faster response times
- Offline availability
- Local processing of sensitive data
This early preview of Gemini Nano allows text interactions. Naturally, the quality of the output does not match that of larger LLMs...
The core object is window.ai. It has three core methods:
- canCreateTextSession
- createTextSession
- textModelInfo

If you first check for window.ai, you can then use canCreateTextSession to see if AI support is really ready, i.e. whether you are on a supported browser and the model has been loaded. Note that it does not return true but the string "readily".
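Putting those two checks together looks roughly like this; `aiReady` is a hypothetical helper name used only for illustration.

```javascript
// Sketch: true only when window.ai exists AND reports "readily".
// Note that canCreateTextSession() resolves to a string, not a boolean.
async function aiReady(win) {
  if (!win.ai) return false; // API not exposed in this browser
  const status = await win.ai.canCreateTextSession();
  return status === "readily";
}
```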
textModelInfo returns information about the model:

```json
{
  "defaultTemperature": 0.800000011920929,
  "defaultTopK": 3,
  "maxTopK": 128
}
```
Finally, createTextSession:

```javascript
const model = await window.ai.createTextSession();
await model.prompt("Who are you?");
```

The promptStreaming method is for working with a streamed response.
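A sketch of how consuming that stream might look, assuming (as in this early preview) that each streamed chunk contains the full response so far rather than a delta; `streamPrompt` is a hypothetical helper name.

```javascript
// Sketch: consume promptStreaming(), reporting progress via a callback.
// Assumption: each chunk is the cumulative text generated so far.
async function streamPrompt(model, text, onUpdate) {
  let latest = "";
  for await (const chunk of model.promptStreaming(text)) {
    latest = chunk; // each chunk supersedes the previous one
    if (onUpdate) onUpdate(latest);
  }
  return latest;
}
```

This lets a UI update a result element on every chunk instead of waiting for the whole response.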
Example:
```html
<script defer src="https://cdn.jsdelivr.net/npm/alpinejs@3.x.x/dist/cdn.min.js"></script>

<h2>window.ai demo</h2>
<div x-data="app">
  <div x-show="!hasAI">
    Sorry, no AI for you. Have a nice day.
  </div>
  <div x-show="hasAI">
    <div class="row">
      <div class="column">
        <label for="prompt">Prompt: </label>
      </div>
      <div class="column column-90">
        <input type="text" x-model="prompt" id="prompt">
      </div>
    </div>
    <button @click="testPrompt">Test</button>
    <p x-html="result"></p>
  </div>
</div>
```

```javascript
document.addEventListener('alpine:init', () => {
  Alpine.data('app', () => ({
    hasAI: false,
    prompt: "",
    result: "",
    session: null,
    async init() {
      if (window.ai) {
        let ready = await window.ai.canCreateTextSession();
        if (ready === 'readily') this.hasAI = true;
        else alert('Browser has AI, but not ready.');
        this.session = await window.ai.createTextSession();
      }
    },
    async testPrompt() {
      if (this.prompt === '') return;
      console.log(`test ${this.prompt}`);
      this.result = '<i>Working...</i>';
      try {
        this.result = await this.session.prompt(this.prompt);
      } catch (e) {
        console.log('window.ai error', e);
      }
    }
  }));
});
```
Text summarization:
```html
<script defer src="https://cdn.jsdelivr.net/npm/alpinejs@3.x.x/dist/cdn.min.js"></script>

<h2>window.ai demo</h2>
<div x-data="app">
  <div x-show="!hasAI">
    Sorry, no AI for you. Have a nice day.
  </div>
  <div x-show="hasAI">
    <p>
      <label for="inputText">Enter the text you would like summarized below:</label>
      <textarea x-model="inputText" id="inputText"></textarea>
    </p>
    <button @click="testSummarize">Summarize</button>
    <p x-html="result"></p>
  </div>
</div>
```

```javascript
document.addEventListener('alpine:init', () => {
  Alpine.data('app', () => ({
    hasAI: false,
    inputText: "",
    result: "",
    session: null,
    async init() {
      if (window.ai) {
        let ready = await window.ai.canCreateTextSession();
        if (ready === 'readily') this.hasAI = true;
        else alert('Browser has AI, but not ready.');
        this.session = await window.ai.createTextSession();
      }
    },
    async testSummarize() {
      if (this.inputText === '') return;
      this.result = '<i>Working...</i>';
      try {
        let prompt = `Summarize the following text: ${this.inputText}`;
        this.result = await this.session.prompt(prompt);
      } catch (e) {
        console.log('window.ai error', e);
      }
    }
  }));
});
```