Integrate AI Autocomplete in CodeMirror

Welcome to the CodeMirror AI Autocomplete Guide - your complete tutorial for adding AI-powered autocomplete to the CodeMirror editor. Learn how to integrate intelligent code completion, enhance editor productivity, and deliver real-time AI coding suggestions directly inside your browser using FrontLLM and the @marimo-team/codemirror-ai package.
Check the live demo of AI Autocomplete in CodeMirror here.
Setup
First, we need to install all the required dependencies. In this tutorial, we’ll use the @marimo-team/codemirror-ai package, which provides seamless integration of AI autocomplete features into CodeMirror editors.
The list of required dependencies is as follows:
```json
"dependencies": {
  "codemirror": "^6.0.2",
  "@codemirror/view": "^6.38.6",
  "@marimo-team/codemirror-ai": "^0.3.2",
  "frontllm": "^0.2.0"
}
```
To start, make sure you have a FrontLLM account. You need to create a new gateway and obtain a gateway ID.
```js
import { frontLLM } from 'frontllm';

const gateway = frontLLM('<YOUR_GATEWAY_ID>');
```
Initialize CodeMirror
Now we can initialize the CodeMirror editor and integrate the AI autocomplete functionality.
```js
import { EditorView, basicSetup } from 'codemirror';
import { nextEditPrediction } from '@marimo-team/codemirror-ai';

const parent = document.getElementById('editor');

new EditorView({
  doc: 'Content of my document...',
  parent,
  extensions: [
    basicSetup,
    nextEditPrediction({
      fetchFn: myFetchFunction, // Autocomplete fetch function
      acceptOnClick: true,
      defaultKeymap: true,
      showAcceptReject: true
    })
  ]
});
```
The editor is now set up with basic functionality. Next, we need to focus on the myFetchFunction, which is responsible for fetching AI-generated autocomplete suggestions.
Implement the Fetch Function
The myFetchFunction accepts a state argument, which contains the current state of the editor. We’ll use this state to extract the current document content.
It’s important to consider the content of the document before and after the cursor position - we want to provide AI suggestions only after the cursor.
```js
async function myFetchFunction(state) {
  const { from, to } = state.selection.main;
  const text = state.doc.toString();
  const beforeCursor = text.slice(0, from);
  const afterCursor = text.slice(to);
  // ...
}
```
For example, if the state of the document is as follows:
```
Hello,█ world!
```
Then beforeCursor will be `Hello,` and afterCursor will be ` world!` (note the leading space).
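The slicing logic can be verified with a quick standalone sketch (the cursor offset `6` corresponds to the `█` position in the example above):

```javascript
// Simulate the editor state "Hello,█ world!" with the cursor right after the comma.
const text = 'Hello, world!';
const from = 6; // cursor offset; with no selection, from === to
const to = 6;

const beforeCursor = text.slice(0, from); // "Hello,"
const afterCursor = text.slice(to);       // " world!"
```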
This allows us to build a prompt that instructs the AI model to provide suggestions only after the cursor position. We can use the following system prompt:
````
You are a code completion assistant. Your job is to rewrite the marked region of user content, respecting the cursor location.
# 🔍 Markers:
- Editable content is wrapped in:
`<|USER_CONTENT_START|>`
...
`<|USER_CONTENT_END|>`
- The cursor is marked using the **exact token**:
`<|user_cursor_is_here|>`
# 🚫 Forbidden actions (do **NOT** do these):
1. ❌ **Do NOT move, delete, replace, or duplicate** the `<|user_cursor_is_here|>` token.
2. ❌ Do NOT add any text **before or on the same line as** the cursor.
3. ❌ Do NOT change or reformat any text **before** the cursor.
If any of these are violated: **return the content exactly as-is**, unchanged.
# ✅ What you MUST do:
- Add code suggestions *only after* the `<|user_cursor_is_here|>` token.
- Preserve all formatting, indentation, line breaks, and spacing.
- Return only the content between `<|USER_CONTENT_START|>` and `<|USER_CONTENT_END|>` with your changes.
# 🧱 Example:
User input:
```
<|USER_CONTENT_START|>hello<|user_cursor_is_here|><|USER_CONTENT_END|>
```
Correct response:
```
<|USER_CONTENT_START|>hello<|user_cursor_is_here|>world!<|USER_CONTENT_END|>
```
````
As you can see, we wrap the beforeCursor and afterCursor content with special markers to instruct the AI model where the cursor is located and what content is editable.
Our previous example will then look like this:
```
<|USER_CONTENT_START|>Hello,<|user_cursor_is_here|> world!<|USER_CONTENT_END|>
```
To build the final prompt, we can use the following code:
```js
const prompt = 'Please complete this text:\n' +
  `<|USER_CONTENT_START|>${beforeCursor}<|user_cursor_is_here|>${afterCursor}<|USER_CONTENT_END|>`;
```
Now we can send the final request to the FrontLLM gateway:
```js
const response = await gateway.complete({
  model: 'smart',
  messages: [
    {
      role: 'system',
      content: `You are a code completion assistant...`
    },
    {
      role: 'user',
      content: prompt
    }
  ]
});
```
If you want to exclude the system prompt from your code, you can use pre prompts supported by FrontLLM.
Result Extraction
Next, we extract the AI suggestions from the response. We need the content between the <|USER_CONTENT_START|> and <|USER_CONTENT_END|> markers.
Here’s a simple function that does that:
```js
function extract(response) {
  const content = response.choices[0].message.content;
  const startPos = content.indexOf('<|USER_CONTENT_START|>');
  const endPos = content.indexOf('<|USER_CONTENT_END|>');
  if (startPos === -1 || endPos === -1 || endPos <= startPos) {
    return null;
  }
  return content.slice(startPos + '<|USER_CONTENT_START|>'.length, endPos);
}
```
The final implementation of the myFetchFunction will look like this:
```js
async function myFetchFunction(state) {
  const { from, to } = state.selection.main;
  const text = state.doc.toString();
  const beforeCursor = text.slice(0, from);
  const afterCursor = text.slice(to);
  const response = await gateway.complete({
    /* ... */
  });
  const oldText = `${beforeCursor}<|user_cursor_is_here|>${afterCursor}`;
  const newText = extract(response) ?? oldText;
  return {
    oldText,
    newText,
    from,
    to
  };
}
```
Now we have a fully functional AI autocomplete feature in our CodeMirror editor!
Of course, you can further improve this implementation by adding support for abort signals, handling errors, and optimizing the prompt for your specific use case. For large documents, it’s a good idea to limit the context size sent to the AI model.
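As one possible approach to limiting context size, a hypothetical `limitContext` helper could cap how many characters around the cursor are sent to the model. The helper name and the window sizes below are illustrative assumptions, not part of any library:

```javascript
// Hypothetical helper: keep at most `maxBefore` characters before the cursor
// and `maxAfter` characters after it, so large documents don't inflate the prompt.
function limitContext(beforeCursor, afterCursor, maxBefore = 2000, maxAfter = 500) {
  return {
    before: beforeCursor.slice(-maxBefore), // keep the tail: text closest to the cursor
    after: afterCursor.slice(0, maxAfter)   // keep the head: text right after the cursor
  };
}
```

The asymmetric defaults reflect that text immediately before the cursor usually matters most for a completion; tune both limits to your model's context window and cost constraints.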
Summary
In this tutorial, we covered the basic steps to integrate AI autocomplete functionality into a CodeMirror editor using FrontLLM.
Enjoy coding with AI assistance!
You can view the source code here or check out the live demo.