Create Front-End LLM Gateway
In this guide, we’ll walk you through the steps to create a front-end LLM (Large Language Model) gateway using FrontLLM. You’ll be surprised by how simple it is to get started!
Step-by-Step Guide
- Create an Account: Sign up for a new account on FrontLLM.
- Create a New Gateway: Navigate to the Gateways section and click Create Gateway.
- Choose a Provider: Select your preferred LLM provider, such as OpenAI or OpenRouter.
- Enter Your API Key: Provide the API key associated with the provider you selected.
- Load Available Models: Click the Load supported models button to fetch the models supported by your provider.
- Select Allowed Models: Choose the models you want to make available through your gateway. You must select at least one.
- Define Allowed Domains: Specify which domains can access your gateway.
  - For local testing, add localhost.
  - For testing on CodePen, add cdpn.io.
- Create the Gateway: Click Create to finalize your gateway setup.
Embed the Gateway
Once your gateway is created, you’ll see a ready-to-use <script> tag on the right side of the screen. Copy and paste this tag into your HTML file to integrate the gateway into your front-end project.
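As a rough sketch of where the copied tag goes (the src below is only a placeholder, not a real URL; use the exact tag shown on your gateway page):

<!DOCTYPE html>
<html>
  <head>
    <!-- Placeholder only: paste the exact <script> tag copied from your FrontLLM dashboard here. -->
    <script src="https://example.com/frontllm-gateway-snippet.js"></script>
  </head>
  <body>
    <!-- Your front-end code that talks to the gateway goes here. -->
  </body>
</html>

For example, here is how a request to the gateway looks from your front-end code: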
// Replace <id> with your gateway ID from the FrontLLM dashboard.
const text = 'Hello, world!'; // example input; in practice this comes from your app

const response = await fetch('https://gateway.frontllm.com/api/gateways/<id>/v1/chat/completions', {
  method: 'POST',
  mode: 'cors',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'openai/o1-mini',
    messages: [{ role: 'user', content: 'Translate this text to French: ' + text }]
  })
});
const data = await response.json();
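If the gateway passes the provider’s response through in the standard chat completions format (which the /v1/chat/completions path suggests, though it is worth confirming against your provider’s documentation), you can read the model’s reply like this:

// Assumes an OpenAI-style chat completions response shape:
// { choices: [ { message: { role, content } } ], ... }
const reply = data.choices[0].message.content;
console.log(reply); // e.g. the French translation of the input text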
You’re Done!
That’s it! You’ve successfully created and embedded your first front-end LLM gateway. You can now start making HTTP requests to it directly from your front-end code.
Happy coding!