Create Front-End LLM Gateway

2 min read

In this guide, we’ll walk you through the steps to create a front-end LLM (Large Language Model) gateway using FrontLLM. You’ll be surprised by how simple it is to get started!

Step-by-Step Guide

  1. Create an Account
    Sign up for a new account on FrontLLM.

  2. Create a New Gateway
    Navigate to the gateways section and click Create Gateway.

  3. Choose a Provider
    Select your preferred LLM provider, such as OpenAI or OpenRouter.

  4. Enter Your API Key
    Provide the API key associated with the provider you selected.

  5. Load Available Models
    Click the Load supported models button to fetch the models supported by your provider.

  6. Select Allowed Models
    Choose the models you want to make available through your gateway. You must select at least one.

  7. Define Allowed Domains
    Specify which domains can access your gateway.

    • For local testing, add localhost.
    • For testing on CodePen, add cdpn.io.

  8. Create the Gateway
    Click Create to finalize your gateway setup.

Embed the Gateway

Once your gateway is created, you’ll see a ready-to-use <script> tag on the right side of the screen. Copy and paste this tag into your HTML file to integrate the gateway into your front-end project. You can also call the gateway’s HTTP endpoint directly from your JavaScript. For example, to send a chat completion request:

// Replace <id> with your gateway ID from the FrontLLM dashboard.
const text = 'Hello, world!'; // example input to translate
const response = await fetch('https://gateway.frontllm.com/api/gateways/<id>/v1/chat/completions', {
  method: 'POST',
  mode: 'cors',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'openai/o1-mini',
    messages: [{ role: 'user', content: 'Translate this text to French: ' + text }]
  })
});
const data = await response.json();
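
If you find yourself calling the gateway from several places, here is a minimal sketch of wrapping the request in a reusable helper with basic error handling. It assumes the response follows an OpenAI-style chat completions shape (choices[0].message.content), which is suggested by the endpoint path but not confirmed above; the askGateway name and the example prompt are illustrative only.

// Minimal helper sketch. Assumes <id> has been replaced with your gateway ID and
// that the response body follows an OpenAI-style chat completions shape (an assumption).
async function askGateway(prompt) {
  const response = await fetch('https://gateway.frontllm.com/api/gateways/<id>/v1/chat/completions', {
    method: 'POST',
    mode: 'cors',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'openai/o1-mini',
      messages: [{ role: 'user', content: prompt }]
    })
  });
  if (!response.ok) {
    // Surface HTTP errors instead of silently returning undefined.
    throw new Error('Gateway request failed with status ' + response.status);
  }
  const data = await response.json();
  return data.choices[0].message.content;
}

// Usage:
const translation = await askGateway('Translate this text to French: Good morning!');
console.log(translation);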

You’re Done!

That’s it! You’ve successfully created and embedded your first front-end LLM gateway. You can now start making HTTP requests to it directly from your front-end code.

Happy coding!