Request LLM Model from Front-End

In this guide, we will show you how to make requests to your LLM gateway using the FrontLLM SDK. Before you start, please read the How to Start with FrontLLM guide to set up your account, create a gateway, and install the FrontLLM SDK.

// Create a gateway client for your gateway ID. If you installed the SDK from npm,
// import the frontLLM factory first (package name assumed here):
// import { frontLLM } from 'frontllm';
const gateway = frontLLM('<gateway_id>');

Chat Completion

// Short syntax - requires a default model to be configured in the gateway

const response = await gateway.complete('Hello world!');

// Full syntax

const response = await gateway.complete({
  model: 'fast',
  messages: [{ role: 'user', content: 'Hello world!' }],
  temperature: 0.7
});

// Output the generated response text to the console.

console.log(response.choices[0].message.content);
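
In practice, you will usually wrap this call in a small helper that your UI code invokes with the user's prompt. The sketch below is illustrative only: the askGateway name and the error handling are not part of the SDK, and it assumes the gateway instance created above.

// Hypothetical helper: send a prompt through the gateway and return the generated text.
async function askGateway(prompt) {
  try {
    const response = await gateway.complete({
      model: 'fast',
      messages: [{ role: 'user', content: prompt }]
    });
    return response.choices[0].message.content;
  } catch (error) {
    // Surface gateway or network failures to the caller.
    console.error('Completion request failed:', error);
    throw error;
  }
}

// Example usage, e.g. from a button click handler.
const answer = await askGateway('Summarize this page in one sentence.');
console.log(answer);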

Chat Completion with Streaming

// Short syntax - requires a default model to be configured in the gateway

const response = await gateway.completeStreaming('Where is Europe?');

// Full syntax

const response = await gateway.completeStreaming({
  model: 'fast',
  messages: [{ role: 'user', content: 'Where is Europe?' }],
  temperature: 0.7
});

// Output the generated response text to the console.

for (;;) {
  const { finished, chunks } = await response.read();
  for (const chunk of chunks) {
    console.log(chunk.choices[0].delta.content);
  }
  if (finished) {
    break;
  }
}
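
In a front-end application, the same read loop can append each chunk to the page as it arrives. The following sketch assumes an element with the id output exists in your HTML; the element id and the streamToElement helper name are illustrative, not part of the SDK.

// Hypothetical example: stream the answer into a DOM element as it arrives.
async function streamToElement(prompt, elementId) {
  const element = document.getElementById(elementId);
  element.textContent = '';
  const response = await gateway.completeStreaming(prompt);
  for (;;) {
    const { finished, chunks } = await response.read();
    for (const chunk of chunks) {
      // Some chunks may carry no text, so guard before appending.
      element.textContent += chunk.choices[0]?.delta?.content ?? '';
    }
    if (finished) {
      break;
    }
  }
}

await streamToElement('Where is Europe?', 'output');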