
Chat Completions

OpenAI-compatible text generation with tool calling and streaming.

The Chat Completions API is fully OpenAI-compatible. If you’re already using the OpenAI SDK or any OpenAI-compatible client, you can point it at Lightcone with minimal changes.

This API is separate from Lightcone’s browser automation features. Use it for text generation, chatbots, and tool-calling workflows. For browser-related AI tasks, see Agent Tasks or the Responses API.

Python:

```python
from tzafon import Lightcone

client = Lightcone()

result = client.chat.create_completion(
    model="tzafon.sm-1",
    messages=[
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(result)
```

TypeScript:

```typescript
import Lightcone from "@tzafon/lightcone";

const client = new Lightcone();

const result = await client.chat.createCompletion({
  model: "tzafon.sm-1",
  messages: [
    { role: "user", content: "What is the capital of France?" },
  ],
});
console.log(result);
```

Set stream=True (Python) or stream: true (TypeScript) to receive tokens as they're generated:

Python:

```python
result = client.chat.create_completion(
    model="tzafon.sm-1",
    messages=[
        {"role": "user", "content": "Write a haiku about programming"},
    ],
    stream=True,
)
```

TypeScript:

```typescript
const result = await client.chat.createCompletion({
  model: "tzafon.sm-1",
  messages: [
    { role: "user", content: "Write a haiku about programming" },
  ],
  stream: true,
});
```
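With streaming enabled, the response arrives as a sequence of chunks rather than one object. As a minimal sketch, assuming each chunk follows the OpenAI-compatible shape `{"choices": [{"delta": {"content": "..."}}]}` (an assumption here, not a documented guarantee), you can accumulate the text like this:

```python
# Sketch: accumulate streamed text from OpenAI-style chunks.
# The chunk shape below is assumed, not taken from the Lightcone docs.

def collect_stream(chunks):
    """Concatenate the delta content from a sequence of stream chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        content = delta.get("content")
        if content:  # the final chunk's delta is often empty
            parts.append(content)
    return "".join(parts)

# Simulated chunks standing in for the result of a stream=True call:
simulated = [
    {"choices": [{"delta": {"content": "Code "}}]},
    {"choices": [{"delta": {"content": "compiles"}}]},
    {"choices": [{"delta": {}}]},
]
print(collect_stream(simulated))  # Code compiles
```

In a real application you would pass the streaming result object to the loop instead of the simulated list.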

Define functions the model can call:

Python:

```python
result = client.chat.create_completion(
    model="tzafon.sm-1",
    messages=[
        {"role": "user", "content": "What's the weather in San Francisco?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "City name"},
                    },
                    "required": ["location"],
                },
            },
        },
    ],
)
```

TypeScript:

```typescript
const result = await client.chat.createCompletion({
  model: "tzafon.sm-1",
  messages: [
    { role: "user", content: "What's the weather in San Francisco?" },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather for a location",
        parameters: {
          type: "object",
          properties: {
            location: { type: "string", description: "City name" },
          },
          required: ["location"],
        },
      },
    },
  ],
});
```
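When the model decides to use a tool, the response contains a tool call with the function name and JSON-encoded arguments; your code runs the function and sends the result back as a "tool" message. As a hedged sketch, assuming the OpenAI-compatible tool-call shape (the `get_weather` implementation is a placeholder for illustration):

```python
import json

# Sketch: dispatch a tool call, assuming the OpenAI-compatible shape
# {"id": ..., "function": {"name": ..., "arguments": "<JSON string>"}}.

def get_weather(location):
    # Placeholder implementation for illustration only.
    return f"Sunny in {location}"

TOOLS = {"get_weather": get_weather}

def run_tool_call(tool_call):
    """Look up the requested function, decode its arguments, and build
    the "tool" message to append to the conversation."""
    fn = TOOLS[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])  # arguments arrive as a JSON string
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": fn(**args),
    }

# Example tool call as the model might return it:
call = {
    "id": "call_123",
    "function": {"name": "get_weather", "arguments": '{"location": "San Francisco"}'},
}
print(run_tool_call(call)["content"])  # Sunny in San Francisco
```

Appending the returned message to `messages` and calling the API again lets the model incorporate the tool result into its final answer.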
| Model | Description | Input | Output |
| --- | --- | --- | --- |
| tzafon.sm-1 | Small, fast general-purpose model | $0.20/M tokens | $0.30/M tokens |
| tzafon.northstar-cua-fast | Optimized for computer-use tasks | $0.30/M tokens | $0.50/M tokens |

List models programmatically:

Python:

```python
models = client.models.list()
print(models)
```

TypeScript:

```typescript
const models = await client.models.list();
console.log(models);
```
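If you just need the model IDs, you can pull them out of the list response. This is a sketch that assumes the OpenAI-compatible list shape `{"object": "list", "data": [{"id": ...}, ...]}`, which Lightcone's docs do not spell out here:

```python
# Sketch: extract model IDs, assuming the OpenAI-compatible list shape.

def model_ids(models_response):
    """Return the id of every model in a models-list response."""
    return [m["id"] for m in models_response["data"]]

# Sample response using the two models from the pricing table above:
sample = {
    "object": "list",
    "data": [{"id": "tzafon.sm-1"}, {"id": "tzafon.northstar-cua-fast"}],
}
print(model_ids(sample))  # ['tzafon.sm-1', 'tzafon.northstar-cua-fast']
```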
| Parameter | Description |
| --- | --- |
| model | Model to use |
| messages | Conversation history |
| temperature | Randomness (0 = deterministic, higher = more creative) |
| max_completion_tokens | Maximum tokens to generate |
| stream | Enable streaming responses |
| tools | Function definitions for tool calling |
| tool_choice | "auto", "none", or "required" |
| response_format | Force "json_object" or JSON schema output |
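As a sketch of how these parameters combine, here is an illustrative request payload. Field names follow the OpenAI-compatible schema; the specific values (and the prompt) are examples, not recommendations:

```python
import json

# Sketch: one payload combining several of the parameters above.
payload = {
    "model": "tzafon.sm-1",
    "messages": [{"role": "user", "content": "List three colors as a JSON object."}],
    "temperature": 0,                             # deterministic output
    "max_completion_tokens": 128,                 # cap the generation length
    "stream": False,                              # return one complete response
    "response_format": {"type": "json_object"},   # force a JSON object reply
}

# The payload is plain JSON, so it serializes directly for any HTTP client:
print(json.dumps(payload, indent=2))
```

Note that with `response_format` set to `"json_object"`, OpenAI-compatible APIs generally expect the prompt itself to mention JSON, as the example message does.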

Since the API is OpenAI-compatible, you can use the OpenAI SDK directly:

Python:

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk_your_tzafon_key",
    base_url="https://api.tzafon.ai",
)

response = client.chat.completions.create(
    model="tzafon.sm-1",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

TypeScript:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk_your_tzafon_key",
  baseURL: "https://api.tzafon.ai",
});

const response = await client.chat.completions.create({
  model: "tzafon.sm-1",
  messages: [{ role: "user", content: "Hello!" }],
});
```