Chat Completions
OpenAI-compatible text generation with tool calling and streaming.
The Chat Completions API is fully OpenAI-compatible. If you’re already using the OpenAI SDK or any OpenAI-compatible client, you can point it at Lightcone with minimal changes.
This API is separate from Lightcone’s browser automation features. Use it for text generation, chatbots, and tool-calling workflows. For browser-related AI tasks, see Agent Tasks or the Responses API.
## Basic usage

```python
from tzafon import Lightcone

client = Lightcone()

result = client.chat.create_completion(
    model="tzafon.sm-1",
    messages=[
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(result)
```

```typescript
import Lightcone from "@tzafon/lightcone";

const client = new Lightcone();

const result = await client.chat.createCompletion({
  model: "tzafon.sm-1",
  messages: [
    { role: "user", content: "What is the capital of France?" },
  ],
});
console.log(result);
```
## Streaming

Set `stream` to true to receive tokens as they're generated:

```python
result = client.chat.create_completion(
    model="tzafon.sm-1",
    messages=[
        {"role": "user", "content": "Write a haiku about programming"},
    ],
    stream=True,
)
```

```typescript
const result = await client.chat.createCompletion({
  model: "tzafon.sm-1",
  messages: [
    { role: "user", content: "Write a haiku about programming" },
  ],
  stream: true,
});
```
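When streaming, each chunk carries an incremental delta rather than the full message, so the caller accumulates text as chunks arrive. A minimal sketch, assuming chunks follow the OpenAI streaming shape (`choices[0].delta.content`); the sample chunks below are hypothetical stand-ins for real API output:

```python
# Accumulate streamed delta chunks into the full completion text.
# Assumes the OpenAI streaming shape: choices[0].delta.content per chunk.
# These sample chunks are hypothetical, not real API output.
sample_chunks = [
    {"choices": [{"delta": {"content": "Code flows "}}]},
    {"choices": [{"delta": {"content": "like water"}}]},
    {"choices": [{"delta": {}}]},  # a final chunk often has an empty delta
]

text = ""
for chunk in sample_chunks:
    delta = chunk["choices"][0]["delta"]
    text += delta.get("content", "")

print(text)  # Code flows like water
```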
## Tool calling

Define functions the model can call:

```python
result = client.chat.create_completion(
    model="tzafon.sm-1",
    messages=[
        {"role": "user", "content": "What's the weather in San Francisco?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "City name"},
                    },
                    "required": ["location"],
                },
            },
        },
    ],
)
```

```typescript
const result = await client.chat.createCompletion({
  model: "tzafon.sm-1",
  messages: [
    { role: "user", content: "What's the weather in San Francisco?" },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather for a location",
        parameters: {
          type: "object",
          properties: {
            location: { type: "string", description: "City name" },
          },
          required: ["location"],
        },
      },
    },
  ],
});
```
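When the model decides to call a tool, the response carries the call with JSON-encoded arguments; your code executes the function and sends the result back as a `"tool"` message. A minimal dispatch sketch, assuming the OpenAI-compatible response shape (`choices[0].message.tool_calls`); the response dict and `get_weather` stub here are hypothetical:

```python
import json

# Hypothetical local implementation of the tool declared in the request.
def get_weather(location: str) -> str:
    return f"Sunny in {location}"

TOOLS = {"get_weather": get_weather}

# Hypothetical response fragment, assuming the OpenAI-compatible shape:
# tool calls arrive with JSON-encoded argument strings.
response = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "id": "call_1",
                "function": {
                    "name": "get_weather",
                    "arguments": '{"location": "San Francisco"}',
                },
            }],
        },
    }],
}

tool_messages = []
for call in response["choices"][0]["message"]["tool_calls"]:
    fn = TOOLS[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])
    # Append this as a {"role": "tool", ...} message in the next request.
    tool_messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": fn(**args),
    })

print(tool_messages[0]["content"])  # Sunny in San Francisco
```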
## Available models

| Model | Description | Input | Output |
|---|---|---|---|
| `tzafon.sm-1` | Small, fast general-purpose model | $0.20/M tokens | $0.30/M tokens |
| `tzafon.northstar-cua-fast` | Optimized for computer-use tasks | $0.30/M tokens | $0.50/M tokens |
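As a quick sanity check on pricing, the cost of a request can be estimated from its usage token counts at the per-million rates above (the token counts below are made up for illustration):

```python
# Per-million-token rates from the pricing table above (USD).
RATES = {
    "tzafon.sm-1": {"input": 0.20, "output": 0.30},
    "tzafon.northstar-cua-fast": {"input": 0.30, "output": 0.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# Hypothetical usage counts for illustration.
cost = estimate_cost("tzafon.sm-1", input_tokens=12_000, output_tokens=800)
print(f"${cost:.5f}")  # $0.00264
```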
List models programmatically:

```python
models = client.models.list()
print(models)
```

```typescript
const models = await client.models.list();
console.log(models);
```
## Key parameters

| Parameter | Description |
|---|---|
| `model` | Model to use |
| `messages` | Conversation history |
| `temperature` | Randomness (0 = deterministic, higher = more creative) |
| `max_completion_tokens` | Maximum tokens to generate |
| `stream` | Enable streaming responses |
| `tools` | Function definitions for tool calling |
| `tool_choice` | `"auto"`, `"none"`, or `"required"` |
| `response_format` | Force `"json_object"` or JSON schema output |
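For machine-readable output, `response_format` constrains the model to emit valid JSON, which the caller then parses. A hedged sketch of a JSON-mode request body, assuming the OpenAI `json_object` convention; the request values and the sample response content are illustrative, not real API output:

```python
import json

# Request arguments for JSON mode, following the OpenAI "json_object"
# convention. (Hypothetical values; pass these to create_completion.)
request = {
    "model": "tzafon.sm-1",
    "messages": [
        {"role": "user", "content": "Give the capital of France as JSON."},
    ],
    "response_format": {"type": "json_object"},
    "temperature": 0,  # deterministic output is usually best for JSON
}

# In JSON mode the message content is a JSON string; parse it directly.
# Hypothetical stand-in for choices[0].message.content:
content = '{"capital": "Paris"}'
data = json.loads(content)
print(data["capital"])  # Paris
```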
## Using with OpenAI SDK

Since the API is OpenAI-compatible, you can use the OpenAI SDK directly:

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk_your_tzafon_key",
    base_url="https://api.tzafon.ai",
)

response = client.chat.completions.create(
    model="tzafon.sm-1",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk_your_tzafon_key",
  baseURL: "https://api.tzafon.ai",
});

const response = await client.chat.completions.create({
  model: "tzafon.sm-1",
  messages: [{ role: "user", content: "Hello!" }],
});
```
## See also

- Responses API — computer-use agent interface (also OpenAI-compatible)
- Agent Tasks — autonomous agents for browser tasks
- How Lightcone works — how Chat Completions fits into the platform