---
title: "Tutorial: Build a Price Tracker | Lightcone"
description: Build a script that monitors product prices on any website and alerts you when they change.
---

In this tutorial, you’ll build a price tracker that visits a product page, extracts the current price, and alerts you when it changes. Along the way, you’ll learn how to create browser sessions, navigate pages, extract content, take screenshots, and use persistent sessions.

**Prerequisites**: Complete the [Quickstart](/guides/quickstart/index.md) and have `TZAFON_API_KEY` set in your environment.

**Time**: About 15 minutes.

## What you’ll build

A script that:

1. Opens a browser and navigates to a product page
2. Extracts the page HTML and pulls out the price
3. Takes a screenshot as a visual record
4. Saves the session so you don’t start from scratch each run
5. Compares prices across runs and prints an alert when the price drops

## Step 1: Create a browser and visit a page

Start by creating a browser session and navigating to a product page. We’ll use [Books to Scrape](https://books.toscrape.com), a safe practice site.
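Before running the snippets below, make sure the SDK is installed and your key is exported. A minimal setup sketch — the `tzafon` package name is assumed from the Python import used in this tutorial, and the key value is a placeholder:

```shell
# Install the Python SDK (package name assumed from `from tzafon import Lightcone`)
pip install tzafon

# Make your API key available to the script (placeholder value)
export TZAFON_API_KEY="your-api-key"
```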
```python
from tzafon import Lightcone

client = Lightcone()

with client.computer.create(kind="browser") as computer:
    computer.navigate("https://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html")
    computer.wait(2)

    # Take a screenshot to see the page
    result = computer.screenshot()
    print(f"Screenshot: {computer.get_screenshot_url(result)}")
```

```typescript
import Lightcone from "@tzafon/lightcone";

const client = new Lightcone();

const computer = await client.computers.create({ kind: "browser" });
const id = computer.id!;

try {
  await client.computers.navigate(id, {
    url: "https://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html",
  });
  await new Promise((r) => setTimeout(r, 2000));

  // Take a screenshot to see the page
  const result = await client.computers.screenshot(id);
  console.log("Screenshot:", result.result?.screenshot_url);
} finally {
  await client.computers.delete(id);
}
```

Run this and open the screenshot URL. You should see a book product page with a title, price, and description.

Always take a screenshot after navigating to verify the page loaded correctly before extracting data.

## Step 2: Extract the price from the page

Use `html()` to get the page’s HTML, then parse out the price. The price on Books to Scrape is in a `<p class="price_color">` element.

```python
import re

from tzafon import Lightcone

client = Lightcone()

def get_price():
    with client.computer.create(kind="browser") as computer:
        computer.navigate("https://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html")
        computer.wait(2)

        # Get the page HTML
        html_result = computer.html()
        html_content = computer.get_html_content(html_result)

        # Extract the price using a simple regex
        match = re.search(r'price_color">£([\d.]+)<', html_content)
        if match:
            return float(match.group(1))
        return None

price = get_price()
print(f"Current price: £{price}")
```

```typescript
import Lightcone from "@tzafon/lightcone";

const client = new Lightcone();

async function getPrice(): Promise<number | null> {
  const computer = await client.computers.create({ kind: "browser" });
  const id = computer.id!;
  try {
    await client.computers.navigate(id, {
      url: "https://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html",
    });
    await new Promise((r) => setTimeout(r, 2000));

    // Get the page HTML
    const htmlResult = await client.computers.html(id);
    const html = htmlResult.result?.html_content as string;

    // Extract the price using a simple regex
    const match = html.match(/price_color">£([\d.]+)</);
    return match ? parseFloat(match[1]) : null;
  } finally {
    await client.computers.delete(id);
  }
}

const price = await getPrice();
console.log(`Current price: £${price}`);
```

## Step 3: Track price changes across runs

Now turn the one-off check into a tracker. Save each price (with a screenshot and timestamp) to a JSON file, then compare the latest price against the previous run.

price_tracker.py

```python
import json
import os
import re
from datetime import datetime

from tzafon import Lightcone

client = Lightcone()

PRICE_FILE = "prices.json"
URL = "https://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html"

def load_prices():
    if os.path.exists(PRICE_FILE):
        with open(PRICE_FILE) as f:
            return json.load(f)
    return []

def save_price(price, screenshot):
    prices = load_prices()
    prices.append({
        "price": price,
        "screenshot": screenshot,
        "timestamp": datetime.now().isoformat(),
    })
    with open(PRICE_FILE, "w") as f:
        json.dump(prices, f, indent=2)
    return prices

def check_price():
    with client.computer.create(kind="browser") as computer:
        computer.navigate(URL)
        computer.wait(2)

        html_result = computer.html()
        html_content = computer.get_html_content(html_result)
        match = re.search(r'price_color">£([\d.]+)<', html_content)
        price = float(match.group(1)) if match else None

        # Take a screenshot as proof
        result = computer.screenshot()
        screenshot_url = computer.get_screenshot_url(result)

        return price, screenshot_url

# Run the check
price, screenshot = check_price()
prices = save_price(price, screenshot)
print(f"Current price: £{price}")
print(f"Screenshot: {screenshot}")

# Compare with previous price
if len(prices) > 1:
    previous = prices[-2]["price"]
    if price < previous:
        print(f"PRICE DROP! £{previous} → £{price} (save £{previous - price:.2f})")
    elif price > previous:
        print(f"Price increased: £{previous} → £{price}")
    else:
        print("Price unchanged")
else:
    print("First check — will compare on next run")
```

price_tracker.ts

```typescript
import Lightcone from "@tzafon/lightcone";
import { readFileSync, writeFileSync, existsSync } from "fs";

const client = new Lightcone();

const PRICE_FILE = "prices.json";
const URL = "https://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html";

interface PriceEntry {
  price: number;
  screenshot: string;
  timestamp: string;
}

function loadPrices(): PriceEntry[] {
  if (existsSync(PRICE_FILE)) {
    return JSON.parse(readFileSync(PRICE_FILE, "utf-8"));
  }
  return [];
}

function savePrice(price: number, screenshot: string): PriceEntry[] {
  const prices = loadPrices();
  prices.push({ price, screenshot, timestamp: new Date().toISOString() });
  writeFileSync(PRICE_FILE, JSON.stringify(prices, null, 2));
  return prices;
}

async function checkPrice(): Promise<[number | null, string]> {
  const computer = await client.computers.create({ kind: "browser" });
  const id = computer.id!;
  try {
    await client.computers.navigate(id, { url: URL });
    await new Promise((r) => setTimeout(r, 2000));

    const htmlResult = await client.computers.html(id);
    const html = htmlResult.result?.html_content as string;
    const match = html.match(/price_color">£([\d.]+)</);
    const price = match ? parseFloat(match[1]) : null;

    // Take a screenshot as proof
    const result = await client.computers.screenshot(id);
    const screenshotUrl = result.result?.screenshot_url as string;

    return [price, screenshotUrl];
  } finally {
    await client.computers.delete(id);
  }
}

// Run the check
const [price, screenshot] = await checkPrice();
const prices = savePrice(price!, screenshot);
console.log(`Current price: £${price}`);
console.log(`Screenshot: ${screenshot}`);

// Compare with previous price
if (prices.length > 1) {
  const previous = prices[prices.length - 2].price;
  if (price! < previous) {
    console.log(`PRICE DROP! £${previous} → £${price} (save £${(previous - price!).toFixed(2)})`);
  } else if (price! > previous) {
    console.log(`Price increased: £${previous} → £${price}`);
  } else {
    console.log("Price unchanged");
  }
} else {
  console.log("First check — will compare on next run");
}
```

Run it twice. The first run records the baseline; the second run compares.

## Step 4: Use a persistent session for faster checks

Each run currently creates a new browser from scratch.
Use a [persistent session](/guides/computers/index.md#persistent-sessions) to save cookies and state, making subsequent checks faster.

```python
# First run: create and save the session
with client.computer.create(kind="browser", persistent=True) as computer:
    computer.navigate(URL)
    computer.wait(2)
    session_id = computer.id
    print(f"Session saved: {session_id}")

# Subsequent runs: reuse the session
with client.computer.create(
    kind="browser",
    environment_id=session_id,
) as computer:
    computer.navigate(URL)
    computer.wait(1)  # Faster — browser state is warm
    html_result = computer.html()
    html_content = computer.get_html_content(html_result)
    # ... extract price as before
```

```typescript
// First run: create and save the session
const session = await client.computers.create({
  kind: "browser",
  persistent: true,
});
await client.computers.navigate(session.id!, { url: URL });
await client.computers.delete(session.id!);
console.log(`Session saved: ${session.id}`);

// Subsequent runs: reuse the session
const restored = await client.computers.create({
  kind: "browser",
  environment_id: session.id!,
});
await client.computers.navigate(restored.id!, { url: URL });
// ... extract price as before
await client.computers.delete(restored.id!);
```

Persistent sessions save cookies, local storage, and browser cache. This is especially useful when tracking prices on sites that require login.

## Step 5: Run on a schedule

To check prices automatically, run your script on a schedule. Here are a few options:

**Cron (Linux/macOS):**

```sh
# Check every hour
0 * * * * cd /path/to/project && python price_tracker.py
```

**Task scheduler (programmatic):** You can also use [Trigger.dev](https://trigger.dev), [Inngest](https://inngest.com), or any task scheduler to run checks on a cadence.

## What you learned

In this tutorial, you:

1. **Created browser sessions** with `client.computer.create()` and the context manager pattern
2. **Navigated to pages** and waited for them to load
3. **Extracted HTML content** with `computer.html()` and parsed it for specific data
4. **Took screenshots** as visual records of page state
5. **Used persistent sessions** to save and restore browser state across runs
6. **Built a complete workflow** that detects and reports changes over time

## Next steps

- [**Web scraping**](/use-cases/web-scraping/index.md) — more scraping patterns, including pagination and bot detection
- [**Dashboard monitoring**](/use-cases/dashboard-monitoring/index.md) — the same pattern applied to internal dashboards
- [**Run an agent**](/guides/run-an-agent/index.md) — let AI extract data instead of writing regex
- [**Playwright integration**](/integrations/playwright/index.md) — use CSS selectors for more reliable extraction