How to Create an AI Agent That Monitors Competitors in 2026

March 15, 2026 · AI Automation, Competitor Analysis, Solopreneur

Monitoring competitors shouldn’t be a full-time job. In 2026, you can build a lightweight AI agent that watches competitors’ sites, pricing pages, newsletters, and social feeds, then sends you clean summaries on a schedule. This tutorial shows a practical, low-cost stack that I actually use: cron + scraping + change detection + an LLM to summarize. No bloated SaaS required.

You’ll end up with a system that checks sources daily, stores raw snapshots, detects meaningful changes, and delivers a weekly report to your inbox or Slack. This is perfect for solopreneurs and indie hackers who need competitive intel without spending $200/month on monitoring tools.

What this agent will do (scope and outputs)

- Check each competitor source on a daily schedule
- Store timestamped raw snapshots you can audit later
- Detect meaningful changes and filter out noise
- Deliver an LLM-written summary to your inbox or Slack once a week

Time to set up: 2–3 hours the first time. Ongoing cost: $5–$20/month (mainly LLM usage and a small server).

Architecture overview (simple and durable)

The pipeline has four stages, each a small script:

- Fetcher: pulls each source on a cron schedule
- Snapshot store: saves timestamped raw copies to disk
- Change detector: compares the newest snapshot to the previous one
- Summarizer: asks an LLM to describe what changed, then ships a weekly report

Use Playwright when pages are dynamic; use plain fetch for static pages. The agent stores snapshots so you can always review raw changes.

Step 1: Define your competitor list and sources

Start simple. Pick 3–5 competitors and 2–4 sources per competitor:

- Pricing pages (usually the highest-signal source)
- Changelogs and blogs
- Newsletters
- Social feeds

Then create a config file named competitors.json:

{
  "competitors": [
    {
      "name": "AcmeAI",
      "sources": [
        {"type": "page", "url": "https://acmeai.com/pricing"},
        {"type": "page", "url": "https://acmeai.com/changelog"},
        {"type": "social", "url": "https://x.com/acmeai"}
      ]
    },
    {
      "name": "BetaOps",
      "sources": [
        {"type": "page", "url": "https://betaops.io"},
        {"type": "page", "url": "https://betaops.io/blog"}
      ]
    }
  ]
}

Keep the list short until the system works. You can add more later.
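A malformed config is easier to catch at startup than mid-run. Here's a minimal validation sketch; the file name and shape match the example above, but the helper itself is illustrative, not part of the core scripts:

```javascript
// validate-config.js — fail fast on a malformed competitors.json.
// Illustrative helper; call it before the monitoring loop starts.
export function validateConfig(config) {
  const errors = [];
  if (!Array.isArray(config.competitors)) {
    errors.push("config.competitors must be an array");
    return errors;
  }
  for (const c of config.competitors) {
    if (!c.name) errors.push("competitor missing name");
    for (const s of c.sources ?? []) {
      try {
        new URL(s.url); // throws on malformed URLs
      } catch {
        errors.push(`${c.name}: invalid URL "${s.url}"`);
      }
      if (!["page", "social"].includes(s.type)) {
        errors.push(`${c.name}: unknown source type "${s.type}"`);
      }
    }
  }
  return errors;
}
```

Call it right after reading the file and exit if the returned array isn't empty.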

Step 2: Set up a minimal Node.js project

Initialize the project. The scripts below use ES module imports and top-level await, so set "type": "module" in package.json, and install the browser Playwright needs:

mkdir competitor-agent
cd competitor-agent
npm init -y
npm pkg set type=module
npm install playwright node-fetch sqlite3 diff dotenv
npx playwright install chromium

(sqlite3 and diff aren't used by the core scripts; keep them if you plan to upgrade storage or diffing later.)

Create folders:

mkdir data snapshots reports
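If the agent ever runs before these folders exist (a fresh clone, a new server), the first write crashes. A small guard you can call at startup, purely as an illustrative helper:

```javascript
// ensure-dirs.js — create the working folders if they're missing, so a fresh
// checkout doesn't crash on the first snapshot write. Base defaults to cwd.
import fs from "node:fs";
import path from "node:path";

export function ensureDirs(base = ".") {
  for (const dir of ["data", "snapshots", "reports"]) {
    fs.mkdirSync(path.join(base, dir), { recursive: true }); // no-op if present
  }
}
```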

Step 3: Build the fetcher (static + dynamic)

We’ll create a small script that fetches a page and extracts visible text.

// fetcher.js
import fs from "fs";
import fetch from "node-fetch";
import { chromium } from "playwright";

export async function fetchPage(url) {
  // Simple heuristic: use Playwright for dynamic pages
  const useBrowser = url.includes("x.com") || url.includes("/pricing");

  if (!useBrowser) {
    const res = await fetch(url);
    const html = await res.text();
    return { html, text: stripHtml(html) };
  }

  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle" });
  const html = await page.content();
  const text = await page.evaluate(() => document.body.innerText);
  await browser.close();

  return { html, text };
}

function stripHtml(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop inline JS, not just its tags
    .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop inline CSS
    .replace(/<[^>]*>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

This is intentionally minimal. You can add selectors later for cleaner text.
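As one example of "cleaner text": a hypothetical helper that keeps only the lines between a heading you care about and the next blank line, so noise elsewhere on the page never reaches the diff. The marker strings are assumptions you'd tune per competitor:

```javascript
// extract-section.js — keep only the lines after startMarker, up to the next
// blank line (or endMarker if given). Case-insensitive. Purely illustrative.
export function extractSection(text, startMarker, endMarker = null) {
  const lines = text.split("\n");
  const start = lines.findIndex(l =>
    l.toLowerCase().includes(startMarker.toLowerCase()));
  if (start === -1) return ""; // marker not found: snapshot nothing
  const rest = lines.slice(start + 1);
  const end = rest.findIndex(l =>
    endMarker ? l.toLowerCase().includes(endMarker.toLowerCase()) : l.trim() === "");
  return (end === -1 ? rest : rest.slice(0, end)).join("\n").trim();
}
```

For a pricing page you might call `extractSection(text, "Pricing")` and snapshot only that slice.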

Step 4: Store snapshots and detect changes

We’ll store a timestamped snapshot per source and compare to the last one.

// monitor.js
import fs from "fs";
import path from "path";
import { fetchPage } from "./fetcher.js";

const config = JSON.parse(fs.readFileSync("./competitors.json", "utf8"));

function snapshotPath(name, url, ts) {
  const safe = url.replace(/[^a-z0-9]/gi, "_").toLowerCase();
  // ISO timestamps contain ":", which is invalid in Windows filenames
  return `./snapshots/${name}_${safe}_${ts.replace(/:/g, "-")}.json`;
}

function getLastSnapshot(name, url) {
  const safe = url.replace(/[^a-z0-9]/gi, "_").toLowerCase();
  // Require the separator and no further underscores after it, so the prefix
  // for "betaops_io" doesn't also match files for "betaops_io_blog".
  const prefix = `${name}_${safe}_`;
  const files = fs.readdirSync("./snapshots")
    .filter(f => f.startsWith(prefix) && !f.slice(prefix.length).includes("_"));
  if (!files.length) return null;
  const latest = files.sort().pop();
  return JSON.parse(fs.readFileSync(path.join("./snapshots", latest), "utf8"));
}

function diffText(oldText, newText) {
  if (!oldText) return { changed: true, summary: "First snapshot" };
  if (oldText === newText) return { changed: false, summary: "No change" };
  return { changed: true, summary: "Content changed" };
}

(async () => {
  const now = new Date().toISOString();

  for (const c of config.competitors) {
    for (const s of c.sources) {
      const { html, text } = await fetchPage(s.url);
      const last = getLastSnapshot(c.name, s.url);
      const diff = diffText(last?.text, text);

      const snapshot = { name: c.name, url: s.url, ts: now, html, text };
      fs.writeFileSync(snapshotPath(c.name, s.url, now), JSON.stringify(snapshot));

      if (diff.changed) {
        fs.appendFileSync("./data/changes.log", `${now}\t${c.name}\t${s.url}\n`);
      }
    }
  }
})();

This version only marks changes. Later, we’ll summarize them.
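Exact string equality is also brittle: a rotated timestamp or reflowed whitespace reads as a "change". A light normalization pass before comparing cuts much of that noise. This is a sketch with illustrative regexes, not a complete solution; feed the normalized text into `diffText` while still storing the raw text in the snapshot:

```javascript
// normalize.js — collapse whitespace and mask volatile tokens before diffing.
// The date/time patterns are illustrative; tune them for your sources.
export function normalizeForDiff(text) {
  return text
    .replace(/\d{4}-\d{2}-\d{2}/g, "<date>")          // ISO dates rotate daily
    .replace(/\b\d{1,2}:\d{2}(:\d{2})?\b/g, "<time>") // clock times
    .replace(/\s+/g, " ")                              // collapse whitespace
    .trim()
    .toLowerCase();
}
```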

Step 5: Summarize changes with an LLM

Once you detect change, you can diff the old/new text and ask an LLM for a concise summary.

// summarize.js
import fs from "fs";
import fetch from "node-fetch";
import dotenv from "dotenv";

dotenv.config();

export async function summarizeChange(oldText, newText) {
  const prompt = `Summarize the key differences between these two versions of a competitor page.
Focus on pricing, features, positioning, and CTAs.
Old:\n${oldText.slice(0, 4000)}\nNew:\n${newText.slice(0, 4000)}\n`;

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      model: "gpt-4.1-mini",
      messages: [{ role: "user", content: prompt }],
      max_tokens: 250
    })
  });

  const data = await res.json();
  if (!res.ok) {
    throw new Error(`OpenAI API error ${res.status}: ${JSON.stringify(data)}`);
  }
  return data.choices[0].message.content.trim();
}

Swap in Anthropic or local models if preferred. The prompt is intentionally specific to competitive intel.
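Sending two 4,000-character blobs works, but you can shrink the prompt (and the bill) by sending only the lines that appear in one version and not the other. A dependency-free sketch of that idea; the diff npm package from Step 2 does this more robustly:

```javascript
// changed-lines.js — naive line-level diff: which trimmed, non-empty lines
// were added or removed between versions. Order within a version is ignored.
export function changedLines(oldText, newText) {
  const oldSet = new Set(oldText.split("\n").map(l => l.trim()));
  const newSet = new Set(newText.split("\n").map(l => l.trim()));
  const removed = [...oldSet].filter(l => l && !newSet.has(l));
  const added = [...newSet].filter(l => l && !oldSet.has(l));
  return { added, removed };
}
```

You could then build the prompt from `added` and `removed` instead of the full text slices.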

Step 6: Generate a weekly report

Aggregate all changes and send a report once per week. This is where the system becomes truly useful.

// weekly-report.js
import fs from "fs";
import path from "path";
import { summarizeChange } from "./summarize.js";

function getSnapshots(name, url) {
  const safe = url.replace(/[^a-z0-9]/gi, "_").toLowerCase();
  // Exact-source match: exclude files for URLs that merely share this prefix
  const prefix = `${name}_${safe}_`;
  const files = fs.readdirSync("./snapshots")
    .filter(f => f.startsWith(prefix) && !f.slice(prefix.length).includes("_"))
    .sort();
  if (files.length < 2) return null;
  const latest = JSON.parse(fs.readFileSync(path.join("./snapshots", files[files.length - 1]), "utf8"));
  const prev = JSON.parse(fs.readFileSync(path.join("./snapshots", files[files.length - 2]), "utf8"));
  return { latest, prev };
}

(async () => {
  const config = JSON.parse(fs.readFileSync("./competitors.json", "utf8"));
  let report = "Weekly Competitor Report\n\n";

  for (const c of config.competitors) {
    report += `## ${c.name}\n`;
    for (const s of c.sources) {
      const pair = getSnapshots(c.name, s.url);
      if (!pair) continue;
      const summary = await summarizeChange(pair.prev.text, pair.latest.text);
      report += `- ${s.url}\n  ${summary}\n`;
    }
    report += "\n";
  }

  const dated = `./reports/report_${new Date().toISOString().slice(0, 10)}.txt`;
  fs.writeFileSync(dated, report);
  fs.writeFileSync("./reports/latest.txt", report); // stable path for the notify cron job
})();

This compares each source's two most recent snapshots and writes a dated text report. With daily checks, that means yesterday versus today; if you want true week-over-week diffs, pick `prev` from roughly seven days back instead of the second-newest file. Next step is sending it out.

Step 7: Send the report (email or Slack)

Here’s a minimal Slack webhook example.

// notify.js
import fs from "fs";
import fetch from "node-fetch";
import dotenv from "dotenv";

dotenv.config(); // load SLACK_WEBHOOK_URL from .env (cron won't have it otherwise)

const webhook = process.env.SLACK_WEBHOOK_URL;
const report = fs.readFileSync(process.argv[2], "utf8");

await fetch(webhook, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: report })
});

You can also use SMTP, SendGrid, or Resend. I recommend Slack if you review reports daily.
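One caveat: Slack rejects or truncates very large payloads, so split long reports before posting. A sketch that splits on line boundaries; the 3,500-character budget is a conservative assumption, not a documented webhook limit:

```javascript
// chunk-report.js — split a long report into Slack-sized chunks on line
// boundaries. A single line longer than maxLen becomes its own oversized
// chunk. The 3500-char default is a conservative guess; adjust as needed.
export function chunkReport(report, maxLen = 3500) {
  const chunks = [];
  let current = "";
  for (const line of report.split("\n")) {
    if (current && current.length + line.length + 1 > maxLen) {
      chunks.push(current);
      current = "";
    }
    current = current ? `${current}\n${line}` : line;
  }
  if (current) chunks.push(current);
  return chunks;
}
```

In notify.js you'd loop over the chunks and POST each one to the webhook in order.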

Step 8: Schedule the agent with cron

Daily monitoring and weekly report generation:

# Daily checks at 7am (cd first so the scripts' relative ./snapshots paths resolve)
0 7 * * * cd /path/competitor-agent && /usr/bin/node monitor.js

# Weekly report Sunday 8am
0 8 * * 0 cd /path/competitor-agent && /usr/bin/node weekly-report.js && \
/usr/bin/node notify.js reports/latest.txt

Adjust timing to your timezone and review cadence.

Noise control: avoid false positives

Most change detectors produce noise. Use these filters:

- Compare extracted sections (pricing tables, feature lists) rather than whole pages
- Normalize whitespace and strip rotating elements like dates before diffing
- Ignore changes below a minimum size threshold

For example, only alert when the text length changes by more than 3%.

const delta = Math.abs(oldText.length - newText.length) / oldText.length;
if (delta < 0.03) return { changed: false };
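Length delta alone misses same-length edits (a price changing from $29 to $39) and divides by zero on an empty first snapshot. A slightly sturdier guard, with thresholds that are illustrative rather than tuned:

```javascript
// noise-filter.js — a change counts only if text length moved by more than
// the threshold, or the set of non-empty lines differs. Thresholds are
// illustrative defaults, not tuned values.
export function isMeaningfulChange(oldText, newText, threshold = 0.03) {
  if (!oldText) return true;          // first snapshot always counts
  if (oldText === newText) return false;
  const delta = Math.abs(oldText.length - newText.length) / oldText.length;
  if (delta >= threshold) return true;
  // similar length: fall back to comparing the sets of trimmed lines,
  // which catches same-length edits like a price change
  const lines = t => new Set(t.split("\n").map(l => l.trim()).filter(Boolean));
  const a = lines(oldText), b = lines(newText);
  if (a.size !== b.size) return true;
  for (const l of a) if (!b.has(l)) return true;
  return false;
}
```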

Cost breakdown (realistic numbers)

| Item        | Monthly cost | Notes                   |
|-------------|--------------|-------------------------|
| Small VPS   | $5–10        | Hetzner or DigitalOcean |
| LLM API     | $5–15        | Depends on volume       |
| Slack/Email | $0           | Free webhooks or SMTP   |

Total: $10–25/month for a solid competitor intelligence system.

Practical tips from real use

- Review the raw snapshot whenever an LLM summary looks off; the summaries are leads, not ground truth
- Keep the competitor list short: 3–5 competitors stays readable, fifteen becomes noise
- Pricing pages change most often and carry the most signal; check those first when an alert fires

Optional upgrades (when you’re ready)

- Move snapshots from JSON files into SQLite (the sqlite3 package from Step 2) once you have thousands of them
- Use the diff package for proper line-level diffs instead of whole-text comparison
- Add more competitors and sources once the noise filters are holding up

Relevant products

If you want prebuilt templates, automation workflows, and prompts for competitor analysis and AI ops, check the Gumroad shop: opsdesk0.gumroad.com. I keep it updated with practical assets I use in real automation stacks.

Common pitfalls (and how to avoid them)

- Alert fatigue: without the noise filters above, daily “changes” will train you to ignore the reports
- Dynamic pages returning empty text: if a snapshot looks suspiciously short, move that source to the Playwright path
- Scraping non-public content: stick to public pages; no logins, no paywall bypassing

Wrap-up

A competitor monitoring agent is one of the highest ROI automations you can build. It’s low-cost, useful every week, and improves your strategic decisions without drowning you in noise. Start simple, keep it focused, and let the AI do the boring diff + summary work.

FAQ

Can I build this without Playwright? Yes. If your competitors’ pages are mostly static, plain HTTP fetch is enough. Use Playwright only for dynamic pages or sites that hide content behind JS.

How often should I run the agent? Weekly reporting with daily checks is ideal for most solopreneurs. Daily reports are usually too noisy unless you’re in a fast-moving niche.

Is this legal and ethical? Generally yes, if you’re monitoring public pages and content, though terms of service vary by site. Avoid scraping private data, logins, or bypassing paywalls.

What’s the fastest way to reduce noise? Filter for specific sections like pricing tables or feature lists, and ignore small changes under 3–5% text delta.

Can I run this on a local machine instead of a VPS? Yes. A local Mac mini or always-on desktop works fine. Just make sure cron runs and your machine doesn’t sleep.

Resources & Tools

Level up your solopreneur stack:

- AI Automation Prompt Pack (520+ prompts)
- AI Engineering by Chip Huyen
