We already ran the server-side link audit (linkinator). Please add:

1) A real SPA crawl using Playwright

Install Playwright (dev) and add a script to run a headless crawl that executes JS, clicks links, and records any client-side 404s.

Crawl public pages plus a few deep links. If credentials are available, optionally log in and crawl protected routes.

Output a JSON report spa-link-report.json with:

- visited URLs
- status: "ok" | "not-found" | "blocked"
- the source page each URL was linked from (if discovered during the crawl)
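For reference, the report might look like this (the URLs, counts, and the `/old-pricing` path are illustrative, not real audit results):

```json
{
  "origin": "https://your-domain.replit.app",
  "total": 3,
  "items": [
    { "url": "https://your-domain.replit.app/", "status": "ok", "from": null },
    { "url": "https://your-domain.replit.app/pricing", "status": "ok", "from": null },
    { "url": "https://your-domain.replit.app/old-pricing", "status": "not-found", "from": "https://your-domain.replit.app/" }
  ]
}
```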

package.json

"devDependencies": {
+  "@playwright/test": "^1.47.0"
}
"scripts": {
+  "spa:audit": "node scripts/spa-audit.mjs"
}


scripts/spa-audit.mjs

import { chromium } from '@playwright/test';
import { writeFileSync } from 'fs';

const ORIGIN = process.env.SPA_ORIGIN || 'http://localhost:5000';
const CREDS = {
  email: process.env.SPA_EMAIL || null,
  password: process.env.SPA_PASSWORD || null,
};

const startPaths = [
  '/', '/pricing', '/resources', '/faq',
  '/business-assets/logo-templates',
  '/brand-development/ai-logo-creator'
];

const queue = new Map(startPaths.map(p => [new URL(p, ORIGIN).toString(), null])); // url -> from
const visited = new Map(); // url -> { status, from }

// internal = same-origin absolute URL or a root-relative path
// (exclude protocol-relative '//host' links, which point off-origin)
const isInternal = (href) =>
  href.startsWith(ORIGIN) || (href.startsWith('/') && !href.startsWith('//'));
// normalize to an absolute URL and drop the hash so /page and /page#section
// are treated as the same page
const norm = (href) => {
  const u = new URL(href, ORIGIN);
  u.hash = '';
  return u.toString();
};

const browser = await chromium.launch();
const ctx = await browser.newContext();
const page = await ctx.newPage();

// optional login
if (CREDS.email && CREDS.password) {
  try {
    await page.goto(new URL('/auth/login', ORIGIN).toString(), { waitUntil: 'domcontentloaded' });
    await page.fill('input[type="email"]', CREDS.email);
    await page.fill('input[type="password"]', CREDS.password);
    await page.click('button:has-text("Log in"), button:has-text("Sign in")');
    await page.waitForLoadState('networkidle', { timeout: 10000 }).catch(()=>{});
  } catch (e) { /* ignore if no login page */ }
}

async function visit(url, from=null) {
  if (visited.has(url)) return;
  try {
    await page.goto(url, { waitUntil: 'domcontentloaded' });
    // give the SPA a chance to render before checking for a 404
    await page.waitForLoadState('networkidle', { timeout: 10000 }).catch(() => {});
    // detect a client-side 404 via the data attribute (preferred) or heading text
    const notFound =
      (await page.locator('[data-not-found]').count()) > 0 ||
      (await page.getByText('Page not found').count()) > 0;
    visited.set(url, { status: notFound ? 'not-found' : 'ok', from });

    if (notFound) return;
    const links = await page.locator('a[href]').all();
    for (const a of links) {
      const href = await a.getAttribute('href');
      if (!href) continue;
      if (!isInternal(href)) continue;
      const abs = norm(href);
      if (!visited.has(abs) && !queue.has(abs)) queue.set(abs, url);
    }
  } catch (e) {
    visited.set(url, { status: 'blocked', from });
  }
}

while (queue.size) {
  const [next, from] = queue.entries().next().value;
  queue.delete(next);
  await visit(next, from);
}

const report = Array.from(visited.entries()).map(([url, meta]) => ({ url, ...meta }));
writeFileSync('spa-link-report.json', JSON.stringify({ origin: ORIGIN, total: report.length, items: report }, null, 2));
console.log(`\nSPA audit complete for ${ORIGIN}`);
const broken = report.filter(r => r.status === 'not-found');
console.log(`Checked: ${report.length} pages | 404s: ${broken.length}`);
broken.slice(0, 25).forEach((b,i)=>console.log(`${i+1}. ${b.url} (from: ${b.from || '-'})`));
await browser.close();


How to run

# one-time setup: install the dev dependency and a browser binary
npm install --save-dev @playwright/test
npx playwright install chromium

# public-only crawl
SPA_ORIGIN=https://your-domain.replit.app node scripts/spa-audit.mjs

# with auth (optional)
SPA_ORIGIN=https://your-domain.replit.app \
SPA_EMAIL=you@example.com SPA_PASSWORD=secret \
node scripts/spa-audit.mjs

2) Add a lightweight runtime 404 reporter

Ensure there is a distinct NotFoundPage that renders a wrapper with data-not-found.

On mount of NotFoundPage, console.warn('[404]', location.pathname) and (optionally) window.dispatchEvent(new CustomEvent('spa-404', { detail: location.pathname })) so we can later hook analytics.
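To illustrate how a later analytics hook could subscribe, here is a minimal sketch of the event pattern. In the browser the target would be `window` and the event a `CustomEvent` with `detail`; a plain `EventTarget` and an `Event` subclass stand in here so the snippet runs anywhere:

```javascript
// Sketch of the 'spa-404' event pattern. In the browser, NotFoundPage would
// dispatch via window.dispatchEvent(new CustomEvent('spa-404', { detail: pathname }));
// a plain EventTarget and an Event subclass carrying `detail` stand in here.
const target = new EventTarget();

// Event subclass mirroring CustomEvent's `detail` payload.
class Spa404Event extends Event {
  constructor(pathname) {
    super('spa-404');
    this.detail = pathname;
  }
}

// Analytics hook: collect every 404 pathname that gets reported.
const reported = [];
target.addEventListener('spa-404', (e) => reported.push(e.detail));

// What NotFoundPage would do on mount (path is a hypothetical example):
target.dispatchEvent(new Spa404Event('/missing-page'));

console.log(reported); // the collected 404 paths
```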

client/src/pages/NotFoundPage.tsx

+import { useEffect } from 'react';
+
export default function NotFoundPage() {
+  useEffect(() => { console.warn('[404]', window.location.pathname); }, []);
  return (
-    <div className="mx-auto max-w-3xl p-8 text-center">
+    <div className="mx-auto max-w-3xl p-8 text-center" data-not-found>
      <h1 className="text-2xl font-semibold mb-2">Page not found</h1>
      <p className="text-gray-600 mb-6">The link you followed may be broken or the page may have been removed.</p>
      <div className="flex gap-2 justify-center">
        <a className="btn btn-primary" href="/dashboard">Go to Dashboard</a>
        <a className="btn btn-outline" href="/">Back to Home</a>
      </div>
    </div>
  );
}

Acceptance

- spa-link-report.json is generated and lists any client-side 404s.
- NotFoundPage has data-not-found so the crawler can detect it reliably.
- (Optional) If any 404s are found, update links or add redirects and re-run until 0.
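For the redirect step, one option is a small client-side redirect map consulted before rendering NotFoundPage. A minimal sketch, with hypothetical paths (fill in real entries from spa-link-report.json):

```javascript
// Hypothetical redirect map for paths the audit flags as 404s.
// These entries are illustrative, not real routes from the report.
const redirects = new Map([
  ['/old-pricing', '/pricing'],
  ['/logo-templates', '/business-assets/logo-templates'],
]);

// Returns the redirect target for a pathname, or null if none is registered.
function resolveRedirect(pathname) {
  return redirects.get(pathname) ?? null;
}

// In the SPA router (or NotFoundPage itself) this might be used as:
//   const target = resolveRedirect(window.location.pathname);
//   if (target) window.location.replace(target);
```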