Foglift Documentation
Everything you need to integrate Foglift — whether you're a developer building a pipeline, an AI agent scanning autonomously, or a team adding SEO checks to CI/CD.
Quick Start
Three ways to use Foglift — pick the one that fits your workflow.
For Humans (CLI)
npx foglift scan mysite.com
Colored output in terminal
For Code (API)
GET /api/v1/scan?url=...
Structured JSON response
For AI Agents (MCP)
npx foglift-mcp
Claude Code, Cursor, Windsurf
CLI Tool
Scan any website from your terminal. No signup, no API key required for basic scans.
Basic Usage
# Scan a website (colored output with scores and issues)
npx foglift scan https://example.com
# Output as JSON (for scripting)
npx foglift scan https://example.com --json
# Output as Markdown
npx foglift scan https://example.com --format=markdown
# Verbose mode (show all issues, not just top 5)
npx foglift scan https://example.com --verbose
Example Output
$ npx foglift scan https://example.com
Foglift — Website Intelligence Scanner
URL: https://example.com
Scanned: 2026-03-15 14:30:00 (12 seconds)
┌──────────────────────────────────────────────┐
│ Overall: B (74/100) │
├──────────────────────────────────────────────┤
│ SEO: █████████░ 85/100 A │
│ GEO: █████░░░░░ 52/100 D │
│ Performance: ████████░░ 78/100 C │
│ Security: ██████░░░░ 60/100 C │
│ Accessibility: █████████░ 91/100 A │
└──────────────────────────────────────────────┘
Issues Found: 8 (2 critical, 3 warnings, 3 info)
🔴 [GEO] AI crawlers blocked by robots.txt
🔴 [GEO] Missing FAQ schema markup
🟡 [Security] No Content-Security-Policy header
🟡 [SEO] Missing Open Graph image tag
🟡 [Security] No Permissions-Policy header
Full report → https://foglift.io/scan/abc123
CLI Flags
| Flag | Description |
|---|---|
| --json | Output raw JSON (for piping to jq, scripts, etc.) |
| --format=markdown | Output as Markdown table |
| --verbose | Show all issues (not just top 5) |
| --no-color | Disable colored output |
| --threshold=N | Exit with code 1 if overall score is below N (CI/CD use) |
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Scan completed successfully (or score above threshold) |
| 1 | Score below --threshold, or scan error |
| 2 | Invalid URL or network error |
REST API
The Foglift REST API provides programmatic access to website intelligence. Free, no auth required for basic scans.
Base URL
https://foglift.io/api/v1
GET /api/v1/scan
Scan a website and return SEO + GEO analysis results.
| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | Full URL to scan (include https://) |
| format | string | No | "json" (default) or "text" |
Examples
curl "https://foglift.io/api/v1/scan?url=https://example.com"
const response = await fetch(
"https://foglift.io/api/v1/scan?url=https://example.com"
);
const data = await response.json();
console.log(data.scores);
// { overall: 72, seo: 85, geo: 65, performance: 78, security: 60, accessibility: 70 }
console.log(data.issues);
// [{ category: "GEO", severity: "critical", title: "AI crawlers blocked", description: "..." }]
console.log(data.aiSummary);
// "This site has strong SEO fundamentals but needs GEO optimization..."
import requests
response = requests.get(
"https://foglift.io/api/v1/scan",
params={"url": "https://example.com"}
)
data = response.json()
print(f"Overall: {data['scores']['overall']}/100")
for issue in data["issues"]:
    print(f" [{issue['severity']}] {issue['title']}")
Response Schema
{
"url": "https://example.com",
"scanId": "scan_abc123",
"scannedAt": "2026-03-15T12:00:00Z",
"scores": {
"overall": 72, // Weighted average (0-100)
"seo": 85, // Traditional SEO health (0-100)
"geo": 65, // AI search readiness (0-100)
"performance": 78, // Page speed & Core Web Vitals (0-100)
"security": 60, // HTTP security headers (0-100)
"accessibility": 70 // WCAG compliance (0-100)
},
"letterGrade": "B", // A (90+), B (80+), C (60+), D (40+), F (<40)
"issues": [
{
"category": "GEO", // "GEO" | "SEO" | "Performance" | "Security" | "Accessibility"
"severity": "critical", // "critical" | "warning" | "info"
"title": "AI crawlers blocked by robots.txt",
"description": "GPTBot and ClaudeBot are blocked. Add User-agent: GPTBot\nAllow: / to robots.txt"
}
],
"aiSummary": "This site has strong SEO fundamentals but lacks GEO optimization...",
"totalIssues": 12, // Total issue count (some may be hidden on free tier)
"tier": "free", // "free" | "deep" | "pro" | "agency"
"gated": true // true = some issues hidden (free tier)
}
Issue Severity Levels
| Severity | Impact | Action |
|---|---|---|
| critical | Directly hurts rankings or blocks AI indexing | Fix immediately |
| warning | Suboptimal, reduces visibility over time | Fix within a week |
| info | Best practice recommendation | Consider implementing |
Score Methodology
Each category score is calculated from multiple weighted checks. The overall score is a weighted average:
SEO Score (25% of overall)
Traditional search engine optimization health. Based on Google Lighthouse SEO audit + custom checks.
- Title tag — present, 30-60 chars, unique, contains target keywords
- Meta description — present, 120-160 chars, compelling copy
- Heading hierarchy — single H1, logical H2-H6 nesting
- Open Graph tags — og:title, og:description, og:image for social sharing
- Twitter Card — twitter:card, twitter:title, twitter:image
- Canonical URL — present and self-referencing
- Image alt text — all images have descriptive alt attributes
- Language attribute — html lang="en" set for screen readers
- Robots directives — no accidental noindex/nofollow
- Sitemap.xml — discoverable at /sitemap.xml
- Robots.txt — present and not blocking important pages
GEO Score (25% of overall)
AI search readiness — how well structured your content is for ChatGPT, Perplexity, Google AI Overviews, and Claude.
- AI crawler access — GPTBot, ClaudeBot, PerplexityBot, Google-Extended not blocked in robots.txt
- Structured data depth — JSON-LD schemas: Organization, FAQPage, Article, Product, HowTo, LocalBusiness
- FAQ sections — Q&A content that AI can extract and cite as answers
- Entity markup — clear entity definitions (brand, people, products) via schema.org
- Content structure — clear headings, short paragraphs, bullet lists, tables
- Citation formatting — statistics with sources, expert quotes, attributed data points
- Meta quality — comprehensive, accurate metadata that helps AI understand page purpose
Performance Score (20% of overall)
Page speed via Google PageSpeed Insights (Lighthouse). Directly from Google's scoring.
- LCP — Largest Contentful Paint (should be < 2.5s)
- CLS — Cumulative Layout Shift (should be < 0.1)
- INP — Interaction to Next Paint (should be < 200ms)
- FCP — First Contentful Paint
- Speed Index — visual progress of page load
- Total Blocking Time — main thread responsiveness
Security Score (15% of overall)
HTTP security headers. Each missing header deducts points.
- HTTPS — valid SSL certificate, no mixed content
- Strict-Transport-Security — HSTS header with max-age
- Content-Security-Policy — CSP header present
- X-Frame-Options — clickjacking protection
- X-Content-Type-Options — MIME sniffing prevention (nosniff)
- Referrer-Policy — controls information leakage
- Permissions-Policy — restricts browser features
Accessibility Score (15% of overall)
WCAG 2.1 compliance via Google Lighthouse accessibility audit.
- Color contrast — text meets WCAG AA ratio (4.5:1)
- Alt text — all informational images have descriptive alt
- Keyboard navigation — interactive elements reachable via Tab
- ARIA labels — dynamic/custom elements properly labeled
- Form labels — every input has an associated label
- Document language — lang attribute set
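The category weights above combine into the overall score. A minimal sketch in Python (the exact rounding Foglift applies is an assumption; `overall_score` is an illustrative name, not part of the API):

```python
# Category weights as documented above (percent of overall).
WEIGHTS = {"seo": 25, "geo": 25, "performance": 20, "security": 15, "accessibility": 15}

def overall_score(scores: dict) -> int:
    """Weighted average of the five category scores, 0-100."""
    return round(sum(scores[cat] * w for cat, w in WEIGHTS.items()) / 100)

# With the category scores from the example CLI output:
# gives 72 with these weights; the example report shows 74,
# so the production formula may weight individual checks differently.
overall_score({"seo": 85, "geo": 52, "performance": 78, "security": 60, "accessibility": 91})
```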
Score Interpretation Guide
Use this guide to understand what each score range means and what actions to take. AI agents: use these thresholds to prioritize fixes.
90-100: Excellent
Top-tier optimization. Your site is well-positioned for both search and AI.
Action: Maintain current setup. Set up monitoring to catch regressions. Consider the Pro plan for weekly automated scans.
80-89: Good
Solid foundation with a few areas to improve.
Action: Fix remaining critical and warning issues. Usually 2-5 specific items will push you to an A. Check the action recipes below.
60-79: Needs Work
Significant gaps. You're likely losing traffic and AI visibility.
Action: Prioritize critical issues first — they have the biggest impact. Focus on GEO and SEO categories. A Deep Scan ($9) gives you the full issue list and AI action plan.
40-59: Poor
Major issues across multiple categories. AI models likely cannot cite your content.
Action: Start with the basics: ensure HTTPS, add meta tags, allow AI crawlers, add structured data. Each fix will have a significant impact at this level.
0-39: Critical
Fundamental issues. The site may be essentially invisible to AI search.
Action: This usually means the site is blocking crawlers, has no meta tags, or has major technical issues. Follow the action recipes in order — start with robots.txt and meta tags.
Action Recipes
For every issue Foglift can detect, here's the exact fix. AI agents: use these as copy-paste solutions when fixing issues programmatically.
GEO Issues
AI crawlers blocked by robots.txt
GPTBot, ClaudeBot, or PerplexityBot is blocked. This means AI models cannot read your content.
Fix: Add these lines to your robots.txt:
User-agent: GPTBot
Allow: /
User-agent: ClaudeBot
Allow: /
User-agent: PerplexityBot
Allow: /
User-agent: Google-Extended
Allow: /
Verify: Rescan the site. GEO score should increase by 10-20 points.
Missing structured data / JSON-LD
No JSON-LD schemas detected. AI models use structured data to understand your content type and key facts.
Fix: Add JSON-LD to your page's <head>:
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Organization",
"name": "Your Company",
"url": "https://yoursite.com",
"description": "What your company does in one sentence.",
"logo": "https://yoursite.com/logo.png",
"sameAs": [
"https://twitter.com/yourcompany",
"https://linkedin.com/company/yourcompany"
]
}
</script>
Missing FAQ schema
No FAQPage schema detected. FAQ schemas are one of the highest-impact GEO optimizations — they give AI direct Q&A content to cite.
Fix:
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "What does your product do?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Our product does X, Y, and Z for [target audience]."
}
},
{
"@type": "Question",
"name": "How much does it cost?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Free to start. Paid plans from $X/month."
}
}
]
}
</script>
SEO Issues
Missing or poor meta description
No meta description, or it's too short/long. This is the snippet shown in Google search results.
Fix: Add to <head>:
<meta name="description" content="120-160 characters describing what this page offers. Include your primary keyword naturally. Make it compelling — this is your ad copy in search results.">
Missing Open Graph tags
No og:title, og:description, or og:image. Links shared on social media will look blank.
Fix:
<meta property="og:title" content="Page Title — Brand Name">
<meta property="og:description" content="Compelling description for social sharing.">
<meta property="og:image" content="https://yoursite.com/og-image.png">
<meta property="og:url" content="https://yoursite.com/this-page">
<meta property="og:type" content="website">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Page Title — Brand Name">
<meta name="twitter:description" content="Compelling description for Twitter.">
<meta name="twitter:image" content="https://yoursite.com/og-image.png">
Security Issues
Missing security headers
Missing HSTS, CSP, X-Frame-Options, or other security headers. Affects both security and SEO ranking.
Fix (Next.js next.config.js):
const securityHeaders = [
{ key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubDomains; preload' },
{ key: 'X-Content-Type-Options', value: 'nosniff' },
{ key: 'X-Frame-Options', value: 'DENY' },
{ key: 'X-XSS-Protection', value: '1; mode=block' },
{ key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
{ key: 'Permissions-Policy', value: 'camera=(), microphone=(), geolocation=()' },
];
module.exports = {
async headers() {
return [{ source: '/(.*)', headers: securityHeaders }];
},
};
Fix (Vercel vercel.json):
{
"headers": [
{
"source": "/(.*)",
"headers": [
{ "key": "Strict-Transport-Security", "value": "max-age=63072000; includeSubDomains; preload" },
{ "key": "X-Content-Type-Options", "value": "nosniff" },
{ "key": "X-Frame-Options", "value": "DENY" },
{ "key": "Referrer-Policy", "value": "strict-origin-when-cross-origin" },
{ "key": "Permissions-Policy", "value": "camera=(), microphone=(), geolocation=()" }
]
}
]
}
MCP Server (AI Agents)
Foglift is available as an MCP (Model Context Protocol) server. AI coding assistants like Claude Code, Cursor, and Windsurf can scan websites and fix issues directly from your IDE.
Installation
npm install -g foglift-mcp
Claude Code Setup
Add to ~/.claude/mcp.json:
{
"mcpServers": {
"foglift": {
"command": "npx",
"args": ["foglift-mcp"]
}
}
}
Cursor Setup
Add to .cursor/mcp.json in your project root:
{
"mcpServers": {
"foglift": {
"command": "npx",
"args": ["foglift-mcp"]
}
}
}
Windsurf Setup
Add to ~/.windsurf/mcp.json:
{
"mcpServers": {
"foglift": {
"command": "npx",
"args": ["foglift-mcp"]
}
}
}
Available MCP Tools
| Tool | Description | Parameters |
|---|---|---|
| scan_website | Run full SEO + GEO analysis | url (required, string) |
| check_geo_score | Get GEO readiness score only | url (required, string) |
| get_seo_issues | List all SEO issues | url (required, string) |
Example MCP Tool Response
{
"url": "https://example.com",
"scores": { "overall": 72, "seo": 85, "geo": 52, "performance": 78, "security": 60, "accessibility": 91 },
"issues": [
{ "category": "GEO", "severity": "critical", "title": "AI crawlers blocked by robots.txt", "description": "..." },
{ "category": "GEO", "severity": "critical", "title": "No FAQ schema markup", "description": "..." },
{ "category": "Security", "severity": "warning", "title": "Missing CSP header", "description": "..." }
],
"aiSummary": "Strong SEO but weak GEO. AI crawlers are blocked and no structured data is present..."
}
Agent Workflows
Complete workflows for AI agents using Foglift. Copy these patterns to build autonomous SEO/GEO fix pipelines.
Workflow: Scan and Fix All Issues
The most common agent workflow — scan a site, then fix every issue found.
- Scan the site — Call scan_website with the URL. Read the full response.
- Prioritize by severity — Fix critical issues first, then warnings, then info.
- For each issue, look up the action recipe — Match the issue title to the recipes in this docs page.
- Apply the fix — Edit the relevant file (robots.txt, HTML head, config, etc.).
- Rescan to verify — Run scan_website again. Score should improve.
- Commit and deploy — Push changes and redeploy the site.
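The steps above can be sketched against the REST API. This is a hedged outline, not a complete implementation: `scan_and_fix` and `fix_order` are illustrative names, and the fix step is a print placeholder for your own recipe lookup.

```python
import requests

API = "https://foglift.io/api/v1/scan"
SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2}

def scan(url: str) -> dict:
    """Fetch a full scan result as JSON (free tier: no auth needed)."""
    resp = requests.get(API, params={"url": url})
    resp.raise_for_status()
    return resp.json()

def fix_order(issues: list) -> list:
    """Step 2: critical issues first, then warnings, then info."""
    return sorted(issues, key=lambda i: SEVERITY_RANK.get(i["severity"], 3))

def scan_and_fix(url: str) -> None:
    """Steps 1-5: scan, fix in priority order, rescan to verify."""
    before = scan(url)
    for issue in fix_order(before["issues"]):
        # Here you would match issue["title"] to an action recipe and
        # edit robots.txt, the HTML <head>, or framework config.
        print(f"TODO: fix [{issue['severity']}] {issue['title']}")
    after = scan(url)  # rescan to confirm the score improved
    print(f"Overall: {before['scores']['overall']} -> {after['scores']['overall']}")
```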
Workflow: CI/CD Quality Gate
Block deploys when SEO/GEO scores drop below a threshold.
- After deploy to staging — Scan the staging URL.
- Check overall score — If below threshold (e.g., 70), fail the build.
- Check for new critical issues — Compare with previous scan. Any new critical = fail.
- Report results — Post scores as a PR comment or CI artifact.
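The gate logic above can be sketched as a pure function over the scan JSON (fetch the JSON as shown in the REST API section; the previous-scan comparison assumes you persist prior results yourself, which Foglift does not do for you on the free tier):

```python
def quality_gate(scan: dict, threshold: int = 70, previous: dict = None) -> bool:
    """Return True if the build should pass, False if it should fail."""
    # Step 2: fail if the overall score is below the threshold.
    if scan["scores"]["overall"] < threshold:
        return False
    # Step 3: fail on any critical issue not seen in the previous scan.
    if previous is not None:
        seen = {i["title"] for i in previous["issues"]}
        if any(i["severity"] == "critical" and i["title"] not in seen
               for i in scan["issues"]):
            return False
    return True
```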
Workflow: New Site Setup
You just deployed a new Next.js/React site. Use Foglift to get it optimized from day one.
- Scan the deployed URL — Get baseline scores.
- Fix robots.txt — Allow all search engines and AI crawlers.
- Add meta tags — title, description, OG tags, Twitter card to every page.
- Add JSON-LD structured data — Organization + FAQPage at minimum.
- Add security headers — HSTS, CSP, X-Frame-Options, etc.
- Generate sitemap.xml — List all public pages.
- Submit to Google Search Console — Verify domain, submit sitemap.
- Rescan — Verify all scores are 80+.
Rate Limits & Authentication
| Tier | Auth | Scans/Day | Response |
|---|---|---|---|
| Free | None | 5 (web) / 10 (API) | Scores + top 3 issues |
| Deep Scan ($9) | Checkout session | Per-purchase | All issues + AI action plan + PDF |
| Starter ($49/mo) | None | Unlimited | Full results + GEO monitoring (25 prompts) |
| Pro ($129/mo) | API Key | Unlimited | Full results + GEO monitoring (100 prompts) + API |
| Enterprise ($349/mo) | API Key | Unlimited | Full results + GEO monitoring (300+ prompts) + client dashboard |
| Agency ($449/mo) | API Key | Unlimited | Everything + white-label reports + custom branding |
For authenticated requests, include: Authorization: Bearer YOUR_API_KEY
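For example, in Python (helper names are illustrative; the endpoint and header format are as documented above):

```python
import requests

def auth_headers(api_key: str) -> dict:
    """Build the Authorization header for authenticated tiers."""
    return {"Authorization": f"Bearer {api_key}"}

def authed_scan(url: str, api_key: str) -> dict:
    """Scan with an API key (unlimited scans, full results)."""
    resp = requests.get(
        "https://foglift.io/api/v1/scan",
        params={"url": url},
        headers=auth_headers(api_key),
    )
    resp.raise_for_status()
    return resp.json()
```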
Error Handling
| HTTP | Error | Meaning | Action |
|---|---|---|---|
| 400 | invalid_url | URL missing or malformed | Include full URL with https:// |
| 429 | rate_limited | Daily limit exceeded | Wait 24 hours or upgrade plan |
| 500 | scan_failed | Target unreachable | Verify URL is accessible |
| 503 | service_unavailable | Foglift temporarily down | Retry in a few minutes |
{
"error": "rate_limited",
"message": "Daily scan limit exceeded. Upgrade to Pro for unlimited scans.",
"limitReached": true
}
CI/CD Integration
Add Foglift scans to your deployment pipeline to catch SEO and GEO regressions before they ship.
GitHub Actions (Official Action)
Use the official Foglift GitHub Action for the simplest setup. Generates a Job Summary with scores and a link to the full report.
name: Website Health Check
on:
push:
branches: [main]
schedule:
- cron: '0 9 * * 1' # Weekly on Monday
jobs:
scan:
runs-on: ubuntu-latest
steps:
- uses: banant2/Foglift@main
id: foglift
with:
url: 'https://yoursite.com'
threshold: '70' # Fail if overall score < 70
# Optional: Use outputs in subsequent steps
- run: echo "Score: ${{ steps.foglift.outputs.overall }}/100 (${{ steps.foglift.outputs.grade }})"
| Input | Required | Description |
|---|---|---|
| url | Yes | URL to scan |
| threshold | No | Minimum score (action fails if below) |
| categories | No | Categories to check (seo,geo,performance,security,accessibility) |
Outputs: overall, seo, geo, performance, security, accessibility, issues, grade, report_url — use in subsequent steps.
GitLab CI
foglift-scan:
stage: test
script:
- RESULT=$(curl -s "https://foglift.io/api/v1/scan?url=$SITE_URL")
    - SCORE=$(echo "$RESULT" | jq '.scores.overall')
- echo "Foglift Score: $SCORE/100"
    - '[ "$SCORE" -ge 60 ] || (echo "Score below threshold" && exit 1)'
Shell Script
#!/bin/bash
URL=${1:-"https://yoursite.com"}
THRESHOLD=${2:-70}
RESULT=$(curl -s "https://foglift.io/api/v1/scan?url=$URL")
SCORE=$(echo "$RESULT" | jq '.scores.overall')
echo "Foglift score for $URL: $SCORE/100"
if [ "$SCORE" -lt "$THRESHOLD" ]; then
echo "FAIL: Score $SCORE is below threshold $THRESHOLD"
echo "$RESULT" | jq '.issues[] | select(.severity == "critical") | .title'
exit 1
fi
echo "PASS: Score meets threshold"
exit 0
Ready to integrate?
Start scanning — no API key required for basic scans.