If you've ever clicked "Download JSON" in Google PageSpeed Insights or run lighthouse --output=json in the CLI, you've been confronted with a file that can be anywhere from 500KB to 2MB of deeply nested JSON. It's complete — every audit, metric, screenshot, and data point Google's Lighthouse engine produces — but it's not easy to navigate.
This guide explains what's in a Lighthouse JSON report, which fields matter for performance optimization, and how AI agents like ChatGPT and Claude can use this data to produce precise, code-level fixes.
What Is a Lighthouse JSON Report?
A Lighthouse JSON report is officially called a Lighthouse Result (LHR). It's the raw output of running a Google Lighthouse audit — the same engine that powers Google PageSpeed Insights, Chrome DevTools' Lighthouse panel, and the lighthouse CLI tool.
The report contains:
- Scores for four categories: Performance, Accessibility, Best Practices, SEO (each 0–100)
- Audit results for 150+ individual checks
- Lab performance metrics (FCP, LCP, TBT, CLS, Speed Index, TTI) with exact timings
- CrUX field data — real-user measurements from Chrome users visiting your site
- Opportunities — specific changes with estimated time savings in milliseconds
- Diagnostics — issues that don't have a direct savings estimate but affect performance
- Stack packs — framework-specific hints for WordPress, React, Angular, and others
Top-Level Structure
The root of a Lighthouse JSON report looks like this:
{
"kind": "pagespeedonline#result",
"id": "https://example.com/",
"analysisUTCTimestamp": "2026-03-10T14:00:00.000Z",
"loadingExperience": { ... },
"originLoadingExperience": { ... },
"lighthouseResult": { ... }
}
The two most important top-level keys are loadingExperience (real-user CrUX data for the specific URL) and lighthouseResult (the full Lighthouse audit output).
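Separating those two halves is a one-liner once the response is parsed. A minimal sketch in Node.js (the `splitReport` helper is an assumption for illustration; the field names match the structure shown above):

```javascript
// Hypothetical helper: split a parsed PageSpeed Insights response into its
// two important halves. Field names match the top-level structure above.
function splitReport(report) {
  return {
    fieldData: report.loadingExperience || null, // real-user CrUX data for this URL
    labData: report.lighthouseResult || null,    // full Lighthouse audit output
  };
}

// Tiny stand-in for a real response:
const sample = {
  kind: 'pagespeedonline#result',
  id: 'https://example.com/',
  loadingExperience: { overall_category: 'SLOW' },
  lighthouseResult: { lighthouseVersion: '12.2.1' },
};

const { fieldData, labData } = splitReport(sample);
console.log(fieldData.overall_category, labData.lighthouseVersion); // → SLOW 12.2.1
```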
The lighthouseResult Object
This is where all the actionable data lives.
{
"requestedUrl": "https://example.com",
"finalUrl": "https://www.example.com/",
"lighthouseVersion": "12.2.1",
"fetchTime": "2026-03-10T14:00:00.000Z",
"environment": {
"networkUserAgent": "Mozilla/5.0 ...",
"hostUserAgent": "...",
"benchmarkIndex": 1580,
"credits": { ... }
},
"runWarnings": [],
"configSettings": {
"emulatedFormFactor": "mobile",
"formFactor": "mobile",
"locale": "en-US",
"onlyCategories": ["performance", "accessibility", "best-practices", "seo"]
},
"audits": { ... },
"categories": { ... },
"categoryGroups": { ... },
"stackPacks": [ ... ],
"timing": { "total": 12453 }
}
Categories — The Score Numbers
The categories object contains the four headline scores you see on any Lighthouse report:
{
"performance": {
"id": "performance",
"title": "Performance",
"score": 0.72,
"auditRefs": [
{ "id": "first-contentful-paint", "weight": 10, "group": "metrics" },
{ "id": "largest-contentful-paint", "weight": 25, "group": "metrics" },
{ "id": "total-blocking-time", "weight": 30, "group": "metrics" },
{ "id": "cumulative-layout-shift", "weight": 25, "group": "metrics" },
{ "id": "speed-index", "weight": 10, "group": "metrics" },
{ "id": "unused-javascript", "weight": 0, "group": "load-opportunities" },
...
]
},
"accessibility": { "score": 0.95, ... },
"best-practices": { "score": 1.0, ... },
"seo": { "score": 0.91, ... }
}
Important: Scores in Lighthouse JSON are on a 0–1 scale (not 0–100). Multiply by 100 to get the familiar number. A score of 0.72 means a performance score of 72.
The auditRefs array inside each category lists the audits that contribute to that category's score; the weight field determines exactly how much each one moves the overall number. Audits in the load-opportunities and diagnostics groups typically have weight: 0 — they don't directly move the score but are still important to fix.
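Because the weights are exposed, the headline score can be recomputed by hand: it is the weighted average of the weight-carrying audit scores. A sketch with illustrative numbers (the helper name and the sample scores are assumptions; only the weights come from the example above):

```javascript
// Recompute a category score as the weighted average of its audits' scores.
// Weight-0 audits and informational (null-score) audits drop out naturally.
function computeCategoryScore(auditRefs, audits) {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const ref of auditRefs) {
    const audit = audits[ref.id];
    if (ref.weight > 0 && audit && audit.score !== null) {
      weightedSum += audit.score * ref.weight;
      totalWeight += ref.weight;
    }
  }
  return totalWeight > 0 ? weightedSum / totalWeight : null;
}

// Illustrative inputs using the weights from the example above:
const auditRefs = [
  { id: 'first-contentful-paint', weight: 10 },
  { id: 'largest-contentful-paint', weight: 25 },
  { id: 'total-blocking-time', weight: 30 },
  { id: 'cumulative-layout-shift', weight: 25 },
  { id: 'speed-index', weight: 10 },
  { id: 'unused-javascript', weight: 0 }, // ignored: weight 0
];
const audits = {
  'first-contentful-paint': { score: 0.9 },
  'largest-contentful-paint': { score: 0.45 },
  'total-blocking-time': { score: 0.8 },
  'cumulative-layout-shift': { score: 0.7 },
  'speed-index': { score: 0.85 },
  'unused-javascript': { score: 0.2 },
};

console.log(Math.round(computeCategoryScore(auditRefs, audits) * 100)); // → 70
```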
Audits — The Detailed Findings
The audits object is where the actual diagnostic data lives. It's a large key-value map where each key is an audit ID:
{
"largest-contentful-paint": {
"id": "largest-contentful-paint",
"title": "Largest Contentful Paint",
"description": "Largest Contentful Paint marks the time at which the largest text or image is painted...",
"score": 0.45,
"scoreDisplayMode": "numeric",
"displayValue": "4.2 s",
"numericValue": 4200,
"numericUnit": "millisecond"
},
"render-blocking-resources": {
"id": "render-blocking-resources",
"title": "Eliminate render-blocking resources",
"score": 0.43,
"scoreDisplayMode": "numeric",
"displayValue": "Potential savings of 520 ms",
"details": {
"type": "opportunity",
"overallSavingsMs": 520,
"items": [
{
"url": "https://example.com/styles.css",
"totalBytes": 28432,
"wastedMs": 320
}
]
}
}
}
Key Audit Fields
| Field | What It Means |
|---|---|
| id | Unique identifier for the audit (e.g., "largest-contentful-paint") |
| score | 0–1 score (null for informational audits) |
| scoreDisplayMode | "numeric", "binary", "informative", "notApplicable", "manual", "error" |
| displayValue | Human-readable result (e.g., "4.2 s", "Potential savings of 520 ms") |
| numericValue | Raw number in the unit specified by numericUnit |
| details | Structured data — tables, items, chains, etc. |
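The fields above are enough to walk the audits map and list everything that failed. A sketch (the `failingAudits` helper and the sample values are assumptions for illustration):

```javascript
// List every audit that scored below a threshold, with its human-readable
// result. Informational audits (score === null) are skipped.
function failingAudits(audits, threshold = 0.9) {
  return Object.values(audits)
    .filter((a) => a.score !== null && a.score < threshold)
    .map((a) => ({ id: a.id, score: a.score, displayValue: a.displayValue || '' }));
}

const audits = {
  'largest-contentful-paint': { id: 'largest-contentful-paint', score: 0.45, displayValue: '4.2 s' },
  'render-blocking-resources': { id: 'render-blocking-resources', score: 0.43, displayValue: 'Potential savings of 520 ms' },
  'uses-text-compression': { id: 'uses-text-compression', score: 1, displayValue: '' },
};

console.log(failingAudits(audits).map((a) => a.id));
// → [ 'largest-contentful-paint', 'render-blocking-resources' ]
```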
Which Audits to Focus On
Audits with score < 0.9 need attention. But for maximum performance impact, prioritize by:
- Audits with `details.type === "opportunity"` — these have explicit millisecond savings estimates. Sort by `details.overallSavingsMs` descending.
- Audits with `weight > 0` in the performance category — these directly move the performance score.
- Core metric audits — `largest-contentful-paint`, `total-blocking-time`, and `cumulative-layout-shift` have the highest weights (25, 30, 25).
CrUX Field Data — Real Users
The loadingExperience object contains real-user measurements from Chrome users who have visited your URL in the last 28 days. This is the data Google actually uses for its Core Web Vitals ranking signal.
{
"loadingExperience": {
"id": "https://example.com/",
"metrics": {
"LARGEST_CONTENTFUL_PAINT_MS": {
"percentile": 3800,
"distributions": [...],
"category": "SLOW"
},
"FIRST_CONTENTFUL_PAINT_MS": {
"percentile": 1900,
"category": "AVERAGE"
},
"CUMULATIVE_LAYOUT_SHIFT_SCORE": {
"percentile": 0,
"category": "FAST"
},
"INTERACTION_TO_NEXT_PAINT": {
"percentile": 180,
"category": "AVERAGE"
}
},
"overall_category": "SLOW"
}
}
The percentile value is the 75th percentile of real user measurements — the threshold Google uses. The category is "FAST", "AVERAGE", or "SLOW". If your overall_category is "SLOW", your site has a Core Web Vitals issue that could be affecting search rankings right now, not just in lab conditions.
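Checking field data against Google's published "good" thresholds is a simple comparison. A sketch, assuming the thresholds LCP ≤ 2500 ms and INP ≤ 200 ms, and noting that the API reports the CLS percentile multiplied by 100 (the helper name is an assumption):

```javascript
// Compare CrUX 75th-percentile values against Google's "good" thresholds.
// CUMULATIVE_LAYOUT_SHIFT_SCORE is reported multiplied by 100 in the
// PageSpeed Insights API, so 10 here means a CLS of 0.10.
const GOOD_THRESHOLDS = {
  LARGEST_CONTENTFUL_PAINT_MS: 2500,
  CUMULATIVE_LAYOUT_SHIFT_SCORE: 10, // CLS * 100
  INTERACTION_TO_NEXT_PAINT: 200,
};

function passesCoreWebVitals(metrics) {
  return Object.entries(GOOD_THRESHOLDS).every(([name, limit]) => {
    const metric = metrics[name];
    return !metric || metric.percentile <= limit; // a missing metric is not held against the page
  });
}

// Values from the example above: LCP at 3800 ms fails the 2500 ms threshold.
const metrics = {
  LARGEST_CONTENTFUL_PAINT_MS: { percentile: 3800, category: 'SLOW' },
  CUMULATIVE_LAYOUT_SHIFT_SCORE: { percentile: 0, category: 'FAST' },
  INTERACTION_TO_NEXT_PAINT: { percentile: 180, category: 'AVERAGE' },
};

console.log(passesCoreWebVitals(metrics)); // → false
```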
Stack Packs — Framework-Specific Hints
If Lighthouse detects you're using a specific framework (WordPress, React, Angular, Next.js, AMP, etc.), the stackPacks array provides framework-specific remediation advice for each failing audit:
{
"stackPacks": [
{
"id": "wordpress",
"title": "WordPress",
"iconDataURL": "...",
"descriptions": {
"render-blocking-resources": "To defer or async load the scripts, use the WP Rocket plugin or...",
"unused-javascript": "Consider using a plugin like Asset CleanUp or Autoptimize to..."
}
}
]
}
These hints are gold when feeding data to an AI agent — it instantly knows which WordPress plugin or React pattern to recommend.
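Joining those hints onto a list of failing audit IDs is a simple lookup over each pack's descriptions map. A sketch (the `hintsFor` helper is an assumption; the shape matches the stackPacks example above):

```javascript
// Attach framework-specific hints to a set of audit IDs by looking them up
// in each detected stack pack's descriptions map.
function hintsFor(auditIds, stackPacks) {
  const hints = {};
  for (const pack of stackPacks || []) {
    for (const id of auditIds) {
      if (pack.descriptions && pack.descriptions[id]) {
        hints[id] = { framework: pack.title, advice: pack.descriptions[id] };
      }
    }
  }
  return hints;
}

const stackPacks = [{
  id: 'wordpress',
  title: 'WordPress',
  descriptions: { 'render-blocking-resources': 'Defer or async load the scripts...' },
}];

console.log(hintsFor(['render-blocking-resources', 'unused-javascript'], stackPacks));
// → { 'render-blocking-resources': { framework: 'WordPress', advice: 'Defer or async load the scripts...' } }
```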
Why Raw Lighthouse JSON Is Too Large for AI Agents
A raw Lighthouse JSON report is 500KB to 2MB. That is too large to paste into most AI chat interfaces, and even when it fits, a significant portion of it is useless for performance optimization:
- Screenshots and filmstrips — base64-encoded image data, often 300–500KB
- Treemap data — complex bundle visualization data
- Passing audits — 80% of audits may have `score >= 0.9` and need no action
- Category groups — UI metadata, not actionable data
- Config settings — internal Lighthouse configuration
PageSpeed Exporter's buildAIReport() function strips all of this down to an AIReport object under 50KB that contains only:
- Meta (URL, strategy, environment, warnings)
- Category scores
- Core Web Vitals metrics with values and scores
- CrUX field data
- Stack pack hints
- A prioritized issue list (opportunities sorted by ms savings, diagnostics sorted by worst score)
- Each issue includes its performance weight, savings estimate, and relevant detail items
This is exactly the data an AI agent needs to produce actionable, code-level fixes — with no noise.
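The same stripping idea can be sketched in a few lines. This is a toy illustration, not the real buildAIReport() implementation, and every output field name here is an assumption:

```javascript
// Toy condenser: keep only scores, a few key metrics, and opportunity
// audits from a full Lighthouse result. Not the real buildAIReport().
function condense(lhr) {
  const keepMetrics = ['largest-contentful-paint', 'total-blocking-time', 'cumulative-layout-shift'];
  return {
    url: lhr.finalUrl,
    scores: Object.fromEntries(
      Object.entries(lhr.categories).map(([id, c]) => [id, Math.round(c.score * 100)])
    ),
    metrics: keepMetrics
      .filter((id) => lhr.audits[id])
      .map((id) => ({ id, value: lhr.audits[id].numericValue, score: lhr.audits[id].score })),
    issues: Object.values(lhr.audits)
      .filter((a) => a.details && a.details.type === 'opportunity' && a.details.overallSavingsMs > 0)
      .sort((a, b) => b.details.overallSavingsMs - a.details.overallSavingsMs)
      .map((a) => ({ id: a.id, savingsMs: a.details.overallSavingsMs })),
  };
}

// A stripped-down LHR stand-in:
const lhr = {
  finalUrl: 'https://www.example.com/',
  categories: { performance: { score: 0.72 } },
  audits: {
    'largest-contentful-paint': { id: 'largest-contentful-paint', numericValue: 4200, score: 0.45 },
    'render-blocking-resources': { id: 'render-blocking-resources', details: { type: 'opportunity', overallSavingsMs: 520 } },
  },
};

console.log(JSON.stringify(condense(lhr), null, 2).length < 500); // → true, a tiny fraction of a full LHR
```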
How to Use Lighthouse JSON With an AI Agent
The most effective approach is a structured prompt that tells the AI exactly what you need:
I'm attaching my full Lighthouse JSON performance report for example.com.
Context:
- Current performance score: 72/100
- Worst metric: LCP at 4.2 seconds (target: under 2.5s)
- Framework: Next.js 15 with Tailwind CSS
Please:
1. Identify the top 3 opportunities sorted by ms savings, and provide exact code fixes
2. Check if any of the issues have WordPress/Next.js/React-specific hints in stackPacks
3. Tell me which fixes will have the most impact on LCP specifically
4. Provide before/after code snippets for each fix
The AI agent will parse the JSON structure, cross-reference audit IDs with the performance category weights, and generate targeted recommendations — not generic advice.
Getting a Lighthouse JSON Report
There are three ways:
- PageSpeed Exporter (recommended) — No installation required. Runs directly in your browser at speedexporter.com. Includes pre-built AI prompt templates and exports a token-efficient AIReport.
- Lighthouse CLI — Run `npx lighthouse https://example.com --output=json --output-path=report.json`. Requires Node.js. Gives you the full raw LHR file.
- Google PageSpeed Insights API — `curl "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com&key=YOUR_KEY"`. Raw API access, no UI.
PageSpeed Exporter wraps option 3 with an AI-optimized output layer, prompt templates, and a comparison interface — making it the fastest path from URL to AI-generated fix plan.