
PageSpeed Insights vs GTmetrix vs WebPageTest: Which Should You Use in 2026?

A detailed, side-by-side technical comparison of Google PageSpeed Insights, GTmetrix, and WebPageTest — covering data quality, AI workflow compatibility, pricing, and which tool is right for each use case.

PageSpeed Exporter · 8 min read

If you are trying to improve your website's performance, you have three main free or freemium tools to choose from: Google PageSpeed Insights, GTmetrix, and WebPageTest. Each measures roughly the same thing — how fast your site loads — but they use different engines, expose different data, and are suited to very different workflows.

This comparison covers everything that matters in 2026: data accuracy, Core Web Vitals coverage, export formats, AI agent compatibility, lab vs. field data, and pricing.


The Short Answer

  • Use Google PageSpeed Insights for official Core Web Vitals scores and CrUX real-user field data. It runs the exact same engine Google uses for ranking signals.
  • Use WebPageTest for deep waterfall analysis, filmstrip views, and multi-step scripting of authenticated flows.
  • Use GTmetrix if you need scheduled monitoring, historical trend tracking, and a visual dashboard without self-hosting anything.
  • Use PageSpeed Exporter if you need the complete Lighthouse JSON in a format you can feed directly to AI agents like ChatGPT, Claude, or Cursor to get code-level fixes.

How Each Tool Works

Google PageSpeed Insights (PSI)

PageSpeed Insights runs a Lighthouse audit from a Google server (using a throttled mobile profile) and returns two types of data:

  1. Lab data — synthetic measurements from a single controlled audit run. Scores can vary 5–15 points between runs due to server load and network variability.
  2. Field data (CrUX) — real-user measurements from the Chrome User Experience Report. Only available if your URL or origin has enough traffic (typically thousands of visits per month from Chrome users).

PSI is the authoritative source for Core Web Vitals because Google uses this data for ranking. However, the web UI is not designed for data export. You can retrieve the raw JSON via the API, but the response is a 500KB–2MB file with screenshots, render-blocking analysis bitmaps, and other binary data that is expensive to process.
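As a sketch of that workflow, the raw response can be fetched from the public PSI v5 API and slimmed down before handing it to anything downstream. The endpoint and top-level field names below come from the PSI API; `strip_heavy_fields`, `SCREENSHOT_AUDITS`, and the choice of which keys to keep are illustrative, not an official schema:

```python
import json
import urllib.parse
import urllib.request

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

# Lighthouse audits whose details are base64-encoded screenshots; dropping
# them removes most of the payload without losing any actionable finding.
SCREENSHOT_AUDITS = {"final-screenshot", "screenshot-thumbnails", "full-page-screenshot"}

def fetch_psi(url: str, api_key: str, strategy: str = "mobile") -> dict:
    """Fetch the raw PSI v5 response (typically 500 KB to 2 MB)."""
    query = urllib.parse.urlencode({"url": url, "strategy": strategy, "key": api_key})
    with urllib.request.urlopen(f"{PSI_ENDPOINT}?{query}") as resp:
        return json.loads(resp.read())

def strip_heavy_fields(report: dict) -> dict:
    """Keep scores, audits, and CrUX field data; drop screenshot payloads."""
    lhr = report.get("lighthouseResult", {})
    slim = {k: v for k, v in report.items() if k != "lighthouseResult"}
    slim["lighthouseResult"] = {
        "categories": lhr.get("categories", {}),
        "audits": {
            audit_id: audit
            for audit_id, audit in lhr.get("audits", {}).items()
            if audit_id not in SCREENSHOT_AUDITS
        },
    }
    return slim
```

Typical usage would be `strip_heavy_fields(fetch_psi("https://example.com", api_key="..."))`, which preserves the `loadingExperience` (CrUX) block and every non-screenshot audit while cutting the file to a fraction of its original size.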

GTmetrix

GTmetrix runs both Lighthouse and its own WebPageTest-based engine. It reports:

  • GTmetrix Grade (A–F) — a proprietary blend of Lighthouse performance score and Web Vitals
  • Performance — the Lighthouse performance score
  • Structure — GTmetrix's own scoring of best practices
  • Web Vitals — LCP, FID/INP, CLS, TTFB, FCP

GTmetrix's main advantages are scheduled monitoring, location-specific testing from multiple cities, historical reports, and video recordings of page loads. Its API is available on paid plans.

The free tier is limited: one test location (Vancouver), no scheduled monitoring, and tests expire after a short retention period.

WebPageTest

WebPageTest is the most technically deep of the three tools. Run by Catchpoint and originally created by Patrick Meenan, it provides:

  • Full waterfall charts with request timing, DNS, TCP, SSL, and TTFB for every resource
  • Filmstrip view — screenshots at 100ms intervals showing exactly when visual content appears
  • Video recording and comparison — side-by-side video of two URLs loading
  • Custom scripting — multi-step tests that log in, fill forms, or navigate before measuring
  • Multi-location testing — real browsers in dozens of cities and on real mobile devices

WebPageTest uses Lighthouse for its performance score but goes far beyond Lighthouse for network analysis. Its free tier is generous (no account required), and the API is open.

The major drawback is complexity: the results are not designed for non-technical users, and turning a WebPageTest report into a prioritized list of code fixes requires significant manual interpretation.
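Because the API is open, submitting runs from a script is straightforward. A minimal sketch, assuming you have a WebPageTest API key (parameter names follow the documented `runtest.php` interface, but verify against the current Catchpoint docs; the `location` value is an illustrative placeholder):

```python
import json
import urllib.parse
import urllib.request

WPT_RUNTEST = "https://www.webpagetest.org/runtest.php"

def build_runtest_params(url: str, api_key: str, runs: int = 3,
                         location: str = "Dulles:Chrome") -> dict:
    """Query parameters for a WebPageTest run. runs=3 matches the tool's
    default median-of-three behaviour; f=json asks for a JSON response."""
    return {"url": url, "k": api_key, "runs": runs, "f": "json", "location": location}

def submit_test(url: str, api_key: str) -> dict:
    """Kick off a test; the JSON response includes URLs to poll for results."""
    query = urllib.parse.urlencode(build_runtest_params(url, api_key))
    with urllib.request.urlopen(f"{WPT_RUNTEST}?{query}") as resp:
        return json.loads(resp.read())
```

Tests run asynchronously, so the response from `submit_test` is a pointer to results, not the results themselves; your script polls until the run completes.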


Core Web Vitals Coverage

| Metric | PSI | GTmetrix | WebPageTest | PageSpeed Exporter |
|---|---|---|---|---|
| LCP | Lab + Field | Lab only | Lab only | Lab + Field |
| FCP | Lab + Field | Lab only | Lab only | Lab + Field |
| CLS | Lab + Field | Lab only | Lab only | Lab + Field |
| INP | Field only | No | No | Field (CrUX) |
| TBT (lab interactivity proxy) | Lab | Lab | Lab | Lab |
| TTFB | Lab + Field | Lab | Lab | Lab + Field |
| Speed Index | Lab | Lab | Lab | Lab |

Key insight: INP (Interaction to Next Paint) replaced FID as a Core Web Vitals metric in March 2024. Only PSI and PageSpeed Exporter report CrUX-based INP data because it requires real-user measurements from the Chrome browser — synthetic tests cannot measure interaction responsiveness.
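Because INP exists only as field data, retrieving it programmatically means querying CrUX rather than running a lab test. A minimal sketch against the public CrUX API (the endpoint, the `interaction_to_next_paint` metric key, and the `record.metrics` response shape come from that API; `fetch_inp` is an illustrative helper name):

```python
import json
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def crux_query(origin: str) -> dict:
    """Request body asking CrUX for an origin's INP distribution."""
    return {"origin": origin, "metrics": ["interaction_to_next_paint"]}

def inp_p75(response: dict):
    """Extract the 75th-percentile INP in milliseconds (the value Core Web
    Vitals assessments are based on) from a CrUX API response."""
    metrics = response.get("record", {}).get("metrics", {})
    return metrics.get("interaction_to_next_paint", {}).get("percentiles", {}).get("p75")

def fetch_inp(origin: str, api_key: str):
    """POST the query to the CrUX API and return the p75 INP."""
    body = json.dumps(crux_query(origin)).encode()
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return inp_p75(json.loads(resp.read()))
```

Note that origins without enough Chrome traffic have no CrUX record at all, so the API responds with an error rather than an empty result; handle that case when monitoring low-traffic sites.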

AI Workflow Compatibility

This is where the tools diverge most sharply in 2026.

Feeding performance data to AI agents (ChatGPT, Claude, Cursor, Copilot) produces dramatically better results than reading a visual dashboard and asking an AI for generic advice. The AI needs structured, complete data: exact audit IDs, estimated savings in milliseconds, the specific HTML elements that are slow, and your tech stack signals.

| Capability | PSI | GTmetrix | WebPageTest | PageSpeed Exporter |
|---|---|---|---|---|
| JSON export from web UI | No | No (API only) | Partial | Yes |
| AI-ready stripped JSON (<50KB) | No | No | No | Yes |
| Prompt templates included | No | No | No | Yes |
| CrUX field data in export | API only | No | No | Yes |
| StackPack hints in export | No | No | No | Yes |
| All audits with ms savings | API only | No | Partial | Yes |

PSI does provide a complete JSON response via its API, but the raw response contains base64-encoded screenshots, full DOM snapshots, and render-blocking analysis bitmaps that make it 1–2 MB in size. Pasting 2 MB of JSON into a chat window is wasteful and often hits context limits. PageSpeed Exporter strips all binary data and produces a token-efficient AIReport (under 50KB) that preserves every actionable finding.


Pricing Comparison

| Plan | PSI | GTmetrix | WebPageTest | PageSpeed Exporter |
|---|---|---|---|---|
| Free tier | Unlimited (API key required) | Limited tests | Generous | 5 reports/month |
| Monitoring | No | $13.50/month | No | No |
| API access | Free (Google API key) | $14+/month | Free | Starter $9/month |
| Team features | No | Yes | No | No |
| AI prompt templates | No | No | No | Yes |


When to Choose Each Tool

Choose Google PageSpeed Insights when:

  • You need the authoritative Core Web Vitals scores used by Google for ranking
  • You are checking whether your site has enough CrUX data to receive field data
  • You want to verify INP real-user experience
  • You are doing a one-off check and do not need to export or track data over time

Choose GTmetrix when:

  • You need scheduled uptime and performance monitoring with historical trend data
  • You want to compare performance from multiple geographic locations
  • You need video recordings of page loads for stakeholder presentations
  • You are monitoring a site for a client and need to share a visual dashboard

Choose WebPageTest when:

  • You need full waterfall analysis to debug specific network or resource loading problems
  • You want multi-step scripted tests (login → navigate → measure)
  • You need to test performance on a real mobile device (not an emulated profile)
  • You are diagnosing a specific blocking resource and need byte-level visibility

Choose PageSpeed Exporter when:

  • You want to feed complete Lighthouse results to an AI agent to get exact code fixes
  • You need a JSON export from the web UI without setting up the PSI API yourself
  • You want CrUX field data and StackPack hints in a single downloadable file
  • You prefer a simple workflow: URL → export → paste into AI agent → fix

Technical Accuracy: Variability and Reproducibility

All synthetic lab tools produce variable scores. A site scoring 72 on one run may score 67 or 78 on the next, due to CPU scheduling variance, network jitter, and server-side resource availability.

The tools handle this differently:

  • PSI runs a single audit and reports the result. High variability is a known limitation.
  • GTmetrix runs a single audit per test by default (premium plans can run 3 and average them).
  • WebPageTest runs 3 tests by default and reports the median. More reproducible.
  • PageSpeed Exporter runs a single live audit via the PSI API. Scores match PSI exactly.

For the most reproducible results, run 3+ audits and average them, or rely on CrUX field data (which is 28-day rolling average data and not subject to run-to-run variability).
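That advice is easy to automate. A minimal sketch in plain Python (`representative_score` is an illustrative helper, not part of any tool's API):

```python
from statistics import median

def representative_score(scores: list[float]) -> float:
    """Median of repeated lab runs, the same aggregation WebPageTest uses
    by default, which resists the occasional outlier run better than a
    plain mean."""
    if not scores:
        raise ValueError("need at least one run")
    return median(scores)
```

Given the example above, `representative_score([72, 67, 78])` returns 72, so a single slow or fast outlier run no longer swings your reported number.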


Conclusion

In 2026, the right tool depends entirely on what you are trying to do:

  • GTmetrix wins for monitoring and history.
  • WebPageTest wins for low-level network analysis.
  • PSI wins for authoritative official scores.
  • PageSpeed Exporter wins for AI-assisted optimization — the only tool that produces Lighthouse data in the format AI coding agents actually need.

For most developers and SEOs following an AI-assisted optimization workflow, the recommended approach is: run audits in PageSpeed Exporter, export the JSON, paste it into your AI agent with one of the included prompt templates, and implement the prioritized fix list. Re-run after each batch of changes to track improvement.


Try it yourself

Run a free Lighthouse audit on any URL and get the full JSON report for your AI agent — no account required.

Analyze a URL for free