[{"content":"A Next.js app with next/image on every image component. Lighthouse image audit: no issues. LCP: 4.2 seconds. The hero was a CSS background-image. next/image doesn\u0026rsquo;t touch those. Nobody had checked what the LCP element actually was.\nFind your LCP element before you do anything else This is the step most people skip. They add next/image, run Lighthouse, see green checkmarks on the image audit, and wonder why LCP is still slow.\nOpen Chrome DevTools, run Lighthouse, and look at what it marks as the LCP element. If it\u0026rsquo;s a background-image set via CSS, the browser can\u0026rsquo;t preload it the same way it handles a real \u0026lt;img\u0026gt; tag, and it won\u0026rsquo;t get early fetch priority. Move it to an \u0026lt;img\u0026gt; element. This one change has fixed more LCP problems than anything else I\u0026rsquo;ve seen.\nfetchpriority=\u0026quot;high\u0026quot; is doing more work than most developers realize The browser assigns fetch priority based on what it finds during the initial HTML parse. Images discovered late — inside components that render after hydration, or below the fold at first scan — get normal or low priority. By the time the browser decides to fetch them, the LCP window is already closing.\nFor your LCP image, you want the fetch to start as early as possible.\n\u0026lt;img src=\u0026#34;/hero.webp\u0026#34; fetchpriority=\u0026#34;high\u0026#34; width={1200} height={600} alt=\u0026#34;...\u0026#34; /\u0026gt; In Next.js, the priority prop on next/image sets this automatically and also injects a preload link into \u0026lt;head\u0026gt;:\n\u0026lt;Image src=\u0026#34;/hero.webp\u0026#34; priority width={1200} height={600} alt=\u0026#34;...\u0026#34; /\u0026gt; Don\u0026rsquo;t use priority on more than one or two images per page. Telling the browser everything is urgent means nothing is.\nnext/image defaults will silently hurt your LCP next/image lazy-loads by default. 
That means if your LCP image is rendered via next/image without priority, the browser is intentionally delaying its fetch until the image is about to enter the viewport.\nI\u0026rsquo;ve seen this cause regressions on otherwise well-optimized pages. The format is correct, the dimensions are explicit, Lighthouse scores are green — but LCP is 300ms slower than it should be because someone forgot priority. It doesn\u0026rsquo;t throw a warning. It just quietly loads late.\nFor any image that could be the LCP element on any viewport — hero images, above-the-fold product shots, article cover images — set priority. Default to it rather than remembering to add it.\nExplicit dimensions are non-negotiable A browser that doesn\u0026rsquo;t know an image\u0026rsquo;s dimensions reserves no space for it. When the image loads, content shifts. That\u0026rsquo;s a CLS problem, not just a performance one — it makes the page feel broken to users even if the load time is acceptable.\nnext/image will warn you when dimensions are missing. Every other \u0026lt;img\u0026gt; in your codebase that doesn\u0026rsquo;t go through next/image should have explicit width and height set too. It takes ten seconds and prevents a class of layout bugs entirely.\nFormat: stop overthinking it WebP is 25–35% smaller than JPEG at equivalent quality. AVIF is another 20–30% on top of that. next/image serves AVIF to browsers that support it and falls back to WebP automatically — you don\u0026rsquo;t need to configure anything.\nThe format switch matters, but once you\u0026rsquo;re serving WebP, the gains from AVIF are marginal compared to getting fetchpriority right on the LCP element. Fix the priority first.\nOptimizing once isn\u0026rsquo;t enough Lighthouse confirms the fix on your machine. 
It doesn\u0026rsquo;t tell you whether it holds under real conditions — actual devices, varied networks, CDN behavior on cold loads.\nMeasuring from real users is the only way to know:\nnew PerformanceObserver((list) =\u0026gt; { const lcp = list.getEntries().at(-1)?.startTime; if (lcp) sendMetric({ metric: \u0026#39;LCP\u0026#39;, value: lcp, page: location.pathname }); }).observe({ type: \u0026#39;largest-contentful-paint\u0026#39;, buffered: true }); The harder problem is that optimizations regress. A new developer adds a hero image without priority. Someone replaces an \u0026lt;img\u0026gt; with a CSS background. The LCP you fixed at 1.6s quietly climbs back to 3.2s after the next deploy, and nobody notices until a user mentions it. If you want to catch that within minutes rather than days, I built RPAlert for exactly this — it handles the LCP monitoring and alerting layer for React apps, collecting field data from real browsers and posting to Slack or Discord when thresholds are crossed. Worth setting up after you\u0026rsquo;ve done the optimization work, so the gains actually stick.\nThe fix is almost always the same: find the LCP element, make it a real \u0026lt;img\u0026gt; tag, set fetchpriority=\u0026quot;high\u0026quot;, give it explicit dimensions. Everything else is secondary.\n","permalink":"https://rpalert.dev/blog/posts/most-lcp-fixes-come-down-to-one-image-2i09/","summary":"\u003cp\u003eA Next.js app with \u003ccode\u003enext/image\u003c/code\u003e on every image component. Lighthouse image audit: no issues. LCP: 4.2 seconds. The hero was a CSS \u003ccode\u003ebackground-image\u003c/code\u003e. \u003ccode\u003enext/image\u003c/code\u003e doesn\u0026rsquo;t touch those. Nobody had checked what the LCP element actually was.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"find-your-lcp-element-before-you-do-anything-else\"\u003eFind your LCP element before you do anything else\u003c/h2\u003e\n\u003cp\u003eThis is the step most people skip. 
They add \u003ccode\u003enext/image\u003c/code\u003e, run Lighthouse, see green checkmarks on the image audit, and wonder why LCP is still slow.\u003c/p\u003e","title":"Most LCP Fixes Come Down to One Image"},{"content":"Three hours after a deploy, someone posts a screenshot in Slack. One-star review. App \u0026ldquo;takes forever to load.\u0026rdquo; You check Lighthouse — fine. You check Sentry — no errors. The regression started the moment you deployed. Nobody knew until a user complained.\nThis is the normal state of affairs for most teams, and it\u0026rsquo;s not hard to fix.\nThe first 30 minutes are the cheapest Performance regressions don\u0026rsquo;t announce themselves. They show up in production under conditions you can\u0026rsquo;t fully replicate: real devices slower than your dev machine, networks that drop in and out, CDN cache misses on fresh deploys.\nThe first 10–30 minutes after a deploy are when regressions are cheapest to fix. You can just roll back. By the time a support ticket arrives, you\u0026rsquo;re already hours into the impact window and the fix is a proper investigation, not a revert.\nWhy your existing tools miss this Lighthouse CI runs against staging with synthetic conditions. It won\u0026rsquo;t catch regressions that only appear under real network speeds or with production data volumes. An LCP that went from 1.8s to 3.2s doesn\u0026rsquo;t throw an exception — Sentry has nothing to report. APM tools tell you about backend latency, not what\u0026rsquo;s happening in the browser.\nThe shared blind spot: real users on real devices. None of these tools will fire when your LCP degrades after a deploy.\nLCP is what to watch For deploy regressions specifically, LCP is the right metric. It\u0026rsquo;s the best proxy for perceived load speed, and it\u0026rsquo;s where most regressions surface first. Long Tasks are the clearest signal of render bloat. 
FCP is a useful early warning.\nThe browser has a native API for all of this:\nconst lcpObserver = new PerformanceObserver((list) =\u0026gt; { const entries = list.getEntries(); const lcp = entries[entries.length - 1].startTime; if (lcp \u0026gt; 2500) { // don\u0026#39;t batch threshold crossings — send immediately sendMetric({ metric: \u0026#39;LCP\u0026#39;, value: lcp, page: location.pathname }); } }); lcpObserver.observe({ type: \u0026#39;largest-contentful-paint\u0026#39;, buffered: true }); This runs in every user\u0026rsquo;s browser. The question is what you do with the data. A minimal pipeline:\nReact app → PerformanceObserver → batch POST every 30s (immediate on threshold cross) → your API → threshold check → Discord/Slack alert The batching distinction matters. Routine measurements can queue up — there\u0026rsquo;s no reason to POST on every LCP reading. But when something crosses a threshold you care about, you want it sent immediately, not held for the next batch window.\nFor the alert destination: email gets buried. If your team is in Discord or Slack, that\u0026rsquo;s where it should go. Someone needs to see it within five minutes of the regression starting.\nWhat the alert loop actually looks like You deploy at 2pm. At 2:03, a Discord message arrives: LCP exceeded 2.5s on /checkout, three minutes after the last deploy. You open the diff, find a new below-the-fold image component missing loading=\u0026quot;lazy\u0026quot; that was competing with the LCP image for bandwidth, fix it, deploy the hotfix by 2:15.\nFifteen minutes of degraded performance.\nWithout the alert: the first signal is a support ticket at 4:30pm. You dig through Sentry — nothing, because no exceptions were thrown. You run Lighthouse locally — looks fine, warm cache. You eventually find the image issue around 6pm. Four hours of impact instead of fifteen minutes.\nThe alert doesn\u0026rsquo;t prevent the regression. 
It collapses the time between \u0026ldquo;regression exists\u0026rdquo; and \u0026ldquo;someone is fixing it.\u0026rdquo;\nBuilding vs. not building this The pipeline above isn\u0026rsquo;t complicated to build. It\u0026rsquo;s also not that complicated to maintain — until the edge cases around batching logic, threshold tuning, and webhook routing start to accumulate.\nIf you\u0026rsquo;d rather skip building it, I built RPAlert for exactly this reason — it handles the PerformanceObserver setup, threshold logic, and Discord/Slack routing. Install the SDK and wrap your root layout:\nnpm install rpalert-sdk import { RPAlertProvider } from \u0026#34;rpalert-sdk/react\u0026#34;; export default function RootLayout({ children }: { children: React.ReactNode }) { return ( \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;body\u0026gt; \u0026lt;RPAlertProvider apiKey=\u0026#34;YOUR_API_KEY\u0026#34;\u0026gt; {children} \u0026lt;/RPAlertProvider\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; ); } LCP, FCP, CLS, Long Tasks — all measured from that point. Alert fires when thresholds are crossed. There\u0026rsquo;s a free tier if you want to verify the pipeline end to end before committing.\nOne thing worth being clear about: RPAlert isn\u0026rsquo;t a Sentry replacement. Sentry tells you why something broke. RPAlert tells you when to go look at Sentry. Different jobs, and they work well together.\nThe goal isn\u0026rsquo;t zero regressions — that\u0026rsquo;s not realistic in any active codebase. The goal is making sure you find out before your users do.\n","permalink":"https://rpalert.dev/blog/posts/detecting-performance-regressions-right-after-you-deploy-403f/","summary":"\u003cp\u003eThree hours after a deploy, someone posts a screenshot in Slack. One-star review. App \u0026ldquo;takes forever to load.\u0026rdquo; You check Lighthouse — fine. You check Sentry — no errors. The regression started the moment you deployed. 
Nobody knew until a user complained.\u003c/p\u003e\n\u003cp\u003eThis is the normal state of affairs for most teams, and it\u0026rsquo;s not hard to fix.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"the-first-30-minutes-are-the-cheapest\"\u003eThe first 30 minutes are the cheapest\u003c/h2\u003e\n\u003cp\u003ePerformance regressions don\u0026rsquo;t announce themselves. They show up in production under conditions you can\u0026rsquo;t fully replicate: real devices slower than your dev machine, networks that drop in and out, CDN cache misses on fresh deploys.\u003c/p\u003e","title":"Catching React Performance Regressions Before Your Users Do"},{"content":"React Compiler 1.0 went stable in October 2025. Half the tutorials I saw declared useMemo dead. It\u0026rsquo;s not — and on most existing codebases, the compiler will silently skip the components you most want it to optimize.\nThe compiler handles one thing Re-render performance. It\u0026rsquo;s a build-time plugin that analyzes your components and inserts memoization automatically, without you writing it.\nThe genuinely useful part: it can memoize values in code paths after an early return, which manual useMemo can\u0026rsquo;t do.\nfunction Component({ isAdmin, data }) { if (!isAdmin) return null; const processed = expensiveTransformation(data); // compiler memoizes this return \u0026lt;Chart data={processed} /\u0026gt;; } What it doesn\u0026rsquo;t touch: first render cost, Long Tasks from large list renders, expensive one-time computations on mount. None of that changes.\nSilent bailouts When the compiler encounters code it can\u0026rsquo;t safely analyze — mutating props, reading mutable refs during render, class instances with internal state — it skips the component entirely. No warning. No error. It just leaves that component unoptimized and moves on.\nThis is the part that catches people off guard. You enable the compiler expecting your most expensive component to benefit, and nothing changes. 
The compiler bailed on it without telling you.\nThe diagnostic is in React DevTools. Successfully compiled components show a \u0026ldquo;memo ✨\u0026rdquo; badge. Check your heaviest components first. If the badge isn\u0026rsquo;t there, that\u0026rsquo;s your answer.\nMost existing codebases have violations The compiler works well on clean, pure function components with immutable data. Greenfield Next.js apps tend to fit. Existing apps often don\u0026rsquo;t.\nPatterns that cause silent skips:\n// Direct mutation during render function BadComponent({ items }) { items.push(newItem); // skipped return \u0026lt;List items={items} /\u0026gt;; } // Mutable ref read during render function AlsoProblematic({ inputRef }) { const value = inputRef.current; // skipped return \u0026lt;div\u0026gt;{value}\u0026lt;/div\u0026gt;; } // Class instance methods function WithClassInstance({ model }) { const label = model.getFormattedLabel(); // compiler can\u0026#39;t track internal state return \u0026lt;span\u0026gt;{label}\u0026lt;/span\u0026gt;; } None of these are bugs. Your app won\u0026rsquo;t break. But the compiler won\u0026rsquo;t help them.\nBefore enabling the compiler on an existing codebase, run the ESLint plugin first. eslint-plugin-react-hooks with recommended-latest includes compiler rules. The violation count is a rough proxy for actual benefit. High violation count means the compiler will spend most of its time bailing out.\nuseMemo isn\u0026rsquo;t dead There\u0026rsquo;s still one category where manual memoization is the right call: useEffect dependencies that need guaranteed reference stability.\nfunction Component({ userId }) { const options = useMemo(() =\u0026gt; ({ headers: { \u0026#39;X-User-Id\u0026#39;: userId } }), [userId]); useEffect(() =\u0026gt; { fetchData(options); }, [options]); } The compiler\u0026rsquo;s own docs call useMemo and useCallback valid escape hatches for this. 
The mental shift is from reaching for them by default to reaching for them when you have a specific reason. That\u0026rsquo;s a real improvement — just not elimination.\nFor existing code with lots of manual memoization, don\u0026rsquo;t rush to remove it. The docs explicitly recommend leaving it in place for now. Removing it can change the compiler\u0026rsquo;s output in ways that don\u0026rsquo;t surface until something re-renders unexpectedly.\nRoll it out on a subset first Next.js 15+ supports annotation mode, which only compiles files that opt in:\n// next.config.js const nextConfig = { experimental: { reactCompiler: { compilationMode: \u0026#39;annotation\u0026#39;, }, }, }; \u0026#39;use memo\u0026#39;; // top of each file you want compiled export function MyComponent() { // compiler applies here } One thing worth following: pin the exact compiler version with --save-exact. The React team has said memoization behavior may change in minor versions. Auto-upgrading and then debugging unexpected re-render changes is not a good use of a morning.\nWhat to write going forward New components: write them without manual memoization. Pure functions, no mutations during render, and the compiler handles it.\nExisting components: run ESLint first, check the DevTools badges after enabling, and don\u0026rsquo;t touch working useMemo/useCallback calls until you have a concrete reason.\nFor components doing genuinely heavy work — large list renders, expensive data transformations — the compiler helps with unnecessary re-renders, but the underlying cost is still there. Those still need virtualization, useTransition for non-urgent updates, or Web Workers for off-thread computation.\nThe compiler is a real improvement, particularly for deeply nested trees where unnecessary re-renders compound. It raises the floor for everyone. 
It just doesn\u0026rsquo;t replace thinking about where the expensive work actually is.\n","permalink":"https://rpalert.dev/blog/posts/memoization-in-the-react-compiler-era-what-actually-changes-3e6b/","summary":"\u003cp\u003eReact Compiler 1.0 went stable in October 2025. Half the tutorials I saw declared \u003ccode\u003euseMemo\u003c/code\u003e dead. It\u0026rsquo;s not — and on most existing codebases, the compiler will silently skip the components you most want it to optimize.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"the-compiler-handles-one-thing\"\u003eThe compiler handles one thing\u003c/h2\u003e\n\u003cp\u003eRe-render performance. It\u0026rsquo;s a build-time plugin that analyzes your components and inserts memoization automatically, without you writing it.\u003c/p\u003e\n\u003cp\u003eThe genuinely useful part: it can memoize values in code paths after an early return, which manual \u003ccode\u003euseMemo\u003c/code\u003e can\u0026rsquo;t do.\u003c/p\u003e","title":"What the React Compiler Quietly Skips"},{"content":"A Lighthouse score of 95 on staging doesn\u0026rsquo;t mean your users will see that. It means your machine, on your network, with your warm cache, hit that number once.\nThe gap between staging and production isn\u0026rsquo;t random bad luck. It has predictable causes that almost every team hits in the same order.\nYou\u0026rsquo;re not testing on anything like a real device The biggest one. Most web developers work on hardware that\u0026rsquo;s two to three times faster than the median device visiting their app. React component trees that reconcile in 40ms on a MacBook Pro take 180ms on a mid-range Android phone from 2022. That\u0026rsquo;s not a small difference — it crosses the line between \u0026ldquo;feels fast\u0026rdquo; and \u0026ldquo;feels like something is wrong.\u0026rdquo;\nCPU throttling in DevTools gets you closer. It\u0026rsquo;s not the same. 
A simulated 6x slowdown doesn\u0026rsquo;t capture memory pressure, thermal behavior, or how the GPU pipeline behaves on constrained hardware. Test on a physical mid-range Android device at least once per feature that touches render-heavy components. This is the most reliable signal you have. Everything else is an approximation.\nBrowserStack works if you don\u0026rsquo;t have a device. It\u0026rsquo;s slower to iterate but it\u0026rsquo;s still a real device.\nCold cache is a different product When you\u0026rsquo;re iterating on staging, you\u0026rsquo;ve hit that URL dozens of times. The browser cache is warm. The CDN has every asset hot. Your service worker is running.\nYour users don\u0026rsquo;t have any of that on their first visit. The first visit is what determines whether they stay or leave, and it\u0026rsquo;s exactly the scenario you never test.\nCold cache isn\u0026rsquo;t just slower — the loading sequence is different. Resources that appear instant in your workflow take seconds the first time. Font requests that look like they resolve immediately actually block layout. Preconnect hints that feel redundant do real work on a cold visit.\nRun your staging tests in an incognito window with cache disabled. It\u0026rsquo;s not a substitute for real-user data but it surfaces the worst offenders immediately.\nStaging data doesn\u0026rsquo;t tell you how your components scale Staging databases are seeded for developer convenience: enough data to see the UI, not enough to stress it. A list component that renders 50 items smoothly in staging might be rendering 5,000 in production, and nobody noticed because the test data never revealed it.\nReact re-renders scale with data. A component tree that\u0026rsquo;s fine at 50 records creates Long Tasks at 500. You don\u0026rsquo;t need to copy production data — synthetic data at realistic scale is enough. 
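One way to get there is a tiny seed helper that generates list data at whatever scale production actually sees. A minimal sketch, assuming a generic item shape (the field names and makeSeedItems are made up for illustration):

```javascript
// Generate production-scale synthetic data instead of a 50-row convenience fixture.
// The shape is hypothetical; mirror whatever your real records look like,
// including nested fields, since re-render cost scales with tree depth too.
function makeSeedItems(count) {
  return Array.from({ length: count }, (_, i) => ({
    id: i,
    name: `Item ${i}`,
    meta: { createdAt: new Date(Date.UTC(2022, 0, 1 + (i % 365))).toISOString() },
  }));
}

// Render your list component against this, not the 50-row staging seed.
const items = makeSeedItems(5000);
```

Pointing your staging seed script at a count like this takes minutes and surfaces the Long Tasks that small fixtures hide.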
But it has to be realistic scale.\nThird-party scripts you forgot about Analytics, chat widgets, A/B testing tools, tag managers. In staging they\u0026rsquo;re often disabled, sandboxed, or absent entirely. In production they load fully, compete for main thread time, and contribute to LCP delays in ways that never show up in local testing.\nRun your production URL through WebPageTest with mobile throttling enabled and look at the waterfall. You\u0026rsquo;ll see scripts you forgot were there. For each one, the question is simple: what breaks if this doesn\u0026rsquo;t load? If the answer is \u0026ldquo;nothing visible to users,\u0026rdquo; question whether it belongs.\nMeasure from real browsers, not synthetic tests This is where most teams underinvest.\nLighthouse runs in a controlled environment on a single configuration. It\u0026rsquo;s useful for catching regressions in a CI pipeline. It is not telling you what your actual users experience.\nPerformanceObserver runs in your users\u0026rsquo; browsers and gives you the real distribution:\nnew PerformanceObserver((list) =\u0026gt; { const entries = list.getEntries(); const lcp = entries[entries.length - 1].startTime; fetch(\u0026#39;/api/metrics\u0026#39;, { method: \u0026#39;POST\u0026#39;, body: JSON.stringify({ metric: \u0026#39;LCP\u0026#39;, value: lcp, page: location.pathname }), }); }).observe({ type: \u0026#39;largest-contentful-paint\u0026#39;, buffered: true }); Add this to production. Not staging. The data you want is from real users on real devices on real networks. Once you have it, performance stops being a feeling and becomes something you can track across deploys.\nThe deploy window is where this matters most. A CSS change that pushes your LCP element below the fold, or a new image that wasn\u0026rsquo;t optimized, can move your p75 LCP from 1.8s to 3.5s overnight. If you\u0026rsquo;re only checking periodically, you\u0026rsquo;ll find out from a user complaint. 
If you\u0026rsquo;re watching the real-user data, you\u0026rsquo;ll know within an hour of deploying.\nThe PerformanceObserver approach above works if you have somewhere to send the data and something watching the thresholds. If you\u0026rsquo;d rather not build and maintain that alerting layer yourself, RPAlert does exactly this for React apps — install the SDK, wrap your component, set your LCP threshold, and it posts to Slack or Discord within 60 seconds of a regression. There\u0026rsquo;s a free tier if you want to try it on a single app first.\nThree things worth doing this week Check your LCP element on your main pages. Verify it\u0026rsquo;s WebP, has fetchpriority=\u0026quot;high\u0026quot;, and has explicit dimensions. Twenty minutes, fixes the most common issue.\nAdd the PerformanceObserver snippet to production and log to your existing analytics. Just having the data changes what gets prioritized in your next sprint.\nRun your production URL through WebPageTest once with a mobile throttling preset. Look at what loads, in what order, and what you\u0026rsquo;ve forgotten about.\nThe compounding problem with performance is that each change seems fine in isolation, in the environment where it was built. Production is the only place where all of it adds up at once. Measuring there isn\u0026rsquo;t an advanced optimization — it\u0026rsquo;s the baseline for knowing what\u0026rsquo;s actually happening.\n","permalink":"https://rpalert.dev/blog/posts/why-your-app-feels-fast-in-staging-and-slow-in-production-27e6/","summary":"\u003cp\u003eA Lighthouse score of 95 on staging doesn\u0026rsquo;t mean your users will see that. It means your machine, on your network, with your warm cache, hit that number once.\u003c/p\u003e\n\u003cp\u003eThe gap between staging and production isn\u0026rsquo;t random bad luck. 
It has predictable causes that almost every team hits in the same order.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"youre-not-testing-on-anything-like-a-real-device\"\u003eYou\u0026rsquo;re not testing on anything like a real device\u003c/h2\u003e\n\u003cp\u003eThe biggest one. Most web developers work on hardware that\u0026rsquo;s two to three times faster than the median device visiting their app. React component trees that reconcile in 40ms on a MacBook Pro take 180ms on a mid-range Android phone from 2022. That\u0026rsquo;s not a small difference — it crosses the line between \u0026ldquo;feels fast\u0026rdquo; and \u0026ldquo;feels like something is wrong.\u0026rdquo;\u003c/p\u003e","title":"Why Your App Feels Fast in Staging and Slow in Production"},{"content":"Here\u0026rsquo;s something that doesn\u0026rsquo;t get talked about enough: your React app can have great LCP and FCP scores, pass all your Lighthouse checks, and still feel sluggish to use.\nThe culprit is usually Long Tasks.\nWhat\u0026rsquo;s a Long Task? The browser\u0026rsquo;s main thread handles everything — parsing HTML, running JavaScript, responding to user input, painting pixels. It can only do one thing at a time.\nA Long Task is any task that occupies the main thread for more than 50 milliseconds without a break. While a Long Task is running, the browser can\u0026rsquo;t respond to anything else. Click a button during a Long Task? Nothing happens — until the task finishes.\n50ms might sound short, but human perception starts noticing unresponsiveness around 100ms. Any Long Task over that threshold will feel broken to a user.\nWhy React Makes This Easy to Get Wrong React renders synchronously by default (outside of concurrent features). When you trigger a state update, React processes the entire component tree update in one go. 
If that update is expensive — lots of components, heavy computations, large lists — it becomes a Long Task.\nThe tricky part: this doesn\u0026rsquo;t show up in unit tests. It doesn\u0026rsquo;t throw an error. It doesn\u0026rsquo;t affect your Lighthouse score in a way that\u0026rsquo;s obvious. It just makes your app feel slow.\nSome common patterns that create Long Tasks in React apps:\nRendering large lists without virtualization\n// If `items` has 500+ entries, this creates a Long Task on every render function ItemList({ items }) { return ( \u0026lt;ul\u0026gt; {items.map(item =\u0026gt; \u0026lt;ExpensiveItem key={item.id} item={item} /\u0026gt;)} \u0026lt;/ul\u0026gt; ); } Expensive computations in render\nfunction Dashboard({ rawData }) { // This runs on every render, blocking the main thread const processed = rawData.reduce((acc, row) =\u0026gt; { return heavyTransformation(acc, row); }, {}); return \u0026lt;Chart data={processed} /\u0026gt;; } State updates that cascade through large component trees A single setState at the top of a deeply nested tree can trigger hundreds of re-renders in one synchronous block.\nHow to Detect Long Tasks The browser exposes this through PerformanceObserver:\nconst observer = new PerformanceObserver((list) =\u0026gt; { for (const entry of list.getEntries()) { console.log(\u0026#39;Long Task detected:\u0026#39;, { duration: entry.duration, // how long it ran (ms) startTime: entry.startTime, // when it started attribution: entry.attribution, // which script caused it (limited support) }); } }); observer.observe({ type: \u0026#39;longtask\u0026#39;, buffered: true }); Run this in your production app for a day and look at the output. 
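A day of raw console output is hard to read, so it helps to aggregate before you look. A minimal sketch of the bucketing, assuming you call it with each entry from the observer callback (recordLongTask and longTaskStats are hypothetical names):

```javascript
// Summarize Long Task entries instead of logging each one.
const longTaskStats = { count: 0, over100ms: 0, worstMs: 0 };

// In the browser this would be called from the PerformanceObserver callback;
// it only needs the entry's duration, so the logic is testable on its own.
function recordLongTask(entry) {
  longTaskStats.count += 1;
  if (entry.duration > 100) longTaskStats.over100ms += 1; // tasks users can feel
  longTaskStats.worstMs = Math.max(longTaskStats.worstMs, entry.duration);
}
```

Flushing that summary object once per page unload (or on an interval) gives you a per-visit picture rather than a wall of logs.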
If you\u0026rsquo;re seeing regular Long Tasks over 100ms — especially clustering around user interactions or page loads — you have a real problem.\nOne thing worth knowing: entry.attribution gives you some information about what caused the task, but browser support varies and the data is often vague. It\u0026rsquo;ll tell you it was a script, but not always which function.\nFor more precise attribution, the Chrome DevTools Performance panel is your best friend. Record a session, look for the red triangles at the top of the flame chart — those are Long Tasks. Click into them and you\u0026rsquo;ll see exactly which functions ran.\nFixing Long Tasks There\u0026rsquo;s no single fix. The approach depends on what\u0026rsquo;s causing the task.\nFor expensive renders: useMemo\nfunction Dashboard({ rawData }) { // Only recalculates when rawData changes const processed = useMemo(() =\u0026gt; { return rawData.reduce((acc, row) =\u0026gt; heavyTransformation(acc, row), {}); }, [rawData]); return \u0026lt;Chart data={processed} /\u0026gt;; } useMemo doesn\u0026rsquo;t prevent Long Tasks on the first render, but it prevents them from happening again unnecessarily.\nFor large lists: virtualization\nLibraries like react-window or @tanstack/virtual only render the rows visible in the viewport. 
If you have more than a couple hundred items in a list, this is almost always worth doing.\nimport { FixedSizeList } from \u0026#39;react-window\u0026#39;; function ItemList({ items }) { return ( \u0026lt;FixedSizeList height={600} itemCount={items.length} itemSize={50} width=\u0026#34;100%\u0026#34;\u0026gt; {({ index, style }) =\u0026gt; ( \u0026lt;div style={style}\u0026gt; \u0026lt;ExpensiveItem item={items[index]} /\u0026gt; \u0026lt;/div\u0026gt; )} \u0026lt;/FixedSizeList\u0026gt; ); } For non-urgent updates: useTransition (React 18+)\nfunction SearchPage() { const [query, setQuery] = useState(\u0026#39;\u0026#39;); const [results, setResults] = useState([]); const [isPending, startTransition] = useTransition(); function handleSearch(value) { setQuery(value); // urgent — update input immediately startTransition(() =\u0026gt; { setResults(searchItems(value)); // non-urgent — can be interrupted }); } return ( \u0026lt;\u0026gt; \u0026lt;input value={query} onChange={e =\u0026gt; handleSearch(e.target.value)} /\u0026gt; {isPending ? \u0026lt;Spinner /\u0026gt; : \u0026lt;ResultsList results={results} /\u0026gt;} \u0026lt;/\u0026gt; ); } useTransition tells React that the update inside startTransition is low priority. React can interrupt it if something more urgent comes in (like another keystroke). 
This is particularly effective for search-as-you-type patterns.\nFor truly heavy work: move it off the main thread\nIf you\u0026rsquo;re doing something computationally expensive that can\u0026rsquo;t be memoized — parsing a large dataset, running a sorting algorithm on thousands of items — consider a Web Worker:\n// worker.ts self.onmessage = (e) =\u0026gt; { const result = heavyComputation(e.data); self.postMessage(result); }; // component const worker = new Worker(new URL(\u0026#39;./worker.ts\u0026#39;, import.meta.url)); worker.onmessage = (e) =\u0026gt; { setResult(e.data); // runs on main thread, but the computation didn\u0026#39;t }; worker.postMessage(largeDataset); Web Workers don\u0026rsquo;t have access to the DOM, so this only works for pure computation. But when it applies, it\u0026rsquo;s the cleanest solution — zero Long Tasks, because the work literally doesn\u0026rsquo;t happen on the main thread.\nThe Connection to INP If you\u0026rsquo;ve looked at your Core Web Vitals recently, you might have noticed INP (Interaction to Next Paint) — the metric that replaced FID in 2024. It measures how long it takes the page to respond to user interactions.\nLong Tasks are the primary cause of bad INP. When a user clicks and there\u0026rsquo;s a Long Task in progress, the browser queues the input event and processes it after the task finishes. If that task runs for 200ms, your INP for that interaction is 200ms+ — in the \u0026ldquo;needs improvement\u0026rdquo; range.\nFixing Long Tasks improves INP directly.\nMonitoring This in Production DevTools is great for debugging a specific session, but it won\u0026rsquo;t tell you how often Long Tasks are happening for real users across different devices.\nThe PerformanceObserver code above works in production. A few things worth tracking:\nCount of Long Tasks per page load — is this happening on every visit or just edge cases? Duration — are they 60ms or 400ms? 
The severity matters. When they happen — during initial load, or triggered by user interactions? If Long Tasks spike after a deploy, that\u0026rsquo;s a signal something in the new code is blocking the main thread. Having an alert set up for unusual Long Task counts is worth it — it\u0026rsquo;s the kind of regression that\u0026rsquo;s easy to introduce and hard to notice until users start complaining.\nThis is actually what pushed me to build RPAlert — I kept finding out about Long Task spikes and LCP regressions from users instead of catching them myself. It handles the PerformanceObserver setup and sends a Discord alert when thresholds are crossed, so you don\u0026rsquo;t have to build the plumbing yourself.\nThat\u0026rsquo;s the gist of it. Long Tasks aren\u0026rsquo;t glamorous, but they\u0026rsquo;re one of the most direct causes of \u0026ldquo;this app feels slow\u0026rdquo; complaints — and they\u0026rsquo;re largely invisible without instrumentation. Worth adding to your monitoring stack.\n","permalink":"https://rpalert.dev/blog/posts/long-tasks-are-quietly-killing-your-react-apps-performance-3487/","summary":"\u003cp\u003eHere\u0026rsquo;s something that doesn\u0026rsquo;t get talked about enough: your React app can have great LCP and FCP scores, pass all your Lighthouse checks, and still feel sluggish to use.\u003c/p\u003e\n\u003cp\u003eThe culprit is usually Long Tasks.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"whats-a-long-task\"\u003eWhat\u0026rsquo;s a Long Task?\u003c/h2\u003e\n\u003cp\u003eThe browser\u0026rsquo;s main thread handles everything — parsing HTML, running JavaScript, responding to user input, painting pixels. It can only do one thing at a time.\u003c/p\u003e\n\u003cp\u003eA Long Task is any task that occupies the main thread for more than \u003cstrong\u003e50 milliseconds\u003c/strong\u003e without a break. While a Long Task is running, the browser can\u0026rsquo;t respond to anything else. Click a button during a Long Task?
Nothing happens — until the task finishes.\u003c/p\u003e","title":"Long Tasks Are Quietly Killing Your React App's Performance"},{"content":" If you\u0026rsquo;ve seen the term \u0026ldquo;Core Web Vitals\u0026rdquo; and kept scrolling, this article is for you.\nIt\u0026rsquo;s not just SEO jargon. These three metrics are the clearest signal we have for whether a web app feels fast to a real user — and they\u0026rsquo;re measurable directly from your React code.\nThis article covers what the three metrics actually mean, how to measure them without any external tools, and what to do when they\u0026rsquo;re bad.\nWhat Are Core Web Vitals? Core Web Vitals are three metrics defined by Google to measure user experience from a loading and interactivity perspective. They\u0026rsquo;re based on real user data, not synthetic benchmarks.\nThe three metrics:\nMetric Measures Good threshold LCP — Largest Contentful Paint Loading speed ≤ 2.5s FCP — First Contentful Paint Time to first visible content (supporting metric, not a Core Web Vital) ≤ 1.8s CLS — Cumulative Layout Shift Visual stability ≤ 0.1 There\u0026rsquo;s a fourth metric worth knowing: INP (Interaction to Next Paint), which replaced FID (First Input Delay) in 2024. INP measures how responsive the page feels when you click or type. We\u0026rsquo;ll cover it briefly at the end.\nLCP — Largest Contentful Paint What it measures: How long until the largest visible element on screen finishes loading.\nThis is usually a hero image, a large heading, or the main content block. 
Whatever takes up the most screen real estate \u0026ldquo;above the fold.\u0026rdquo;\nWhy it matters: LCP is the closest single metric to \u0026ldquo;when does this page feel loaded.\u0026rdquo; Users don\u0026rsquo;t think in milliseconds — they think \u0026ldquo;did it load or not.\u0026rdquo; LCP is when the answer flips from \u0026ldquo;no\u0026rdquo; to \u0026ldquo;yes.\u0026rdquo;\nWhat causes bad LCP:\nLarge, unoptimized images (the most common cause) Render-blocking JavaScript or CSS that delays the page from painting Slow server response times (TTFB) Third-party scripts loading before your content How to measure it in code:\nconst lcpObserver = new PerformanceObserver((list) =\u0026gt; { const entries = list.getEntries(); // Use the last entry — LCP can be updated as more content loads const lastEntry = entries[entries.length - 1]; console.log(\u0026#39;LCP:\u0026#39;, lastEntry.startTime, \u0026#39;ms\u0026#39;); console.log(\u0026#39;Element:\u0026#39;, lastEntry.element); // Which element triggered it }); lcpObserver.observe({ type: \u0026#39;largest-contentful-paint\u0026#39;, buffered: true }); Good: ≤ 2.5s\nNeeds improvement: 2.5s – 4.0s\nPoor: \u0026gt; 4.0s\nFCP — First Contentful Paint What it measures: How long until the browser renders the first piece of DOM content — any text, image, or non-white canvas element.\nWhy it matters: FCP is a leading indicator. A slow FCP almost always means a slow LCP. 
If FCP is bad, users are staring at a blank screen, which is the worst user experience possible — worse than a slow load, because users don\u0026rsquo;t even know if anything is happening.\nWhat causes bad FCP:\nRender-blocking resources (CSS and JS that pause HTML parsing) Server-side rendering issues Heavy JavaScript bundles that need to parse before anything renders How to measure it:\nconst fcpObserver = new PerformanceObserver((list) =\u0026gt; { for (const entry of list.getEntries()) { if (entry.name === \u0026#39;first-contentful-paint\u0026#39;) { console.log(\u0026#39;FCP:\u0026#39;, entry.startTime, \u0026#39;ms\u0026#39;); } } }); fcpObserver.observe({ type: \u0026#39;paint\u0026#39;, buffered: true }); Good: ≤ 1.8s\nNeeds improvement: 1.8s – 3.0s\nPoor: \u0026gt; 3.0s\nCLS — Cumulative Layout Shift What it measures: How much the page layout shifts unexpectedly after it starts loading.\nYou\u0026rsquo;ve experienced this. You\u0026rsquo;re reading an article, an ad loads above the paragraph you\u0026rsquo;re on, and everything shifts down. You accidentally click the ad. That\u0026rsquo;s a layout shift — and CLS measures how much of this happens across the full page lifecycle.\nWhy it matters: Layout shifts erode user trust instantly. 
They also cause accidental clicks, which is particularly bad on e-commerce and form pages.\nWhat causes bad CLS:\nImages and videos without width and height attributes set Ads, embeds, or iframes without reserved space Dynamically injected content above existing content Web fonts loading and causing text to reflow (FOIT/FOUT) How to measure it:\nlet clsValue = 0; const clsObserver = new PerformanceObserver((list) =\u0026gt; { for (const entry of list.getEntries()) { // Only count shifts that happen without user interaction if (!entry.hadRecentInput) { clsValue += entry.value; console.log(\u0026#39;Current CLS:\u0026#39;, clsValue); } } }); clsObserver.observe({ type: \u0026#39;layout-shift\u0026#39;, buffered: true }); Good: ≤ 0.1\nNeeds improvement: 0.1 – 0.25\nPoor: \u0026gt; 0.25\nHow These Metrics Relate to Each Other Understanding the sequence helps:\nNavigation starts ↓ FCP fires — first pixel of content rendered ↓ LCP fires — largest content element rendered ↓ Page becomes interactive ↓ CLS accumulates throughout — tracks all layout shifts In practice: if FCP is bad, LCP will be bad too. If FCP is fine but LCP is bad, the issue is usually the main content (an image, a large element) taking too long. 
CLS is independent — a page can have great LCP and terrible CLS.\nMeasuring in Your React App: A Complete Setup Here\u0026rsquo;s a minimal but complete implementation that collects all three metrics and logs them:\n// utils/web-vitals.ts type MetricName = \u0026#39;LCP\u0026#39; | \u0026#39;FCP\u0026#39; | \u0026#39;CLS\u0026#39;; type MetricReport = { name: MetricName; value: number; rating: \u0026#39;good\u0026#39; | \u0026#39;needs-improvement\u0026#39; | \u0026#39;poor\u0026#39;; }; function getRating(name: MetricName, value: number): \u0026#39;good\u0026#39; | \u0026#39;needs-improvement\u0026#39; | \u0026#39;poor\u0026#39; { const thresholds = { LCP: [2500, 4000], FCP: [1800, 3000], CLS: [0.1, 0.25], }; const [good, poor] = thresholds[name]; if (value \u0026lt;= good) return \u0026#39;good\u0026#39;; if (value \u0026lt;= poor) return \u0026#39;needs-improvement\u0026#39;; return \u0026#39;poor\u0026#39;; } export function initWebVitals(onMetric: (metric: MetricReport) =\u0026gt; void) { // LCP new PerformanceObserver((list) =\u0026gt; { const entries = list.getEntries(); const last = entries[entries.length - 1]; const value = last.startTime; onMetric({ name: \u0026#39;LCP\u0026#39;, value, rating: getRating(\u0026#39;LCP\u0026#39;, value) }); }).observe({ type: \u0026#39;largest-contentful-paint\u0026#39;, buffered: true }); // FCP new PerformanceObserver((list) =\u0026gt; { for (const entry of list.getEntries()) { if (entry.name === \u0026#39;first-contentful-paint\u0026#39;) { const value = entry.startTime; onMetric({ name: \u0026#39;FCP\u0026#39;, value, rating: getRating(\u0026#39;FCP\u0026#39;, value) }); } } }).observe({ type: \u0026#39;paint\u0026#39;, buffered: true }); // CLS let clsValue = 0; new PerformanceObserver((list) =\u0026gt; { for (const entry of list.getEntries()) { if (!(entry as any).hadRecentInput) { clsValue += (entry as any).value; onMetric({ name: \u0026#39;CLS\u0026#39;, value: clsValue, rating: getRating(\u0026#39;CLS\u0026#39;, 
clsValue) }); } } }).observe({ type: \u0026#39;layout-shift\u0026#39;, buffered: true }); } Usage in your React app:\n// App.tsx or main.tsx import { initWebVitals } from \u0026#39;./utils/web-vitals\u0026#39;; initWebVitals((metric) =\u0026gt; { console.log(`${metric.name}: ${metric.value} (${metric.rating})`); // Send to your analytics endpoint, logging service, etc. }); The Measurement Gap: Local vs. Production Here\u0026rsquo;s the part that most tutorials skip.\nLighthouse and DevTools give you synthetic measurements — they simulate a specific device and network condition in a controlled environment. This is useful for relative comparisons (\u0026ldquo;did my change make it better or worse?\u0026rdquo;), but it doesn\u0026rsquo;t tell you what real users experience.\nReal users have:\nOlder devices with slower CPUs Variable network conditions (3G, congested WiFi) Many browser tabs open Cold cache (no previous visit to your site) The only way to know your real-world Core Web Vitals is to measure in production, from real browsers. The code above does exactly that — it runs in your users\u0026rsquo; browsers and captures their actual experience.\nWhat you do with those measurements is a separate question. At minimum, log them somewhere. 
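Getting the measurements off the device can be as simple as navigator.sendBeacon, which still delivers while the page is unloading. Here is a minimal sketch, not from the setup above; the /api/vitals endpoint and the reportMetric / flushQueue names are placeholders for whatever your own collector expects:

```typescript
// Sketch: queue Web Vitals reports and flush them in a single beacon when the
// page is hidden. The /api/vitals endpoint is a placeholder for your collector.
type MetricReport = { name: string; value: number; rating: string };

const queue: MetricReport[] = [];

export function reportMetric(metric: MetricReport) {
  queue.push(metric);
}

export function serializeQueue(): string {
  // Round values to keep the payload small: CLS needs decimals, paint times don't
  return JSON.stringify(
    queue.map((m) => ({
      ...m,
      value: m.name === 'CLS' ? Math.round(m.value * 10000) / 10000 : Math.round(m.value),
    }))
  );
}

export function flushQueue() {
  if (queue.length === 0) return;
  const body = serializeQueue();
  queue.length = 0;
  // sendBeacon survives page unload, unlike a plain fetch; guard for SSR
  if (typeof navigator !== 'undefined' && 'sendBeacon' in navigator) {
    navigator.sendBeacon('/api/vitals', body);
  }
}

// visibilitychange -> 'hidden' is the last reliable moment to send;
// unload and beforeunload often never fire on mobile
if (typeof document !== 'undefined') {
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') flushQueue();
  });
}
```

Wired into the setup above it would look like initWebVitals((m) =\u0026gt; reportMetric(m)); batching everything into one beacon per page view keeps the network cost negligible.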
Ideally, set up alerting so you know when they degrade — particularly after deploys.\nQuick Wins for Each Metric If you\u0026rsquo;re seeing bad numbers, here\u0026rsquo;s where to start:\nBad LCP?\nCheck if the LCP element is an image — if so, add fetchpriority=\u0026quot;high\u0026quot; to it Convert images to WebP format If using Next.js, switch to next/image Check TTFB — if your server responds slowly, everything else suffers Bad FCP?\nIdentify and remove render-blocking CSS/JS Inline critical CSS If using SSR, check that your server isn\u0026rsquo;t doing too much work before sending HTML Bad CLS?\nAdd explicit width and height to all images and videos Reserve space for ads and dynamic embeds with CSS min-height Avoid inserting content above existing content after page load What About INP? INP (Interaction to Next Paint) replaced FID in March 2024. It measures how quickly the page responds to user interactions — clicks, taps, keyboard input.\nGood threshold: ≤ 200ms\nThe most common cause of bad INP in React apps is expensive state updates that block the main thread. If you\u0026rsquo;re seeing high INP, Long Tasks are usually the culprit — something is blocking the browser from responding to user input.\nWe\u0026rsquo;ll cover Long Tasks in depth in the next article.\nSummary Core Web Vitals aren\u0026rsquo;t just for SEO. They\u0026rsquo;re the most concrete way to measure whether your app feels fast to a real user.\nThe three metrics tell a story:\nLCP: Does the main content load quickly? INP: Does the page respond to interactions? CLS: Does the layout stay stable while loading?
FCP is worth tracking too — it\u0026rsquo;s a leading indicator for LCP — but it\u0026rsquo;s a supporting metric, not a Core Web Vital.\n","permalink":"https://rpalert.dev/blog/posts/core-web-vitals-explained-what-they-are-how-to-measure-them-and-why-they-matter-for-react-apps-3f2p/","summary":"\u003chr\u003e\n\u003cp\u003eIf you\u0026rsquo;ve seen the term \u0026ldquo;Core Web Vitals\u0026rdquo; and kept scrolling, this article is for you.\u003c/p\u003e\n\u003cp\u003eIt\u0026rsquo;s not just SEO jargon. These three metrics are the clearest signal we have for whether a web app \u003cem\u003efeels\u003c/em\u003e fast to a real user — and they\u0026rsquo;re measurable directly from your React code.\u003c/p\u003e\n\u003cp\u003eThis article covers what the three metrics actually mean, how to measure them without any external tools, and what to do when they\u0026rsquo;re bad.\u003c/p\u003e","title":"Core Web Vitals Explained: What They Are, How to Measure Them, and Why They Matter for React Apps"},{"content":"Monitoring and Alerting Are Different Jobs. Most React Teams Only Have One. Tags: #react #performance #webdev #javascript\nTuesday you deploy. Thursday your LCP climbs from 1.2s to 2.8s. Friday the reviews start coming in. Your monitoring dashboard shows the spike clearly — it just waited until you opened it to tell you.\nThat\u0026rsquo;s the gap. Not a tooling failure exactly. More of a category error.\nWhat monitoring tools are actually built for Datadog, New Relic, your APM of choice — these are built for historical analysis. What happened over the past week, which pages are consistently slow, where the backend bottlenecks are. That\u0026rsquo;s genuinely useful work, and they do it well.\nWhat they\u0026rsquo;re not designed for: telling you that something is wrong right now, minutes after it started. The data aggregation that makes historical analysis accurate also introduces latency.
By the time a threshold alert fires in most APM tools, your users have already been in the slow experience for 10–20 minutes.\nError trackers have the same gap from the other direction. Sentry is excellent at catching exceptions. An LCP that degraded from 1.8s to 3.5s after a deploy doesn\u0026rsquo;t throw an exception. Nothing fires.\nThe question your tools can\u0026rsquo;t answer \u0026ldquo;Did the deploy I just pushed make the app slower for real users right now?\u0026rdquo;\nThat\u0026rsquo;s a different question from \u0026ldquo;what does our performance look like historically\u0026rdquo; or \u0026ldquo;are there errors in production.\u0026rdquo; It needs a different data source — real browsers, in real time — and a different output: an alert somewhere your team will see it within minutes, not a dashboard someone has to remember to check.\nThe measurement layer is straightforward with the browser\u0026rsquo;s native API:\nnew PerformanceObserver((list) =\u0026gt; { const lcp = list.getEntries().at(-1)?.startTime; if (lcp \u0026amp;\u0026amp; lcp \u0026gt; 2500) { sendMetric({ metric: \u0026#39;LCP\u0026#39;, value: lcp, page: location.pathname }); } }).observe({ type: \u0026#39;largest-contentful-paint\u0026#39;, buffered: true }); This runs in your users\u0026rsquo; browsers and gives you real field data. The rest — threshold logic, batching, webhook routing to Slack or Discord — is buildable but accumulates edge cases fast.\nI kept rebuilding this same pipeline across different projects, which is why I built RPAlert.
SDK install and a provider wrapper:\nimport { RPAlertProvider } from \u0026#34;rpalert-sdk/react\u0026#34;; export default function RootLayout({ children }: { children: React.ReactNode }) { return ( \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;body\u0026gt; \u0026lt;RPAlertProvider apiKey=\u0026#34;YOUR_API_KEY\u0026#34;\u0026gt; {children} \u0026lt;/RPAlertProvider\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; ); } LCP, FCP, CLS, Long Tasks — all measured. Alert fires to Discord or Slack when thresholds are crossed.\nThis doesn\u0026rsquo;t replace what you already have RPAlert isn\u0026rsquo;t a Sentry or Datadog replacement. Sentry tells you why something broke. Datadog tells you what happened over time. RPAlert tells you when to go look at both of those.\nThe alert is the smoke detector. The investigation still happens with the tools you already have.\nHistorical monitoring and real-time alerting answer different questions. If your current stack only answers one of them, you\u0026rsquo;re going to keep finding out about performance regressions from your users.\n","permalink":"https://rpalert.dev/blog/posts/monitoring-vs-alerting-react-teams/","summary":"\u003ch1 id=\"monitoring-and-alerting-are-different-jobs-most-react-teams-only-have-one\"\u003eMonitoring and Alerting Are Different Jobs. Most React Teams Only Have One.\u003c/h1\u003e\n\u003cp\u003e\u003cem\u003eTags: #react #performance #webdev #javascript\u003c/em\u003e\u003c/p\u003e\n\u003chr\u003e\n\u003cp\u003eTuesday you deploy. Thursday your LCP climbs from 1.2s to 2.8s. Friday the reviews start coming in. Your monitoring dashboard shows the spike clearly — it just waited until you opened it to tell you.\u003c/p\u003e\n\u003cp\u003eThat\u0026rsquo;s the gap. Not a tooling failure exactly. 
More of a category error.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"what-monitoring-tools-are-actually-built-for\"\u003eWhat monitoring tools are actually built for\u003c/h2\u003e\n\u003cp\u003eDatadog, New Relic, your APM of choice — these are built for historical analysis. What happened over the past week, which pages are consistently slow, where the backend bottlenecks are. That\u0026rsquo;s genuinely useful work, and they do it well.\u003c/p\u003e","title":"Monitoring and Alerting Are Different Jobs. Most React Teams Only Have One."},{"content":"Welcome to the RPAlert blog.\nThis is where we\u0026rsquo;ll share insights on web performance monitoring, practical tips for improving Core Web Vitals, and updates on RPAlert.\nWhat to expect Web Performance — How to measure and improve LCP, INP, and CLS in production Using RPAlert — SDK integration guides and best practices Product Updates — Release notes and changelog highlights Stay tuned.\n","permalink":"https://rpalert.dev/blog/posts/hello-world/","summary":"\u003cp\u003eWelcome to the RPAlert blog.\u003c/p\u003e\n\u003cp\u003eThis is where we\u0026rsquo;ll share insights on web performance monitoring, practical tips for improving Core Web Vitals, and updates on RPAlert.\u003c/p\u003e\n\u003ch2 id=\"what-to-expect\"\u003eWhat to expect\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eWeb Performance\u003c/strong\u003e — How to measure and improve LCP, INP, and CLS in production\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eUsing RPAlert\u003c/strong\u003e — SDK integration guides and best practices\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eProduct Updates\u003c/strong\u003e — Release notes and changelog highlights\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eStay tuned.\u003c/p\u003e","title":"Introducing the RPAlert Blog"}]