When the Frontend Stops Being in the Browser
Once upon a time, there was a simple frontend: HTML, CSS, a little JavaScript, and it all lived in the user's browser. Then came Single Page Applications, and we moved more and more logic to the client side. It seemed like the future. The traditional Single Page Application model required all the rendering logic to run in the browser: the server would send a nearly empty index.html, the client would download a JavaScript bundle, execute it, and only then build the DOM. This approach has real performance costs: large bundles, long parsing times, and a heavy dependency on the performance of the user's device.
Today, frontend architecture has moved in a different direction: rendering on the server, execution at the edge, and less JavaScript in the browser. This isn't a return to the classic server-side rendering of the 2000s, but a more precise hybrid model, where each piece of the UI runs in the most suitable context—server, edge, or client—based on its needs.
Let's take a closer look:
Classic SPA (e.g., Create React App)
The browser receives a nearly empty index.html
Downloads a huge JS bundle (500 kB+)
Executes all the JavaScript
Only then builds the DOM and displays something
The user stares at a blank page while the browser executes the bundle.
Next.js with Server Components
The server runs the React component, generating pre-populated HTML
The browser receives pre-populated HTML, making it immediately visible
It only downloads the JavaScript for the Client Components (much smaller bundles)
No heavy client-side execution for static content
The user sees the page almost instantly.
Edge Computing
Every HTTP request incurs a latency cost that depends, among other things, on the physical distance between the client and the server. A centralized server in New York responds to a user in Milan with a network latency of about 80-120 ms just for the round-trip—before the server has even performed any processing. Multiplied by the requests in a session, the cost adds up.
Edge computing solves this by distributing code execution across a network of nodes geographically close to the users. Instead of routing each request to a single data center, the logic is executed at the closest node—often within milliseconds of the user.
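To make the arithmetic concrete, here is a back-of-the-envelope sketch. The round-trip times and request count are illustrative assumptions, not measurements:

```typescript
// Illustrative figures only: the RTTs and request count are assumptions.
const centralizedRttMs = 100   // Milan → New York, network round trip
const edgeRttMs = 10           // Milan → nearby edge node
const requestsPerSession = 15  // requests in a typical session

// Network latency alone, before any server-side processing:
const centralizedCostMs = centralizedRttMs * requestsPerSession
const edgeCostMs = edgeRttMs * requestsPerSession

console.log(centralizedCostMs) // 1500 ms
console.log(edgeCostMs)        // 150 ms
```

A second and a half per session spent purely on the network is the kind of cost that edge distribution removes without touching a line of application logic.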
How it works in practice
The major platforms — Vercel Edge Functions, Cloudflare Workers, Netlify Edge Functions — manage all the infrastructure. For a Next.js project on Vercel, moving a route to the edge requires just one line:
// app/api/hello/route.ts
export const runtime = 'edge'
export async function GET(request: Request) {
return new Response('OK')
}
Vercel automatically distributes this function across approximately 30 regions. No additional configuration is required.
Middleware: the most common use case
In Next.js, the middleware runs on the edge by default. This is the right place for logic that must intercept every request before it reaches the server: authentication, redirects, geographic customization.
// middleware.ts
import { NextRequest, NextResponse } from 'next/server'
export function middleware(request: NextRequest) {
const paese = request.geo?.country // populated automatically by Vercel
if (paese === 'IT') {
return NextResponse.redirect(new URL('/it', request.url))
}
}
This redirect runs on the edge node closest to the user, without involving the main server. For a user in Italy, the response typically comes from Frankfurt rather than a data center in the US.
Edge Function Limitations
Edge functions are not full-fledged Node.js servers. They run in an isolated V8 environment (similar to a Service Worker), which imposes specific constraints:
✅ Supported
Fetch to external APIs
JWT token verification
URL redirect and rewrite operations
Header and cookie reading
❌ Not supported
Traditional database queries (Postgres, MySQL)
Native Node.js modules (fs, path, crypto, etc.)
CPU-intensive operations
To access a database from the edge, you need to use solutions compatible with this environment: Turso (distributed SQLite), Neon (serverless Postgres with HTTP API), or Upstash (Redis with HTTP API).
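As an example of what does run comfortably at the edge: reading the claims of a JWT needs only Web-standard APIs (string handling, atob, JSON), with no Node.js modules. A minimal sketch follows; signature verification is deliberately omitted (in a real setup it would go through Web Crypto or a library such as jose):

```typescript
// Decode a JWT payload using only Web-standard APIs, so it also runs
// in an edge runtime. This reads claims only — it does NOT verify the
// signature; never trust these values without verification.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const [, payload] = token.split('.')
  if (!payload) throw new Error('Malformed JWT')
  // base64url → base64, restoring any stripped padding
  const b64 = payload.replace(/-/g, '+').replace(/_/g, '/')
  const padded = b64.padEnd(b64.length + ((4 - (b64.length % 4)) % 4), '=')
  return JSON.parse(atob(padded))
}

// Quick demonstration with a hand-built payload segment:
const demoPayload = btoa(JSON.stringify({ sub: 'user-1' }))
console.log(decodeJwtPayload(`header.${demoPayload}.signature`))
```

Everything here (atob, btoa, JSON) is part of the Web platform, which is exactly why this kind of logic is listed among the supported edge operations.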
React Server Components and Next.js
With classic SPAs, the JavaScript bundle includes everything: components, fetching logic, and rendering libraries. Even parts of the UI that don't require interactivity are sent to the client as code to be executed. This increases parsing and execution time, directly impacting Time to Interactive (TTI).
React Server Components (RSC), introduced permanently with the Next.js App Router, address this issue at its root: components that don't require interactivity are rendered exclusively on the server and sent to the client as HTML—without the corresponding JavaScript in the bundle.
Server Components vs. Client Components:
The distinction is clear:
Server Components (default in App Router):
Run only on the server side, not in the browser.
They can use async/await directly, making server-side and database requests.
They do not generate JavaScript in the client bundle.
They cannot use useState, useEffect, or event handlers.
Client Components (marked with 'use client'):
Run in the browser, like traditional React components.
They support all local state, hooks, and event handlers.
They generate JavaScript that is downloaded and executed by the client.
// ProductPage.tsx — Server Component (no 'use client')
// Accesses the DB directly; generates no JS in the bundle
async function ProductPage({ id }: { id: string }) {
const product = await db.products.findUnique({ where: { id } })
return (
<div>
<h1>{product.name}</h1>
<p>{product.description}</p>
<AddToCartButton productId={id} /> {/* Client Component */}
</div>
)
}
// AddToCartButton.tsx — Client Component
'use client'
import { useState } from 'react'
export function AddToCartButton({ productId }: { productId: string }) {
const [added, setAdded] = useState(false)
return (
<button onClick={() => setAdded(true)}>
{added ? 'Added ✓' : 'Add to cart'}
</button>
)
}
Core Web Vitals
What They Are and Why They Matter
Core Web Vitals are the metrics Google uses to measure the quality of a web page's user experience. Since 2021, they have been a ranking factor. In 2026, they have become a standard design reference, not an end-of-project checklist.
The three main metrics:
LCP – Largest Contentful Paint: Time to load the main visible content. A good threshold is less than 2.5 seconds.
FID – First Input Delay: Latency between the user's first click/tap and the browser's response. A good threshold is less than 100 ms. (Note: in March 2024 Google replaced FID with INP, Interaction to Next Paint, as the official responsiveness metric.)
CLS – Cumulative Layout Shift: Cumulative shift of elements during loading. A good threshold is less than 0.1 (https://web.dev/articles/cls?hl=en#layout-shift-score)
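These "good" thresholds, together with the "poor" boundaries Google publishes (4 s for LCP, 300 ms for FID, 0.25 for CLS), can be encoded in a small classifier. A sketch:

```typescript
// Classify a metric value into the standard good / needs-improvement / poor
// buckets, using the thresholds published on web.dev.
type Rating = 'good' | 'needs-improvement' | 'poor'

const thresholds = {
  LCP: [2500, 4000],  // ms
  FID: [100, 300],    // ms
  CLS: [0.1, 0.25],   // unitless score
} as const

function rate(metric: keyof typeof thresholds, value: number): Rating {
  const [good, poor] = thresholds[metric]
  if (value <= good) return 'good'
  if (value <= poor) return 'needs-improvement'
  return 'poor'
}

console.log(rate('LCP', 1800)) // 'good'
console.log(rate('CLS', 0.3))  // 'poor'
```

This is the same bucketing that PageSpeed Insights and Lighthouse apply when they color a metric green, orange, or red.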
You can measure them with Google Search Console (real user data), PageSpeed Insights (synthetic analysis), or Lighthouse integrated into Chrome DevTools.
How Edge and Server Components Impact Web Vitals
The three topics in this article are directly related to the metrics:
LCP depends largely on TTFB (Time to First Byte): the time between the request and the first byte of the response. Edge computing lowers TTFB by reducing network latency. Server Components further lower TTFB because the server sends pre-rendered HTML, without waiting for client-side JavaScript to execute.
FID measures how busy the browser is when the user interacts for the first time. A heavy JavaScript bundle blocks the main thread during parsing and execution, worsening FID. By reducing the JavaScript sent to the client via Server Components, the main thread is freer and FID improves.
CLS is often caused by asynchronous hydration of SPAs: the browser first displays the empty HTML, then adds content via JavaScript, causing visible layout shifts. With Server Components, the HTML arrives pre-rendered, eliminating this source of CLS.
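Since LCP hinges on TTFB, it is worth measuring TTFB directly. A minimal sketch, using a mock object with the shape of a Navigation Timing entry (in the browser, the real entry comes from performance.getEntriesByType('navigation')):

```typescript
// TTFB = time from the start of navigation to the first byte of the
// response. The interface mirrors the relevant fields of
// PerformanceNavigationTiming; the entry below is a mock for illustration.
interface NavTimingLike {
  startTime: number      // navigation start
  responseStart: number  // first byte of the response received
}

function timeToFirstByte(entry: NavTimingLike): number {
  return entry.responseStart - entry.startTime
}

// Mock values; in the browser you would pass the real navigation entry:
const mockEntry = { startTime: 0, responseStart: 180 }
console.log(timeToFirstByte(mockEntry)) // 180 ms
```

Edge execution and Server Components both act on exactly this number: the former by shortening the network path, the latter by removing the wait for client-side rendering.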
Accessibility and performance: the same solution
Many optimizations for Core Web Vitals coincide with WCAG accessibility requirements. Some concrete examples:
Correct semantic HTML (hierarchical headings, ARIA landmarks) improves both screen reader navigation and Lighthouse scores
Images with explicit width and height prevent layout shift (CLS) and provide layout information to assistive technologies
Fonts loaded with font-display: swap avoid render blocking (with an impact on LCP) and ensure readability even before the custom font is available
JavaScript-independent content reduces CLS and keeps the page accessible even without JavaScript enabled
Treating performance and accessibility as a single area of work reduces duplication of effort and leads to better results on both fronts.
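The width/height point can be made concrete: given explicit dimensions, the browser computes the intrinsic aspect ratio and reserves vertical space before the image has loaded, so nothing shifts. A sketch of that computation (the function name is ours, purely for illustration):

```typescript
// With explicit width/height attributes, the browser derives the intrinsic
// aspect ratio and reserves space before the image downloads, avoiding CLS.
function reservedHeight(
  intrinsicWidth: number,
  intrinsicHeight: number,
  renderedWidth: number
): number {
  return renderedWidth * (intrinsicHeight / intrinsicWidth)
}

// An 800×600 image laid out at 400 px wide reserves 300 px of height
console.log(reservedHeight(800, 600, 400)) // 300
```

Omit the attributes and this reservation cannot happen: the image pops in at full height after loading, pushing the content below it, which is precisely the shift CLS penalizes.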