
On the internet, size does matter.

by Eric Thomas D. Cabigting

The Current Setup

The issue began with a portfolio that, on paper, was already doing what I needed: a clean Next.js application that neatly listed my projects and highlighted my experience as a full‑stack engineer. It loaded, it displayed correctly, and it served its purpose. Yet, when I inspected the network capture of a fresh, cold request, the HTML payload came back at 23 KB. That number stood out for a reason beyond sheer size; it was well over the “14 KB rule,” a guideline that many web performance engineers treat as the budget for how much a server can deliver in the first round trip of a new connection.

The “14 KB rule” is rooted in the way the Transmission Control Protocol (TCP) transports data. TCP breaks everything it sends into segments, each of which must fit within the Maximum Segment Size (MSS). The MSS is typically set just under the Maximum Transmission Unit (MTU) of the underlying network, which for Ethernet is 1500 bytes. Accounting for the TCP and IP headers (roughly 40 bytes total), the payload that can travel in a single packet without fragmentation ends up around 1460 bytes. On top of that sits TCP slow start: a new connection doesn’t yet know how much the path can handle, so the server begins with a small initial congestion window, commonly 10 segments (RFC 6928), and only grows it after the client acknowledges what has already arrived. Ten segments of roughly 1460 bytes each comes to about 14,600 bytes, which is where the 14 KB figure comes from (roughly 10–12 KB of actual content once headers and other overhead are accounted for). Staying under this threshold means the entire first response fits inside that initial window and can be delivered in a single round trip, without the server pausing to wait for acknowledgements.

When a response exceeds this limit, the server sends its initial window of segments and then has to stop and wait for acknowledgements before it can send the rest, which adds at least one extra round trip. On high‑latency connections, such as cellular networks, satellite links, or congested Wi‑Fi, those extra round trips become noticeable, often adding hundreds of milliseconds to the page load time. Moreover, each extra packet carries its own header overhead, subtly inflating the total amount of data transmitted. For users on limited data plans or in regions where bandwidth is expensive, a larger payload translates directly into higher costs and a poorer experience.

My goal, therefore, was clear: bring the initial HTML payload back under that 14 KB ceiling. Doing so would keep the entire response inside TCP’s initial congestion window, avoid the extra round trips of slow start, and let the browser begin rendering as soon as the first flight of packets arrived. The challenge was to achieve this reduction without sacrificing the functionality, styling, or professionalism of the site: essentially an upgrade that respected the economics of the web rather than a full rewrite from scratch. This constraint set the stage for a series of optimizations, including code‑splitting, lazy‑loading of non‑essential assets, compressing inline styles, and pruning unused dependencies, all aimed at delivering a leaner, faster, and more predictable first‑paint experience.

The Starting Point

The original portfolio was, in many ways, a complete showcase of everything I wanted the web to know about me. It opened with a Hero banner that introduced me and set the tone, then flowed into an About Me narrative that explained my background and motivations. Below that lived a Projects gallery where each case study was displayed with screenshots and links to the live demos.

I also included a Blog section that hosted a collection of technical write‑ups—long, code‑heavy posts that dove deep into patterns, libraries, and performance tricks I had discovered while building software.

Everything on that page was rendered on the server for every request. The server generated the full HTML markup—including the Hero, About, Projects, Blog, Experience, Skills, and Footer—each time a visitor loaded the site. Consequently, each request transferred the entire content payload, regardless of whether the visitor needed all of the information immediately. This all‑inclusive, server‑side approach formed the baseline from which I began the performance overhaul.

All rendered server-side. All sent with every single request.


The math was concerning:

  • Header + Hero: ~3 KB
  • About Me + Projects: ~4 KB
  • Blog Posts (3 with code blocks): ~4 KB
  • Experience (7 jobs): ~8 KB
  • Skills (~85 items): ~3 KB
  • Footer: ~1 KB
  • Total: ~23 KB
Twenty-three kilobytes. For a website that should be a quick introduction.
I knew I could do better. Every professional engineer knows that feeling — looking at your own work and seeing the improvements you'd make if someone just gave you the time.

The Problems and Solutions

The first thing I tackled was the unnecessary weight of the page’s initial markup. Because every section was rendered server‑side, users received a massive HTML blob that included content they would never see until they scrolled: the full blog posts, the exhaustive list of 85 technologies, and the footer. I moved those three areas to lazy‑loaded client components, displaying lightweight skeletons while the data fetched after mount, which instantly trimmed the payload and delivered a snappier first paint.
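
To make the pattern concrete, here is a minimal sketch of deferring one below‑the‑fold section with next/dynamic; the component name, import path, and skeleton classes are placeholders rather than the portfolio’s actual code.

```tsx
"use client";

import dynamic from "next/dynamic";

// Load the Skills section only on the client, after the page has mounted.
// ssr: false keeps its markup out of the server-rendered HTML entirely.
const Skills = dynamic(() => import("./Skills"), {
  ssr: false,
  // Lightweight skeleton shown while the chunk is being fetched.
  loading: () => <div className="h-64 animate-pulse rounded bg-zinc-800" />,
});

export default function SkillsSection() {
  return <Skills />;
}
```

Each section handled this way becomes its own client chunk, so its markup and data stay out of the initial HTML response and arrive only after hydration.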

With the page now leaner, the blog’s code snippets surfaced as the next pain point. The original highlighter tangled with the App Router, producing hydration errors and inconsistent styling that made the posts look amateurish. Swapping it for a server‑only highlighter that renders pre‑styled HTML removed all client‑side JavaScript for code blocks, resulting in flawless, editor‑accurate highlighting.
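
The post doesn’t name the replacement, but a server‑only highlighter such as Shiki fits the description; the sketch below assumes Shiki inside a React Server Component, with the component name chosen for illustration.

```tsx
import { codeToHtml } from "shiki";

// A React Server Component: highlighting runs on the server, and the client
// receives only pre-styled HTML, so no highlighting JavaScript is shipped.
export default async function CodeBlock({ code, lang }: { code: string; lang: string }) {
  const html = await codeToHtml(code, { lang, theme: "github-dark" });
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}
```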

With the code blocks cleaned up, an odd visual glitch stood out: my profile picture, cropped to a square in the CMS, still appeared rectangular on the site. The issue was that the cropping data lived on the image reference, not the asset itself, so my query never applied the crop. Once I pulled the crop field directly from the image reference, the portrait displayed exactly as intended.
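
The CMS isn’t named in the post, but the reference‑versus‑asset split it describes matches how Sanity stores crop and hotspot data; as an assumed illustration, the query has to select those fields from the image reference rather than only the dereferenced asset.

```ts
// Hypothetical GROQ query for a Sanity-style CMS (an assumption; the post
// does not name its CMS). Crop and hotspot live on the image reference,
// so they are selected alongside the dereferenced asset.
const profileQuery = /* groq */ `
  *[_type == "profile"][0]{
    photo{
      asset->{ url },  // the raw asset is the uncropped original
      crop,            // cropping data stored on the reference
      hotspot
    }
  }
`;
```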

With the portrait fixed, the footer’s shortcomings became obvious. It was functional but visually unremarkable, and it exposed my email address to bots. I rebuilt it as a three‑column layout that showcases recent blog posts, quick navigation links, and contact details, and I added a simple Unicode‑direction obfuscation to mask the email from scrapers while keeping it readable for humans.
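
The exact obfuscation isn’t shown, but one common way to implement the direction trick is to store the address reversed and let CSS flip it back for human readers; the component below is an assumed sketch of that approach.

```tsx
// Assumed sketch: the address is stored reversed ("moc.elpmaxe@olleh"),
// so plain-text scrapers harvest gibberish, while bidi-override reverses
// the glyph order visually for human readers.
export function ObfuscatedEmail({ reversed }: { reversed: string }) {
  return (
    <span style={{ unicodeBidi: "bidi-override", direction: "rtl" }}>
      {reversed}
    </span>
  );
}
```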

The newly structured footer naturally led me to rethink the projects showcase. A conventional grid forced every card to share the same height, resulting in awkward gaps because each project varied in description length and tag count. Implementing a masonry‑style column layout let cards flow naturally, creating a polished mosaic that scales from a single column on mobile to three columns on desktop.
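
The markup isn’t included in the post, but a CSS‑columns layout is one straightforward way to get the effect described; the Tailwind classes and prop shape below are assumptions for illustration.

```tsx
// Cards flow top-to-bottom through one, two, or three columns depending on
// the breakpoint; break-inside-avoid keeps each card from being split.
export function ProjectsGrid({ projects }: { projects: { id: string; title: string }[] }) {
  return (
    <div className="columns-1 md:columns-2 lg:columns-3 gap-6">
      {projects.map((p) => (
        <article key={p.id} className="mb-6 break-inside-avoid rounded border p-4">
          {p.title}
        </article>
      ))}
    </div>
  );
}
```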

Finally, with a cohesive project grid and an engaging footer in place, the site’s navigation needed the same level of polish. I standardized the header and footer links, ensuring that routes like Home, Blog, and Projects behaved consistently across all breakpoints, giving the portfolio a seamless, professional experience from start to finish.

The Conclusion

The final insight from the upgrade is that, unlike the mantra of “move fast and break things,” success on the web hinges on restraint: every byte carries a hidden cost in bandwidth, latency, and even mobile battery life, so the sites that respect those costs load instantly and feel professional. By shrinking the initial payload, streamlining code highlighting, fixing image metadata, redesigning the footer, re‑architecting the projects grid, and unifying navigation, I turned a bulky showcase into a lean, responsive experience that demonstrates not just what I’ve built but the depth of understanding I bring to the systems I claim to master.
