A very thorough article discussing how to convert a Chrome browser into an SEO testing environment, through a combination of browser settings and third-party add-ons. I personally like the idea of setting this up as a Chrome "profile", so you can load it and just begin testing, without resetting everything each time. Alternatively, the article suggests using Chrome Canary for the test bench.
In brief, this works by using the following steps:
Grab the latest Googlebot user-agent string from the Chrome dev tools (Menu -> More Tools -> Network Conditions -> User Agent -> Uncheck "use browser default" -> Select "Googlebot" (or similar) from the dropdown -> Copy the string shown)
Add that string to User Agent Switcher as a new User Agent (Googlebot, paste string, Bots & Spiders, Replace, GB <-- these are the settings I used, can differ) and then select it from the plugin
Disable service workers in the Chrome dev tools (Application tab -> Service Workers -> Check "Bypass for Network")
Disable local cache (Dev tools menu -> More Tools -> Network Conditions -> Check "Disable Cache")
Disable all cookies (chrome://settings/cookies) and privacy-infringing APIs (chrome://settings/content) i.e. Location, Camera, Microphone, Notifications, and Background Sync
Disable JavaScript (suggestion is to use the Web Developer extension)
They also suggest using a VPN to test from the US, as that is where Googlebot crawls from. Similarly, the NoJS Side-by-Side bookmarklet is suggested as a good way to quickly test the site with and without JavaScript running (seeing as Googlebot can now use JavaScript in some scenarios).
The bulk of the article is about a kind of pseudo-scraper for populating links and references whilst writing a blog post or article. But the broader conversation piece here is whether linking is still needed in an age of LLM-driven content, and on that the author and I agree: it is. Now more than ever. Not because linking serves any specific purpose in terms of differentiating humans from machines (LLMs are just as capable of generating linked Markdown etc.), but because, as Crash Lime writes, doing so should help build a "Web of Trust": a web of verified information, which is known to be at least relevant. That is something LLMs (and, indeed, search engines) stand less hope of pulling off... for now.
On the ending of an Age (and just a really fun way of describing the situation):
Perhaps we’re at the end of the old-web, now a corner relegated to hobbyists, as all text ever written is absorbed in a single differentiable scream.
On the concept of a "web of trust":
Remember that you are not just linking, but building a Web of Trust. Links need to be a signal that cuts through the noise, not vice-versa.
Maggie's writing is always fantastic, and their thoughts on digital gardens are well worth reading. The history covered here is nothing new to me personally, but the article presents it in an ideal manner. I also found myself thoroughly agreeing with their "patterns" of digital gardening. Just a great article, full of quote-worthy comments, on a topic that continues to intrigue.
On where the concept of digital gardening originated:
If anyone should be considered the original source of digital gardening, it's Caufield. They are the first to lay out this whole idea in poetic, coherent words.
On the benefit of a true web of information, sprawling as it is:
You get to actively choose which curiosity trail to follow, rather than defaulting to the algorithmically-filtered ephemeral stream.
On the way digital gardens tend to form bi-directional links and rabbit warrens of interconnected information:
Gardens don't consider publication dates the most important detail of a piece of writing. Dates might be included on posts, but they aren't the structural basis of how you navigate around the garden. Posts are connected to other posts through related themes, topics, and shared context.
One of the best ways to do this is through Bi-Directional Links – links that make both the destination page and the source page visible to the reader. This makes it easy to move between related content.
On the shift that occurred in blogs, as they morphed from personal (yet public) journals into reverse-chronological personal branding exercises:
We act like tiny magazines, sending our writing off to the printer.
On the difference with a garden:
Gardens are designed to evolve alongside your thoughts. When you first have an idea, it's fuzzy and unrefined.
On how we use social media differently (I'd never really considered this, but it feels very true):
We seem to reserve all our imperfect declarations and poorly-worded announcements for platforms that other people own and control.
And I adore the concept of a chaos stream within that same context:
Things we dump into private WhatsApp group chats, DMs, and cavalier Tweet threads are part of our chaos streams - a continuous flow of high noise / low signal ideas.
One final, clever analogy between digital spaces and agriculture, specifically discussing how digital gardens should use multiple mediums, not just rely on text and links like a Wiki:
Historically, monocropping has been the quickest route to starvation, pests, and famine. Don't be a lumper potato farmer while everyone else is sustainably intercropping.
An absolutely fantastic little utility site that takes pretty much any SVG and removes all of the white space. You can drag'n'drop, upload image files, or just paste markup directly into a text input, and it has worked on everything I've thrown at it so far. Occasionally the preview image is a little odd (it seems to always have a 1:1 aspect ratio), but the generated output code normally preserves the logical aspect ratio. Just super helpful, all around 👍
A wonderfully well-written look at the state of the "Fediverse", and whether or not that term has any value left in it. There's a lot of interesting history and some slightly spicy takes in here, but for me the most useful part is the framing of the distributed social graph that the Fediverse creates as a "Social Archipelago". That's just a nice term that helps visualise the reality of it all a bit better:
“The Fediverse” needs to end, and I don’t think anything should replace it. Speak instead about communities, and prioritize the strength of those communities. Speak about the way those communities interact, and don’t; the way they form strands and islands and gulfs. I’ve taken to calling this the Social Archipelago.
One of the most entertaining rants I've read in some years! And whilst it may no longer be that pertinent to my current career, as the person who was once in charge of developing a nested, tangled mess of VBA macros and Excel sheets to partially automate several workflows, I can also deeply sympathise 😂
On the issue of "but we're only doing this for now":
Do you know how many times I've written a script that was really only run once? Never. It has never happened across my entire career. Every single thing I've ever written has been fucking welded into the soul of every organisation I've ever worked at.
On what happens once you've opened the door to Excel:
Beyond this lies naught but trying to work out why all the numbers are wrong, only to realize that Excel thought those IDs were integers and dropped all the leading zeroes.
I've seen a multi-million dollar analytics platform that is dynamically constructed from Excel spreadsheets. Do you think that was the intention when the damn thing was being assembled? Do you think any sane person would ever want to do that? No, of course not. One script got written that could parse a spreadsheet, and next thing you know, you're running a greedy search algorithm every sprint, and it never quite makes sense to spend the time removing them.
I'm not sure I agree fully with everything Jared has written here – and there's a strong feeling of bias-tinted vision to some of the claims – but I enjoyed the overall trend of the argument and felt there were a few nuggets worth saving. In particular, I do think that the heavy focus on React is churning out a generation of developers that will be underskilled once React is no longer as popular. And that the way the React community has behaved for the better part of the last decade is appalling. But I'm not quite so sure that the nostalgia towards other web frameworks is truly warranted; nor do I find the piece's framing of front-end development particularly useful. Like it or not, React provides a suite of APIs and tools that solve otherwise tricky problems easily. And whilst I agree with Jared's inferences that not all of the solutions it provides are actually solving for genuine user needs, some of them definitely are. So I think there's a little more grey painted into the view before me than the black and white imagined in this piece 😉
On the disparity between the bulk majority of web development work and the online discourse around it:
There has been a small but mighty ecosystem of “influencers” peddling a sort of “pop culture developer abstractions” ethos on the web whether it’s about React, or CSS-in-JS, or Tailwind CSS, or “serverless”, or “microservices”, or (fill in the blank really)—and they’re continuing to gaslight and obfuscate the actual debates that matter.
On the role of JavaScript on the web, and in particular the issue with learning it as a foundational part of the web stack (or even worse, learning its abstractions instead of the underlying technologies like HTML and CSS):
JavaScript is not required to build a simple web site. JavaScript is an “add-on” technology if you will, the third pillar of the web frontend alongside HTML and CSS. HTML, CSS, and (eventually) JavaScript. Not JavaScript, JavaScript, and JavaScript.
On how we've wound up in a situation where we have "trends" of technology that fail to become foundations of future tools:
The problem is an industry rife with faulty thinking that assumes (a) popular technology is popular because it’s good, and (b) the web platform itself is somehow severely lacking even in 2023, so heavily abstracted frontend frameworks remain a necessity for programmer happiness.
Like Matthias, I had always assumed that assistive tech (e.g. screen readers) would have some method of translating <strong> and <em> HTML elements into emphasised content. A change in voice inflexion; a user hint; something. Apparently, they don't, and the one time NVDA added the feature, it was so universally despised that they subsequently removed it. Not hid it behind a settings flag; full-on deleted it entirely. Useful to be aware of!
On why you should still use these semantic elements anyway (not least of all, they're a better DX than arbitrarily styled <span> wrappers, and sighted users do benefit from visual emphasis):
Does this mean that you don’t need to use semantic elements to convey emphasis? Of course not. You never know where the semantics still matter. And it is also just the right way to indicate emphasis in HTML for the majority of users. But it is nevertheless interesting and important to know that screen readers don’t care by default.
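A minimal illustration of the point (my own markup, not from Matthias's post): both of these render identically for sighted users, but only the first carries any semantics for tooling that chooses to look for it.

```html
<!-- Semantic emphasis: the meaning lives in the markup itself,
     even if screen readers don't announce it by default. -->
<p>Please <em>do not</em> refresh the page while the upload is running.</p>

<!-- Purely visual emphasis: looks the same, says nothing. -->
<p>Please <span style="font-style: italic;">do not</span> refresh the page while the upload is running.</p>
```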
There's nothing too exceptional about this video from Tom Scott. I've seen similar arguments made before – I've made them myself – but I think Tom does a very good job of outlining my precise concerns about the current state of machine learning software, and the "soft AI" that seems to be proliferating rapidly. I've felt for a while that ChatGPT, in particular, is a little different, and the Napster analogy in this video perfectly nails where that feeling is coming from. And, as much as I am enjoying the current Bing-powered memes about the populations of Mars or how search engines are now gaslighting people about the current year, I remain very concerned that we're on the brink of making a lot of the mistakes of the Napster era all over again.
So, whilst it will be fascinating to find out where on that sigmoid curve we actually are... I also really don't want to know 😅
A wonderful talk from Manuel on the hidden complexities of HTML. There's a huge amount of interesting stuff going on here in terms of writing accessible, semantic websites, but I particularly liked the way various issues were framed. I think Manuel is completely correct in their assertion that HTML is surprisingly hard – it's very easy to explain the syntax, but incredibly complex to learn how it actually works – and that this secret learning curve is the root cause of many performance and accessibility issues on the web today.
We're wrongfully downplaying the complexity of HTML due to the simplicity of its syntax.
On one common concern with a lot of accessibility issues:
All of these issues have one thing in common, and that is that they should be in the document, but are not visible in the design.
👆 This is a fantastic framing method. So many accessibility concerns arise from developers translating a design file into code and forgetting the extra bits. Missing alternative text, form labels, and buttons or links with no visible text (icon buttons etc.) all likely match the design files, but fail to meet the needs of the document.
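Two of the usual suspects, sketched out in markup of my own (the file path and field names are purely illustrative): both are invisible in a typical design file but essential in the document.

```html
<!-- Alternative text: nowhere to be seen in the mock-up, required in the markup. -->
<img src="/images/team-photo.jpg" alt="The four founders outside the first office">

<!-- A form field that looks 'obvious' in the design still needs a programmatic label. -->
<label for="email">Email address</label>
<input type="email" id="email" name="email">
```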
On making icon buttons and close buttons:
Make sure you are using a <button> element, add ARIA attributes (such as aria-controls, aria-expanded, aria-hidden on the icon, etc.) as needed.
Watch out for using strange glyphs as labels (e.g. the multiplication icon as an "x" for a close button). If you do use them, hide them from assistive tech so that they aren't announced (e.g. "Close multiplication icon" doesn't sound great 😂).
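Something like this is what I took away from that section (my own sketch, not the talk's exact code; the panel id and class names are assumptions):

```html
<!-- A close button for a collapsible panel: a real <button>, state exposed
     via ARIA, and the decorative glyph hidden from assistive tech. -->
<button type="button" aria-controls="filters-panel" aria-expanded="true">
  <span aria-hidden="true">×</span>
  <span class="sr-only">Close filters</span>
</button>
<!-- The element with id="filters-panel" lives elsewhere on the page. -->
```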
On how heading levels should be used to create a contents table for your website:
On the web, we can use headings to create the same structure [as a contents page in a book]. If you create a nice heading structure on the website, sighted users can scan the page and see the structure and get an idea of what it's about. And especially screenreader users, they get an additional way of navigation. They can use shortcuts and jump from heading to heading. This is really useful!
Try to ensure that every logical "section" of the page has a heading. You can do this by:
Having visible headings (the best option);
Using heading elements that are visually hidden (e.g. an .sr-only class);
Or using the <section> element and giving it an ARIA label.
That last one is a bit weird. It won't surface the section within a heading index, so if users are trying to navigate that way it's not great. But it will convert the otherwise semantically meaningless <section> element into a landmark within the page, so anyone navigating that way can find it. It's probably the worst option for heading order, but a great tip for creating additional, easy-access sections (the example given here was Smashing Magazine's summary section at the top of all their articles).
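Roughly what those three options look like in markup (my own sketch; the .sr-only class is just the usual visually-hidden utility):

```html
<!-- 1. A visible heading: the best option. -->
<section>
  <h2>Latest articles</h2>
  <p>…</p>
</section>

<!-- 2. A visually hidden heading: still shows up in the heading index. -->
<section>
  <h2 class="sr-only">Related links</h2>
  <p>…</p>
</section>

<!-- 3. A labelled <section>: no heading entry, but it becomes a navigable
     landmark (the Smashing Magazine summary example). -->
<section aria-label="Quick summary">
  <p>…</p>
</section>
```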
On the HTML lang attribute:
I hadn't realised that this actively impacts the rendering of text nodes (things like quotation marks, hyphenation, even formatting will change based on the language), nor did I know Chrome will use it to determine translation settings.
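A quick way to see that rendering effect for yourself (my own example): the <q> element picks its quotation marks based on the declared language.

```html
<!-- With lang="en" the quote renders as “hello”; change the attribute to
     lang="de" and the very same markup renders as „hello“. -->
<html lang="en">
  <body>
    <p>She said <q>hello</q> and left.</p>
  </body>
</html>
```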
Harry has created an absolutely phenomenal talk here that provides an immense amount of depth whilst still being completely accessible to someone like me who largely doesn't deal with the technical side of web performance.
If you're trying to wrap your head around LCP, this is the talk you need to watch. There are also slides available here and what appears to be a blog post of additional content on Harry's site here, both of which supplement the video well. My own notes are below.
What is LCP?
It's the time it takes for the largest piece of content on a page to finish rendering on screen. Though not all elements are counted. Those that are include text blocks, headings, and images (including background images). Videos are partially included (more details below). Critically, this means that the LCP element may not be the focal point, the most useful, or even the most relevant piece of content on your page; context is irrelevant, all that matters is size. (This feels very gameable. I wonder whether hiding a large white background image or something similar would trick it into always using the wrong element.)
It's a proxy for how fast was the site, visually.
What does "good" look like?
Google's LCP measurements regard 2.5s as "good". This is apparently ambitious (only around 10% of sites achieve this). It's also a moveable threshold; Google's own definition is clear that the threshold can change and Harry reckons it is likely to do so.
Why bother about it at all?
Google uses it to rank pages, but SEO is far from the only valid reason.
I was working with a client two months ago, and we worked out that a 500ms improvement in LCP would be worth an extra €11.5 million a year to them. This is staggering!
The better your overall site speed, the better the customer experience, and the more likely you will get meaningful business engagement from the customer as a result.
What elements are used?
<img>;
<image> within <svg>;
block-level elements that contain text nodes (e.g. paragraphs and headings);
<video> elements that have a poster image (only the loading of the poster image is actually counted);
And any element with a background image loaded via url() (as opposed to background gradients, for instance).
How to improve LCP?
You can't really optimise LCP in its own right; it's a culmination of several other preceding steps.
Your best bet is to optimise the steps that happen first. Optimise these steps and LCP will naturally improve.
The biggest red flag is a bad Time To First Byte. If your first byte takes several seconds, then you cannot have a good LCP.
The time delta between TTFB and LCP is normally your critical path... Very crudely, but it's a good proxy.
Note: If there's a gap between First Paint and First Contentful Paint then that's likely a web fonts issue, and this can also really hurt LCP.
The biggest thing is to focus on which element is used.
Which is faster? Well, testing shows that text nodes are inherently the fastest option. SVGs appear to be faster than images or videos, which are about the same, and background images are the worst. But the SVG result is a bug in Chrome's reporting; SVGs should actually sit behind standard images and videos. So:
text > image > video > svg > background image
Also, all of the above assumes raw HTML.
We love HTML! HTML is super fast. Giving this to a browser is a good idea.
Why are some elements faster?
The big issue is that <image> elements in <svg> are hidden from the browser's preload scanner, which means they cannot be requested until most scripts have been run (Harry provides a whole technical explanation of how browser engines work, but this is the tl;dr version).
Video is the inverse; the poster attribute is available to the preload scanner and therefore downloads in parallel with other resources, allowing the LCP to be much faster.
Note: there are rumours that <video> element handling may change in the near future for LCP calculations. Currently, a video without a poster attribute is just ignored, so you can have a fullscreen video playing and LCP will likely pick some floated text or a navigation element instead (fast). If the suggested changes occur, this will switch to be the first frame of the video, which will be slow. Hopefully, they keep the poster attribute caveat alongside any changes, at which point using poster will be massively beneficial for LCP (you will always download an image quicker than a video).
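To make the poster point concrete, a quick sketch of my own (the file paths are invented): the poster URL sits right there in the HTML, so the preload scanner can fetch it early, and it is the poster image, not the video, that LCP currently measures.

```html
<!-- The poster attribute is visible to the preload scanner, so it downloads
     in parallel with other resources; LCP measures the poster paint. -->
<video poster="/images/hero-poster.jpg" src="/videos/hero.mp4"
       autoplay muted playsinline loop></video>
```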
Background image will always be slow. Avoid them as much as possible!
External stylesheets, CSS-in-JS, even inline styles are all partially blocked. All CSS is hidden from the preload scanner, so any images will be downloaded towards the end of the waterfall.
This is a necessary part of CSS. You can have thousands of background images in a stylesheet, but the browser only wants to download the ones it needs, so it waits until the DOM is fully calculated to parse those styles (even if they are inline).
Overall, this massively helps with web performance, but it hurts LCP significantly, because it shunts your final paint to the very end of the rendering process.
Avoid background images above the fold; try to use simple <img> elements and text nodes instead. (I'm interested that <picture> isn't mentioned.)
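A small before/after sketch of that advice (my own markup; class names and paths are illustrative only):

```html
<!-- Slow: the hero image is only discovered once the CSS has been parsed
     and the browser knows the element actually needs it. -->
<style>
  .hero { min-height: 60vh; background-image: url("/images/hero.jpg"); }
</style>
<div class="hero"></div>

<!-- Faster: the same image as a plain <img> is picked up by the preload
     scanner straight from the HTML. -->
<img class="hero-image" src="/images/hero.jpg" alt=""
     width="1600" height="900">
```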
Common mistakes to avoid
DOM and browser APIs
Never lazy load anything above the fold! It slows the page load down significantly. And if you do need to do this (though you really don't), please don't use JavaScript-based lazy loading 🤦‍♂️
In particular, lazy loading hides the image from the preload scanner. Again, this is super beneficial, so long as you aren't using lazy loading when you shouldn't be. It means that even a natively lazy-loaded image will still be much slower, because the browser effectively "ignores" that part of the HTML during its initial scan, so none of the resources are preloaded.
Similarly, avoid overusing preload on linked resources. Try to avoid using it for anything that is already directly discoverable in the HTML (e.g. a src attribute), but you can (and should) use it for things you want to pull in earlier than they otherwise would be, such as background images that you know you will need for LCP.
The more you preload, though, the more bandwidth is split between multiple other resources, so the slower this benefit will get. There be dragons with preload in general too, so read up on it first (and rely on it last). And don't overuse it:
If you make everything important, then you have made nothing important.
Same goes for fetch priority levels/hints. All images are requested with low priority, but once the browser knows an image is rendered in the viewport, it upgrades that request to high priority. Adding fetchpriority="high" to certain images can therefore help to tell the browser "hey, this will always be in the viewport on load" and skip that internal logic.
More recently, the decoding attribute can be used to make images decode synchronously, which is faster because it further prioritises things in the rendering order (needs testing per website though).
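Pulling those attribute-level tips together into one hedged sketch (file paths and sizes are mine; as noted above, test the decoding behaviour per site):

```html
<!-- In the <head>: pull forward a background image we know will be the LCP
     element (use preload sparingly). -->
<link rel="preload" as="image" href="/images/masthead.jpg">

<!-- Above-the-fold LCP candidate: never lazy loaded, explicitly marked as
     high priority, decoded synchronously. -->
<img src="/images/hero.jpg" alt="Product hero"
     fetchpriority="high" decoding="sync" width="1600" height="900">

<!-- Below-the-fold images are the ones that should be lazy loaded. -->
<img src="/images/footer-banner.jpg" alt="" loading="lazy"
     width="1200" height="400">
```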
Content woes
Avoid using JavaScript wherever possible, but in particular for rendering your LCP. If the LCP element is only being built and rendered on the client, it will be blocked by the execution time of that script and then all of the normal DOM rendering.
Make sure your LCP candidate is right there, in HTML, ready for discovery. HTML is fast!
Don't host your LCP asset on a third-party site. Even hyper-optimised CDNs will massively hurt (even with significant compression enabled), purely because the round trip to another origin will always be slower than self-hosting.
As an example, Harry moved his homepage image (LCP element) onto Cloudinary, and saw a 2.6x increase in LCP, even though the file size was halved.
Always self host your static assets. Never host them on someone else's origin. There is never any performance benefit to doing that.
Be very careful about dynamic content, and late-loaded JavaScript elements such as cookie banners.
Dynamic content can fundamentally change the LCP element. If you have a large title on the page, and then change this to have fewer characters or words, your LCP might jump to a different, less optimal element, such as an image.
Or even worse, if that new element is now a cookie banner or something later in the waterfall, you can hammer your LCP.
I've been saying for a couple of years that we are on the brink of a "fluid design" revolution in front-end development, similar to what happened around the late 2000s with "responsive design". Container queries are a key part of that puzzle, and Robin's example here of using a combination of fluid type (via clamp()) and container queries is precisely the kind of pattern I've been thinking about. Plus, I love seeing a solid example for the new cqw unit. (Though a standard accessibility disclaimer applies around fluid type and how it interacts with page and text zoom.)
It's so cool to see these patterns slowly becoming a possibility 🙌
On why container sizes (and viewport sizes) can have an outsized impact on typography:
This is because in typography the font-size, the line-height, and the measure (the width of the text) are all linked together. If you change one of these variables, you likely have to change the others in response.
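I couldn't resist sketching the pattern out for myself (this is my own rough take on the idea, not Robin's exact code; the class names and clamp() values are arbitrary):

```html
<style>
  /* The card's own width, not the viewport, drives the type size. */
  .card-wrapper { container-type: inline-size; }

  .card h2 {
    /* Fluid type: scales with the container via cqw, clamped so it never
       drops below 1.25rem or grows beyond 2.5rem. */
    font-size: clamp(1.25rem, 4cqw + 0.5rem, 2.5rem);
  }
</style>

<div class="card-wrapper">
  <article class="card">
    <h2>Fluid type, sized by its container</h2>
    <p>Resize the wrapper and the heading follows.</p>
  </article>
</div>
```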