Your TTFB can do better!
In conversations with developers and agencies, I find it is the norm never to question the TTFB of uncached page responses. Most of the projects that land on my table have the same problems over and over: an avalanche of database queries to render category pages, with some even issuing more than fifty thousand queries to render fifty products.
The pity is how that problem usually gets "fixed": by wrapping each small piece that runs hundreds of DB queries per product in a Redis cache. This tends to replace one problem with two new ones, as Redis gets overloaded with block cache writes and reads, and the network between nodes gets bottlenecked by all the HTML output being shuttled around. I already covered this in my article about volatile data caching. It is still the most prevalent issue I see, and it costs merchants a large share of their infrastructure budget that could be spent on valuable server time, like converting orders.
Most people assume the issue is fixed because the data is cached, while the actual cache hit rate stays very low because constantly running sales invalidate those caches. As a result, even more money gets poured into server resources to pre-warm caches on every product update, burning through development and infrastructure budgets until they run dry. I have seen merchants spend up to 80% of their Magento backend infrastructure budget on cache warmers that endlessly visit their pages and inflate the AWS bill.
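To see why a low hit rate negates caching, it helps to look at the expected response time as a weighted mix of hits and misses. A small sketch with illustrative numbers (the 50 ms cached and 2600 ms uncached figures are assumptions for the example, not measurements from any particular store):

```python
def expected_ttfb(hit_rate, cached_ms, uncached_ms):
    """Average TTFB: hits serve fast, misses pay the full render cost."""
    return hit_rate * cached_ms + (1 - hit_rate) * uncached_ms

# At a 35% hit rate, the average visitor still waits over 1.7 seconds:
low = expected_ttfb(0.35, 50, 2600)   # ~1707.5 ms
# Even at a 90% hit rate, the slow uncached path dominates the tail:
high = expected_ttfb(0.90, 50, 2600)  # ~305 ms average, but P99 is still ~2.6 s
```

The takeaway: caching shifts the average, but every invalidation puts real users back on the uncached path, so the uncached TTFB itself has to be fixed.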
A better alternative
In most cases, this can be a different success story, where agencies and merchants achieve far better results and redirect that budget to better things. Let's do a small case study of a store we optimized together with my performance training attendees.
It is a typical fashion retailer store that sells primarily configurable products, based on the most recent 2.4.6-p4 build without any customizations and with the default Luma frontend, although the same issue would be visible in Hyvä or any other frontend implementation:
As is clearly visible, the render time is dominated by a cascade of Redis calls and database queries with a clear repeating pattern, a telltale sign of N+1 issues in the rendering process.
Those issues are caused mainly by three components:
Price renderer
Color swatches
Salability checks in MSI
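All three components share the same N+1 shape: each product block fetches its own data one query at a time instead of loading it once for the whole collection. A language-agnostic sketch (Python, with a hypothetical `fetch_prices` helper standing in for a database call) contrasting the anti-pattern with a batched preload:

```python
def render_naive(product_ids, fetch_prices):
    """N+1 anti-pattern: one price query per product block."""
    return [f"product {pid}: {fetch_prices([pid])[pid]}" for pid in product_ids]

def render_preloaded(product_ids, fetch_prices):
    """Preloader pattern: one batched query for the whole collection up front."""
    prices = fetch_prices(product_ids)  # single round trip
    return [f"product {pid}: {prices[pid]}" for pid in product_ids]

# Stub "database" that records every round trip so we can count queries.
queries = []
def fetch_prices(ids):
    queries.append(list(ids))
    return {pid: pid * 10 for pid in ids}

render_naive(range(50), fetch_prices)      # issues 50 separate queries
n_naive = len(queries)

queries.clear()
render_preloaded(range(50), fetch_prices)  # issues exactly 1 query
n_preloaded = len(queries)
```

In Magento terms, the preload happens once per product collection before the blocks render, so each block only reads from memory; the rendered output is unchanged.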
By writing custom product preloaders for those components and customizing the places where this data gets populated, without losing any functionality or business logic, we can achieve a result like this:
A total reduction in TTFB from 2.6 seconds to around 400 ms. As you can see, the page makes far fewer Redis and database queries and renders faster: a win for both infrastructure costs and end users, who will never hit a terrible P99 on a category page again.
This is also directly reflected in the data from load tests with 50 concurrent visitors:
The difference is staggering: the optimized version processed 42k more requests without a hitch and kept a very consistent 91 RPS, with maximum TTFB never exceeding 664 ms. That is 80% more traffic handled by the same infrastructure, while staying more responsive and giving a good end-user experience.
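The throughput numbers line up with Little's law: with a fixed pool of concurrent visitors, sustainable requests per second is roughly concurrency divided by average response time. A back-of-the-envelope sketch (the ~550 ms average response time is my assumption to match the observed 91 RPS; think time and network overhead are ignored):

```python
def max_rps(concurrency, avg_response_time_s):
    """Little's law upper bound: L = lambda * W  =>  lambda = L / W."""
    return concurrency / avg_response_time_s

before = max_rps(50, 2.6)    # ~19 RPS ceiling at a 2.6 s TTFB
after = max_rps(50, 0.55)    # ~91 RPS at an assumed ~550 ms average
```

This is why fixing uncached render time multiplies capacity on the same hardware: the same 50 visitors simply cycle through requests several times faster.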
The preloader and examples are available on my GitHub. And if you want hands-on experience writing your own custom preloaders, I highly recommend attending one of my training courses; the next one is in The Hague this March, but hurry, as only three tickets are left. More trainings are coming to other countries this year, but I do not have a definitive roster yet.
Comments

Magento Lead | Lightna Architect:
Magento 2 doesn't use its own price index on category pages - crazy! This issue will soon be resolved by Lightna using the Coin Concept. With this approach, the frontend page can be rendered with zero database queries.
CEO at Codilar | eComm Tech, Adobe Commerce (Magento), Shopify, Pimcore, Omnichannel:
Great post, Ivan. While I totally agree that the TTFB of uncached pages needs to be fixed, IMO it doesn't throw away the usefulness of cache warmers (the smart ones), because they can give amazing results with very low effort, something we experience daily with cWarmer.io clients. A lot of merchants have poor code and heavy custom modules sitting in their code base, and it is high-effort work to fix all of that. The cache hit ratio is about 35% on average in our experience (without a warmer). A smart cache warmer that warms ONLY the right pages with the right context (customer group, store view, etc.) and runs outside Magento (something like cWarmer.io) can reach 80%+ CHR without DDoS-ing your servers. And if you use something like Fastly for page caching or distributed Varnish, your TTFB will drop to 20 ms, provided the cache warmer is able to warm those nodes smartly.
Tideways Founder solving your PHP Performance issues:
Yes! If you only look at cached TTFB, then you must know what the cache hit ratio actually is for different page types. Unless it is above 90%, you must also optimize uncached TTFB.