Jamstack for eCommerce at Scale

Despite all its advantages, applying Jamstack to eCommerce websites with large catalogs and frequent updates involves many challenges. If you’re running an eCommerce site on a backend platform such as Salesforce Commerce Cloud, Magento, or SAP Hybris, you’re probably already facing some of them.

This article covers the key challenges in building large-scale eCommerce Jamstack sites and how Layer0 (now Edgio) can help you tackle these problems.

For the full version of Layer0 CTO Ishan Anand’s presentation at Jamstack Conference 2020, go to the official Layer0 YouTube channel.

What is Layer0 (now Edgio)?

Layer0 brings the advantages of Jamstack to eCommerce, accelerating site speeds and simplifying development workflows. By streaming cached data from the edge into the browser before it is requested, Edgio can keep websites 5 seconds ahead of shoppers’ taps. Sharper Image, REVOLVE, and Shoe Carnival are just a few examples of sites leveraging the Layer0 Jamstack platform to increase developer productivity and deliver sub-second websites.

What are the challenges of using Jamstack for eCommerce at scale?

Using Jamstack and headless for eCommerce, especially on sites with large catalogs, frequent updates, or a monolithic eCommerce platform, typically involves the following challenges:

  • Long build times
  • Frequent updates
  • Tricky site migrations
  • Dynamic data
  • Personalization
  • A/B testing
  • Incomplete APIs
  • Data Pipeline Architecture
  • Customizations lost by APIs
  • Database connection limits
  • Team capability
  • CMS integration
  • Styles embedded in CMS content
  • Backoffice workflow integration

Build time friction and other challenges at scale

Jamstack has high-traffic scalability built in. But the build step introduces a new scaling dimension, as typical static rendering happens during the build. As you expand your website or perform more frequent changes, you exit the sweet spot where Jamstack is fast and agile. The result is build-time friction. It is easy to sweep the problem under the rug if you’re working on a small site, but that is not the case for the typical eCommerce site.

Another important thing to remember is that sites are built as much by non-developers as by developers. Because content, marketing, and merchandising teams constantly change things, build time friction can quickly become a problem for the entire organization.

All this is to say that “at scale” happens more than you would think, and it’s not limited to eCommerce. Take a look at this comparison between retailers and news websites. For eCommerce sites, the number of SKUs is a proxy for the number of pages.

[Comparison: eCommerce sites with many products (SKUs) vs. publishers with many articles]

While you might think that only sites like Amazon deal with millions of SKUs, this is not true. Car parts websites are a great example—they host millions of products based on the year/model/make/vehicle search criteria (YMMV). For example, TruPar.com sells forklift parts exclusively, with 8M SKUs.

Thankfully, a few static and dynamic rendering techniques help deal with the problems of Jamstack at scale.

“Static” techniques

  • Optimizing build times
  • Client-side rendering
  • Incremental static (re)generation

“Dynamic” techniques

  • Serverless server-side rendering + CDN
  • Parallel static rendering

“Mixed” rendering techniques

  • Choosing the best rendering technique for each class of pages
  • Choosing a framework and platform that lets you mix techniques as needed

In the following paragraphs, we will discuss what these techniques mean.

Static techniques
Optimizing build times

There are several methods to optimize build times for dynamic JavaScript pages.

Incremental builds

With incremental builds, you can save build artifacts and only regenerate what’s changed. If only a single page is changed, you will regenerate that single page.
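
As a framework-agnostic illustration (file names and the renderPage helper are hypothetical), an incremental build can hash each page’s source data and rebuild only the pages whose hash changed since the last run:

    // incremental-build.js: hypothetical sketch of hash-based incremental builds
    const crypto = require('crypto')
    const fs = require('fs')

    const manifestPath = '.build-manifest.json'
    const previous = fs.existsSync(manifestPath)
      ? JSON.parse(fs.readFileSync(manifestPath, 'utf8'))
      : {}

    async function incrementalBuild(pages, renderPage) {
      const next = {}
      for (const page of pages) {
        const hash = crypto.createHash('sha256').update(JSON.stringify(page.data)).digest('hex')
        next[page.path] = hash
        if (previous[page.path] === hash) continue // unchanged: keep the existing artifact
        await renderPage(page)                     // new or changed: rebuild just this page
      }
      fs.writeFileSync(manifestPath, JSON.stringify(next, null, 2))
    }

    module.exports = { incrementalBuild }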

Parallel builds

The framework splits the build across multiple processes or threads. This is helpful for image processing.
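
As a rough sketch of the idea (the render-worker.js file is hypothetical), the page list can be split across Node worker threads, one slice per CPU core:

    // parallel-build.js: hypothetical sketch of splitting rendering across worker threads
    const { Worker } = require('worker_threads')
    const os = require('os')

    function buildInParallel(pagePaths, workerScript = './render-worker.js') {
      const workerCount = os.cpus().length
      const chunkSize = Math.ceil(pagePaths.length / workerCount)
      const jobs = []
      for (let i = 0; i < workerCount; i++) {
        const chunk = pagePaths.slice(i * chunkSize, (i + 1) * chunkSize)
        if (chunk.length === 0) continue
        jobs.push(new Promise((resolve, reject) => {
          // each worker receives its slice via workerData and renders those pages
          const worker = new Worker(workerScript, { workerData: { pages: chunk } })
          worker.on('exit', resolve)
          worker.on('error', reject)
        }))
      }
      return Promise.all(jobs)
    }

    module.exports = { buildInParallel }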

Alternate static site generators

Static site generators written in native, compiled languages are an emerging option and report much better build times. Examples include Hugo (Go) and Nift (C++). However, many natively written static site generators don’t work well with JavaScript-heavy websites. The relatively new Toast is trying to tackle that.

The caveat is that framework and cloud provider support for parallel and incremental builds varies. Not all of them support these techniques, and those that do may offer only limited support.

Potential excess cost for pages with infrequent visits

There is also the issue of potential excess cost. If you have a large site with tens of thousands of SKUs or more, most of your traffic follows a power-law distribution, and you spend extra compute time rebuilding pages that will never be visited. The more you update the site, the larger that cost grows. Keep that in mind when thinking about some of these techniques.

According to willit.build (a Gatsby build benchmark that tracks historical build times of sites built on Gatsby Cloud), build times for Contentful and WordPress sites are about 200 ms per page, which means a full build of a 10k-page site takes roughly half an hour. Incremental builds can get that down to a few minutes, which shows their power, as long as you can avoid full rebuilds.

Client-side rendering

Also known as the app shell or SPA fallback model, client-side rendering relies on CDN routing. If your site hosts a million products, the CDN routes all of those URLs to a single static index.html file containing the app shell. When the browser loads that page, the client-side router fetches and renders the page content in the browser.
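
For illustration, once the browser has loaded index.html, the client-side router might fetch and render a page roughly like this (the /api/products route and the app element are hypothetical):

    // app-shell.js: hypothetical sketch of client-side rendering a product page
    async function renderRoute() {
      const match = window.location.pathname.match(/^\/p\/(.+)$/)
      if (!match) return
      // fetch the page's data as JSON, then render it into the shell
      const res = await fetch(`/api/products/${match[1]}`)
      const product = await res.json()
      document.getElementById('app').innerHTML =
        `<h1>${product.name}</h1><p>${product.price}</p>`
    }

    window.addEventListener('popstate', renderRoute)
    renderRoute()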

With client-side rendering, you can effectively host an infinite number of pages, but there are some important considerations:

CSR may negatively impact SEO

The caveat with client-side rendering is that it might hurt performance, because the page can’t render until the JavaScript loads. Starting May 2021, Google will rank websites based on three speed metrics (CLS, LCP, and FID), collectively called Core Web Vitals. Client-side rendering can negatively impact all of these, especially Cumulative Layout Shift. It’s not impossible, just hard, to get good CLS with the app shell model; to do so, you must create custom versions of the app shell for each page type.

Client-side rendered pages can’t be read by (some) bots

Some bots cannot read client-side rendered content. Google claims its bots can render and interpret JavaScript, but most other bots cannot, including those of most social platforms, which are a significant traffic source for many sites.

CSR requires support for rewrite and redirect rules

The third caveat in implementing CSR is that it requires your CDN provider’s support for rewrite and redirect rules, and some do it more elegantly than others. For example, you have to shoehorn this on AWS CloudFront through their 404-page support or use Lambda@Edge handlers.

Thankfully, the leading Jamstack platforms, Netlify, Vercel, and Layer0, offer a fairly easy way to enable CSR.

In Netlify, you have a _redirects file. With the 200 status modifier, a redirect becomes a rewrite: a hidden redirect that the user never sees.

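A minimal _redirects sketch for the SPA fallback (assuming the app shell is published as index.html):

    # Rewrite every path to the app shell; the 200 status makes it a rewrite, not a redirect
    /*    /index.html    200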

Vercel offers rewrites support in vercel.json; it also integrates very tightly with Next.js.

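An equivalent rewrite in vercel.json might look like this (again assuming an index.html app shell):

    {
      "rewrites": [
        { "source": "/(.*)", "destination": "/index.html" }
      ]
    }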

Layer0’s CDN-as-JavaScript supports Next.js rewrites and works with other frameworks as well.

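A rough sketch of the same fallback in Layer0’s CDN-as-JavaScript, based on the Layer0 router API as documented at the time (module path and handler names may differ between versions):

    // routes.js: illustrative Layer0 route serving the app shell for any unmatched path
    const { Router } = require('@layer0/core/router')

    module.exports = new Router()
      // any request not matched by an earlier route gets the client-side rendered shell
      .fallback(({ serveStatic }) => serveStatic('dist/index.html'))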

Incremental static generation

This technique was pioneered by Next.js and involves generating new static pages on demand in response to incoming traffic. When the browser requests a page that has not yet been built, the CDN quickly returns a universal fallback page that contains only placeholder content, regardless of which page was requested.

While the fallback page is displayed, the page’s static build process runs in the background. When that build completes, the fallback page loads the static JSON data and displays the final page. From then on, future visits will get the statically built HTML.
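
In Next.js, which pioneered the technique, this behavior is switched on by returning fallback: true from getStaticPaths; a minimal sketch (fetchProduct is a hypothetical data helper):

    // pages/p/[id].js: incremental static generation sketch
    import { useRouter } from 'next/router'

    export async function getStaticPaths() {
      // pre-build nothing (or only top sellers); all other pages are generated on demand
      return { paths: [], fallback: true }
    }

    export async function getStaticProps({ params }) {
      const product = await fetchProduct(params.id) // hypothetical data fetch
      return { props: { product } }
    }

    export default function ProductPage({ product }) {
      const router = useRouter()
      // while the page is generated in the background, Next.js shows this fallback state
      if (router.isFallback) return <p>Loading…</p>
      return <h1>{product.name}</h1>
    }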

Incremental static regeneration

There is a variant of incremental static generation called incremental static regeneration, which is essentially the same process, except that it updates an existing static page in response to traffic. If the underlying data changes, the build process is re-run for that page, inspired by stale-while-revalidate, a widely used but under-appreciated caching directive. While the page is being rebuilt, a stale version is served instead of the fallback, then swapped for the new version once the build process finishes.

Incremental static regeneration:

  • Updates existing static pages in response to traffic,
  • Serves a stale version of the page instead of a fallback.

Incremental static generation has a minor impact on SEO and compatibility, especially on a page’s first visit: the fallback page is entirely client-side rendered and contains no data, so it’s unclear how bots will respond to it.
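
In Next.js, regeneration is the same getStaticProps flow with a revalidate interval added; a minimal sketch (again with a hypothetical fetchProduct helper):

    // pages/p/[id].js: incremental static regeneration sketch
    export async function getStaticProps({ params }) {
      const product = await fetchProduct(params.id) // hypothetical data fetch
      return {
        props: { product },
        // serve the cached (possibly stale) page and rebuild it in the background,
        // at most once every 60 seconds, in the spirit of stale-while-revalidate
        revalidate: 60,
      }
    }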

Dynamic techniques

In addition to static techniques, eCommerce websites can also benefit from dynamic techniques like:

  • Serverless server-side rendering + CDN
  • Parallel static rendering

Serverless server-side rendering + CDN

Using SSR in conjunction with a CDN allows you to generate pages on demand in response to traffic. This supports an effectively unlimited number of pages, since each one is generated only when needed, and it is far more compatible with how traditional eCommerce platforms work.

However, this technique is also a little controversial. The Jamstack community tends to be very dogmatic about what Jamstack is and asserts that Jamstack requires static generation.

Serverless server-side rendering is effectively Jamstack-ish when two conditions are met:

  1. Zero DevOps and no servers to manage. It’s serverless, so developers don’t have to manage scaling. It’s the same serverless infrastructure many Jamstack platforms already use to power their APIs, applied here to render HTML through SSR.
  2. HTML is served from the CDN. This is a critical condition. After the first cache miss, the CDN-served site is as fast as a statically generated Jamstack site. Please note that this requires proper cache management (see the sketch below) and is harder for multi-page sites.
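
A generic way to satisfy the second condition is to emit CDN cache headers from the SSR code itself, so the edge can cache the rendered HTML and serve stale copies while revalidating; for example, in a Next.js server-rendered page (header values are illustrative, fetchProduct is hypothetical):

    // pages/p/[id].js: caching server-rendered HTML at the CDN
    export async function getServerSideProps({ params, res }) {
      // let shared caches keep the HTML for 5 minutes and serve stale copies for a day
      res.setHeader(
        'Cache-Control',
        'public, s-maxage=300, stale-while-revalidate=86400'
      )
      const product = await fetchProduct(params.id) // hypothetical data fetch
      return { props: { product } }
    }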

Parallel static rendering / SSR preloading

Layer0 allows you to specify the set of URLs that should be pre-rendered and cached at the edge during deployment to ensure that users get a sub-second experience when accessing your site.

Static pre-rendering involves sending requests to your application code and caching the result right after your site is deployed. In this way, you simply build your app to implement server-side rendering and get the speed benefits of a static site for some or all of your pages. This feature is especially useful for large, complex sites with too many URLs to prerender without incurring exceptionally long build times.

SSR preloading is another technique Layer0 uses to accelerate page speeds. It is very similar to the regular SSR pipeline but is based on analyzing traffic logs after deployment: the high-traffic pages are pre-loaded in parallel with the deploy. The deploy happens instantaneously, and the high-traffic pages are built asynchronously, decoupling the deploy from the build. You get immediate deploys while also maximizing cache hits.

Essentially, a request for a high-traffic page will most likely be a cache hit, which is the best you can do for cache hit rates in this environment.

Parallel static rendering allows you to:

  • Analyze logs for high-traffic pages
  • Fetch and store HTML for high-traffic pages asynchronously after deploy
  • Immediately deploy while maximizing cache hits
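
Platform specifics aside, the core idea can be sketched as a hypothetical post-deploy script that warms the cache for the most-visited URLs (the top-pages list would come from your traffic logs):

    // warm-cache.js: hypothetical post-deploy cache warming (Node 18+ for global fetch)
    const topPages = require('./top-pages.json') // list of paths derived from traffic logs

    async function warmCache(baseUrl) {
      // request each high-traffic URL so the edge has cached HTML before real users arrive
      await Promise.all(
        topPages.map((path) =>
          fetch(baseUrl + path).catch((err) => console.warn('warm failed:', path, err.message))
        )
      )
    }

    warmCache('https://www.example.com')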


Mixed rendering techniques

You don’t have to choose between static and dynamic rendering techniques; you can choose what’s right for each class of pages on your site. You might declare the “About us,” “Return Policy,” and blog pages static, and pages like cart, product, and category dynamic. We recommend choosing a platform provider that lets you flexibly mix the techniques as needed, especially if you’re doing this at scale.

  • Choose the best rendering technique for each class of pages, e.g., declare some pages static (blog, about us, etc.) and others dynamic (cart, products, categories, etc.)
  • Choose a framework and platform provider that lets you flexibly mix techniques as needed
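
In Next.js terms, mixing techniques is simply a per-page choice of data-fetching method; for example (with hypothetical fetch helpers):

    // pages/about.js: a purely static page, rendered once at build time
    export async function getStaticProps() {
      return { props: { content: await fetchCmsPage('about-us') } } // hypothetical CMS fetch
    }

    // pages/p/[id].js: a dynamic product page, rendered on demand (and cacheable at the edge)
    export async function getServerSideProps({ params }) {
      return { props: { product: await fetchProduct(params.id) } } // hypothetical data fetch
    }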

Jamstack at scale with Layer0

Today’s CDNs cache images, JavaScript, and CSS, but not JSON or HTML, and that’s what’s holding up your page load times. Layer0 CDN-as-JavaScript makes it maintainable to cache that data at the edge, even in a dynamic, serverless SSR environment.

Jamstack takes the server out of the equation and effectively lets the CDN manage the traffic, which it can do with ease regardless of traffic fluctuations. Layer0 does the same, but differently: instead of rendering at build time, we render on request and cache the result at the edge, so after the first request no further rendering is required.

Rendering each page at build time is fine for smaller sites, but build times become almost unbearable as you grow. Add the lack of customization and personalization (or the workarounds needed to deliver them), and build-time rendering becomes a poor fit for large-scale, database-driven websites like eCommerce and travel.

CDN-as-JavaScript

Layer0 CDN-as-JavaScript gives you powerful edge control over cache keys, headers, cookies, and more, and it also works with your code. It understands your code and your framework’s routing and can be emulated locally or in pre-production environments.

Edge rules live in your code, just like in classic Jamstack, giving you complete control over the edge with live logs, versioning, and 1-click rollbacks.

See the Layer0/Edgio Cookbook for some detailed examples of routing patterns on CDN-as-JavaScript.

Performance Monitor

To maximize cache hit rates, it’s important to know what these rates are in the first place, but this information is usually buried deep in your CDN’s access logs.

Layer0 has built-in performance monitoring, making it easier to understand when page cache hits and misses happen and exposing this information to the developer in a very friendly way. The Performance Monitor in Layer0 allows you to:

  • Understand traffic based on routes, not URLs, because that’s how developers think about their app. It also tracks each deploy, so developers can pinpoint any regression.
  • Measure performance issues across the stack and loading scenarios (API, SSR, Edge, etc.)

Layer0 has also created a tool to diagnose whether a response comes from the edge or the origin: DevTools. The example below shows how it works on top of an app shell built with React Storefront; the response in this example is coming through the Layer0 (now Edgio) edge network.

Layer0 DevTools allow you to diagnose whether responses come from the edge or origin

Understanding if a response comes from the edge or origin is critical for prefetching at scale, which is another thing Layer0 does for you.

Prefetching at scale

Prefetching is important for performance because it unlocks instant page speeds. Traditional page speed tests, like those you run with Lighthouse, focus on what happens after the customer clicks. But a lot can be done before the customer taps, where you effectively have zero latency and almost infinite bandwidth.


Websites on Layer0 are blazingly fast because they use advanced predictive prefetching along with Layer0 CDN-as-JavaScript, which allows them to stay 5 seconds ahead of shoppers’ taps. Cached dynamic data is streamed from the edge into users’ browsers before they click anything, based on what they are expected to tap next. In other words, your store can serve the JSON data for products, prices, and related information in a fraction of the time.
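
Layer0 ships its own prefetching library, but the concept can be illustrated with a few lines of plain browser code that prefetch the JSON for links as they approach the viewport (the data-prefetch attribute and API route are hypothetical):

    // prefetch-links.js: conceptual sketch that fetches product JSON before the user taps
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue
        const apiUrl = entry.target.dataset.prefetch // e.g. /api/products/123 (hypothetical)
        if (apiUrl) fetch(apiUrl).catch(() => {})    // warm the browser/service-worker cache
        observer.unobserve(entry.target)
      }
    })

    document.querySelectorAll('a[data-prefetch]').forEach((link) => observer.observe(link))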

Incremental migration

Layer0 offers incremental (gradual, progressive) migration, which lets you migrate one section of the app at a time, following Martin Fowler’s strangler pattern. This way, you incrementally “strangle” specific functionalities and replace them with new applications and services. It’s like moving a mountain stone by stone.

Incremental migration requires routing control at the CDN edge or origin. Here’s an example of how you can do this on Layer0 using CDN-as-JavaScript.
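
As an illustrative sketch (backend names such as 'new-frontend' and 'legacy' are assumed to be defined in layer0.config.js, and exact router APIs may differ between versions):

    // routes.js: illustrative strangler-pattern routing at the edge
    const { Router } = require('@layer0/core/router')

    module.exports = new Router()
      // sections that have already been migrated are served by the new headless frontend
      .match('/category/:slug*', ({ proxy }) => proxy('new-frontend'))
      // everything else still goes to the legacy eCommerce platform
      .fallback(({ proxy }) => proxy('legacy'))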

Personalization and segmentation

Personalization and segmentation are important for large sites. And it’s not limited to personalization: it also covers language, geography, and more. That makes sense, because large sites usually operate across geographies and must be able to tailor content to users as they visit the site.

The general guideline is: if personalized content is below the fold, we recommend late-loading and client-side rendering it. If it’s above the fold, you want it in the server-rendered output.

Above the fold personalized = add personalization to the cache key

On Layer0, you can declare a custom cache key and personalize on it, for example based on currency or behavior. With just a few lines of CDN-as-JavaScript, you can customize the promotions and sort order on category pages based on whether somebody is a frequent or a new visitor, as sketched below.
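
For illustration, based on the Layer0 router API as documented at the time (exact names and options may differ), adding a currency cookie to the cache key could look roughly like this:

    // routes.js: illustrative personalization of the edge cache key by currency cookie
    const { Router, CustomCacheKey } = require('@layer0/core/router')

    module.exports = new Router().get('/category/:slug', ({ cache, proxy }) => {
      cache({
        // one cached variant per currency, so EUR and USD visitors never share HTML
        key: new CustomCacheKey().addCookie('currency'),
        edge: { maxAgeSeconds: 60 * 60, staleWhileRevalidateSeconds: 60 * 60 * 24 },
      })
      proxy('origin') // 'origin' is an assumed backend name from layer0.config.js
    })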

A/B testing and Layer0

A/B testing and personalization add a new layer of complexity to building Jamstack sites. Testing is very important for large sites and big organizations, where decisions are ROI driven and must be proven to improve conversion rates.

In traditional Jamstack, however, the only option you have is client-side A/B testing that runs in the browser. The issue is that this can impact performance and nullify your testing in two ways. It can hurt your variants’ performance, erasing any kind of improvement. And sometimes, A/B tests take effect after the eye has passed the tested elements. You may have the A/B test in the header, and the user has already scanned past that header once the JavaScript runs and changes that element.

The problems with client-side A/B testing

  • Usually, the only option for static sites
  • It doesn’t run until JavaScript runs
  • Poor performance that possibly nullifies the test

Layer0 Edge Experiments remedy these problems by enabling A/B testing at the edge. On Layer0, new experiences are always native, cached, and sub-second. This extends beyond A/B tests to any variant of your website.

Edge Experiments

Layer0 also comes with a powerful Edge Experiments engine built in. The module is part of CDN-as-JavaScript and is aware of all your variants, ensuring each is cached separately at the edge. This gives you control over exactly which visitors see which variant.

Edge Experiments allow you to:

  • Route live traffic to any deployed branch at the edge of the network
  • Run A/B tests, canary deploys, or feature flags
  • Write routing rules based on probabilities, header values, and even IP addresses

With Edge Experiments, you can easily split tests without affecting your site’s performance. Splits are executed at the edge through an easy-to-use yet powerful interface. Edge Experiments can be used for A/B and multivariate tests, canary deploys, blue-green tests, iterative migration off of a legacy website, personalization, and more.
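
The underlying idea of an edge split, sticky bucketing plus routing by probability, can be sketched conceptually (helper names and request/response shapes here are hypothetical, not the Layer0 API):

    // edge-split.js: conceptual sketch of a sticky, probability-based A/B split at the edge
    function splitTraffic(request, routeToVariant) {
      const cookies = request.headers.cookie || ''
      let bucket = (cookies.match(/ab_bucket=(\w+)/) || [])[1]

      if (!bucket) {
        // assign new visitors to a variant once; the cookie keeps the assignment sticky
        bucket = Math.random() < 0.5 ? 'control' : 'experiment'
      }

      const response = routeToVariant(bucket) // e.g. proxy to the deployment for that bucket
      response.headers['set-cookie'] = `ab_bucket=${bucket}; Path=/`
      return response
    }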

How our clients benefit from Layer0

Layer0 provides a frictionless transition to Jamstack and headless and offers a huge advantage for sites with large catalogs, frequent updates, or those running legacy eCommerce platforms. Shoe Carnival and Turnkey Vacation Rentals are two examples of developer teams at large sites using Jamstack and headless for eCommerce on Layer0.

Turnkey

TurnKey Vacation Rentals is a full-service vacation rental property management company for premium and luxury-level rental homes in top travel destinations across the country. Unlike sites like Airbnb, TurnKey offers only pre-vetted listings. It also handles management details centrally, using a standardized set of tech tools.

Original setup

TurnKey was running an app inside of Docker on AWS Elastic Beanstalk and was looking for a solution to provide them with greater control and insight into performance.

They considered a few Jamstack solutions but wanted a platform that would support Next.js natively, like Layer0. One of the deciding factors was that with Layer0, they could avoid refactoring how their codebase and data pipeline worked.

Layer0 has helped Turnkey increase agility with some features listed below.

Environments

In the past, Turnkey used a custom pipeline built inside of Jenkins, and the team was deploying from a trunk branch, never having complete confidence in what was getting ready to go out into production.

With Layer0, the branches have individual environments, and the team at Turnkey can set up pristine environments—they don’t merge into the staging environment until they know something has passed QA. This removes the mental burden associated with QA.

Logs

Digging through server logs on Beanstalk can be a nightmare—you have to figure out exactly which logs you’re looking for, which server they’re on, if they’re load-balanced, etc. With Layer0, you can live stream logs directly from your build, which allows you to find the build you want to troubleshoot, press play, and watch the log.

Incremental migration

Turnkey had pages that were not on React/Next.js and still ran on the old architecture. With Layer0, they could take what they’d already migrated, put that on the platform, and continue migrating incrementally.

Layer0 gave the team at Turnkey tools to focus on performance.

Shoe Carnival

Shoe Carnival Inc. is an American retailer of footwear. The company currently operates an online store alongside 419 brick-and-mortar stores throughout the US Midwest, South, and Southeast regions.

Below are some of Layer0’s features that the Shoe Carnival team found especially useful.

Flexibility

Shoe Carnival uses Salesforce Commerce Cloud, which was not designed to drive a headless frontend like Shoe Carnival’s, so it took a lot of backend engineering and understanding to get the data to the frontend. Those challenges could be solved thanks to the flexibility of the Layer0 backend sitting in between Salesforce and the React frontend: the team at Shoe Carnival could build freely with React and ignore the limitations of Salesforce.

Time to production boost

Shoe Carnival’s speed to production increased dramatically. The team can work independently of Salesforce development cycles and make very quick changes in deployment.

Site speed

Speed to production is a huge benefit, but the overall site performance is hard to ignore: Shoe Carnival went from 5-6 second average page loads to sub-second. They can cache things at a very granular level and have the tools to ensure that what customers are looking for is always available and up to date.

Incremental deployment

Incremental deployment lets the team deploy to production much faster than building and deploying the complete application.

As for the impact of the migration to Layer0: when Shoe Carnival ran a 50/50 split at the CDN level, testing the origin site against the headless site for conversions, the headless site always won, outperforming the origin on speed and visibility.

Summary

At Layer0, we believe Jamstack is the future of web development. Layer0 essentially brings the performance and simplicity benefits of Jamstack to front-end developer teams at large, dynamic eCommerce sites where traditional static techniques typically don’t apply. We like to call it dynamic Jamstack. It makes SPA websites instant-loading and easier to develop.

Layer0 comes with an application-aware CDN-as-JavaScript, which can augment or even replace your current CDN and bring all the web security features you need to the edge. Layer0 also comes with a bunch of dev-focused technologies that make the entire process of developing, deploying, previewing, experimenting on, monitoring, and running your headless frontend simple, including automated full-stack preview URLs, a serverless JavaScript backend for frontend, advanced cache monitoring and more.

Layer0 is an all-in-one development platform that lets you:

  • Utilize Jamstack for eCommerce via both pre-rendering and just-in-time rendering
  • Enable zero latency networking via prefetching of data from your product catalog APIs
  • Configure edge natively in your app (CDN-as-JavaScript)
  • Run edge rules locally and in pre-prod
  • Create preview URLs from GitHub, GitLab, or Bitbucket with every new branch and push
  • Run splits at the edge for performant A/B tests, canary deploys, and personalization
  • Run serverless JavaScript that is much easier and more reliable than AWS Lambda