
Faster SONiC builds for effective CI / CD processes

Thovi Keerthi Kumar
Nov 28, 2023

How to make faster builds of SONiC images to enable Agile engineering and efficient CI / CD practices

One of the puzzling things about SONiC is the enormous amount of time required to compile and build an image for a white box. About a year back, this was anywhere between 24 to 36 hours, depending upon the speed of your system and internet connection. In recent releases, this has come down to 10 to 12 hours. This is still a problem for many companies that rely on Agile processes with CI / CD setups.

The SONiC build steps are described nicely in this readme. If a user simply follows these instructions and launches a build before breakfast at 8 am, they can expect to have an image at 7 or 8 pm, a little after dinner.

At first, we thought this was the case for the initial build and that subsequent builds would be faster. A few days later, after four builds or so, it was just as slow. Start the build in the morning, get an image before bedtime! This was frustrating. 

We started observing the build steps and trying to match them with what was in the build config file. It turned out that there were some ways in which the default build steps could be changed to substantially reduce the build time.

Four options in the build configuration were major contributors to the build time.

  • Downloading multiple Debian releases
  • Running all the build steps sequentially one after the other
  • Fetching / creating the required docker images for every build
  • Use of the legacy docker builder instead of the newer buildkit

1. Downloading multiple Debian releases

The build, by default, fetches many Debian releases, since the different packages and containers in SONiC tend to be based on different Debian releases. As new features (using new Debian releases) are added to SONiC, the number of Debian releases to be pulled grows.

Even though many features have been upgraded to newer releases, the older Debian releases that a few features still require remain in the default downloads.

In our case, the Debian releases “jessie” and “stretch” were unnecessary. So, how do we avoid downloading them? There are two build options, NOSTRETCH and NOJESSIE; setting each to 1 skips the corresponding release.

As such, the new build instruction looks like:

time NOSTRETCH=1 NOJESSIE=1 make target/sonic-[ASIC_VENDOR].bin

Not downloading jessie and stretch saves around 45 minutes of build time. Bear in mind, you may want to check whether these releases are required by your target system before using this option. In our case, we were building for an Edgecore white box using Broadcom’s StrataXGS ASICs.

2. Running all the build steps sequentially one after the other

The SONiC build comprises many individual “build jobs”. The default is – start job 1, finish job 1, start job 2, finish job 2, start job 3…

We must do it this way if we have a server with a single CPU core, or if every job depends on the preceding one. However, we are running this on a server with 8 CPU cores, and the SONiC build configuration provides a simple way to execute jobs in parallel. The configuration file has a line:

SONIC_CONFIG_BUILD_JOBS = 1

with the number chosen based on how much CPU power is available. We can increase the number, for example, to:

SONIC_CONFIG_BUILD_JOBS = 7

The SONiC build scripts ensure that dependencies are handled in the right order when running parallel build jobs.

We found that even with more computing power, there was no build-time reduction with numbers greater than 7, so the best result we got was with 7 build jobs. We prefer setting this from our own external script (we didn’t want to change the config file downloaded from the community repository), so the build instruction now reads:

time NOSTRETCH=1 NOJESSIE=1 make SONIC_BUILD_JOBS=7 target/sonic-[ASIC_VENDOR].bin

Enabling parallel build jobs saved a few hours of build time.
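For CI use, the job count can also be derived from the machine instead of hard-coded. A minimal sketch, assuming a Linux build host with coreutils; the cap of 7 reflects our observation above, not a SONiC limit, and the actual build command is left commented out so the snippet is safe to run:

```shell
#!/bin/sh
# Derive a parallel job count from the machine: use all cores, capped at 7,
# since we saw no build-time improvement beyond 7 jobs in our setup.
CORES=$(nproc)
JOBS=$(( CORES < 7 ? CORES : 7 ))
echo "Building with ${JOBS} parallel jobs"
# time NOSTRETCH=1 NOJESSIE=1 make SONIC_BUILD_JOBS="${JOBS}" target/sonic-[ASIC_VENDOR].bin
```

This keeps the community config file untouched while still adapting to whatever build server the CI job lands on.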

3. Fetching / creating the required docker images for every build

Features in SONiC run in docker containers. These docker containers isolate the dependencies for the features, allowing the co-existence of software developed in different environments and at different periods of time. The default build behaves like a “make clean”: every docker image is created anew. In all likelihood, most developers work on one feature (or maybe two) at a time, so rebuilding every image wastes at least a couple of hours per build.

The SONiC build provides a way to cache previously built docker images and avoid recreating (or re-downloading) them for every build. The build option for this is:

SONIC_CONFIG_USE_NATIVE_DOCKERD_FOR_BUILD = y
This can also be specified in the build config file. Since we want to do it from the external script, the script reads:

time NOSTRETCH=1 NOJESSIE=1 make SONIC_BUILD_JOBS=7 SONIC_CONFIG_USE_NATIVE_DOCKERD_FOR_BUILD=y target/sonic-[ASIC_VENDOR].bin
4. Use of the docker buildkit

By default, SONiC uses the legacy docker builder. Docker’s improved builder, BuildKit, became the default builder as of Docker Engine version 23. It speeds up the build process by skipping unused build stages and parallelizing independent ones. You can read more about this in Docker’s buildkit overview.

Changing the line:

# SONIC_USE_DOCKER_BUILDKIT – use docker buildkit for build.

in the SONiC build config file to:

SONIC_USE_DOCKER_BUILDKIT = y

enables this new, faster builder.

And if we add this to the build script, the script now becomes:

time NOSTRETCH=1 NOJESSIE=1 make SONIC_BUILD_JOBS=7 SONIC_CONFIG_USE_NATIVE_DOCKERD_FOR_BUILD=y SONIC_USE_DOCKER_BUILDKIT=y target/sonic-[ASIC_VENDOR].bin
However, at this time, this results in a larger build image (almost twice the size), so if you are tight on storage in your switch / router, you may want to skip this step.
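Since enabling this option only pays off on Docker 23 or newer, a CI script can check the daemon version before turning it on. A small sketch; `docker_supports_buildkit` is our own hypothetical helper, and the threshold of 23 matches the version mentioned above:

```shell
# Hypothetical helper: succeeds if a Docker version string is 23 or newer,
# i.e. recent enough to use the BuildKit builder by default.
docker_supports_buildkit() {
    major=$(printf '%s' "$1" | cut -d. -f1)
    [ "${major}" -ge 23 ]
}

# Usage against a live daemon (requires docker to be installed):
# docker_supports_buildkit "$(docker version --format '{{.Server.Version}}')"
```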

The result? Significant time savings

With all of these steps, we have been able to bring the build time down significantly: from 10-12 hours for an unmodified build, to building a SONiC image in ~6 hours on a system with 6 CPU cores, and in ~4 hours on a system with 16 CPU cores.
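Putting it all together, a CI wrapper might compose the whole command in one place. This is a sketch under our assumptions: the docker-reuse variable name and the cap of 7 jobs come from our setup, and the `broadcom` default for ASIC_VENDOR is purely an example placeholder. The script only prints the command; uncomment the eval line to actually build:

```shell
#!/bin/sh
# Compose the full SONiC build command with all four speed-ups applied.
ASIC_VENDOR="${ASIC_VENDOR:-broadcom}"   # example default; set for your target
CORES=$(nproc)
JOBS=$(( CORES < 7 ? CORES : 7 ))        # cap of 7: our observed sweet spot

BUILD_CMD="time NOSTRETCH=1 NOJESSIE=1 make \
SONIC_BUILD_JOBS=${JOBS} \
SONIC_CONFIG_USE_NATIVE_DOCKERD_FOR_BUILD=y \
SONIC_USE_DOCKER_BUILDKIT=y \
target/sonic-${ASIC_VENDOR}.bin"

echo "${BUILD_CMD}"
# eval "${BUILD_CMD}"   # uncomment to run the actual build
```

Keeping all the knobs in one wrapper, rather than editing rules/config, means each CI run starts from a pristine community checkout.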

Capgemini Engineering helps clients to best use SONiC in their projects. Contact us today to see how we can help you leverage the benefits of open networking.


Thovi Keerthi Kumar

Expert II-Lead Connectivity & NW Engineer at Capgemini Engineering
With 17+ years of hands-on IT experience as a Software R&D Engineer, Integration Specialist, Technical Lead, Mentor, and Associate Architect, Keerthi is an expert in networking and telecom product development and support services. He also has years of experience in the domains of VoIP, home networking, STB middleware, PTT-based mission-critical services, and L2/L3 networking. Keerthi is an enthusiastic learner, holding a Master’s degree in Computer Networks from the Manipal Institute of Technology and a Bachelor’s degree from VTU.