Benchmarking Cold Start Performance in Node.js GraphQL Frameworks
Introduction
Cold start performance is an important consideration for applications running in serverless environments or containerized deployments. Understanding how different GraphQL frameworks perform during initialization can help developers choose the right tool for their workloads.
This article presents a benchmarking study on the cold start times of popular Node.js GraphQL frameworks. It complements the previous article, Choosing the Right Compute Model, by providing concrete data on the startup performance of GraphQL frameworks that might be deployed in different compute environments.
The Benchmarking Experiment
Methodology
This experiment is based on the node-graphql-benchmarks repository by Ben Awad, which primarily benchmarks GraphQL frameworks in terms of throughput. I forked that repository and added cold start performance metrics, taking inspiration from the Fastify benchmarks repository. The fork is available on my GitHub, and the raw cold start results are recorded in METRICS.md.
To measure the cold start performance of each framework, I used a simple hello-world GraphQL API setup. The test captures two metrics (a minimal measurement sketch follows the list):
- Startup time: Time taken to initialize the application.
- Listen time: Time taken until the HTTP server is ready to accept requests.
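The idea behind these two metrics can be illustrated with a minimal sketch. This is not the exact code from the forked repository; it is a hello-world express-graphql server (one of the benchmarked variants) with ad-hoc timing added, assuming express, express-graphql, and graphql are installed:

```js
// Minimal sketch of the measurement idea, not the benchmark's actual harness.
const startedAt = process.hrtime.bigint();

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

// Hello-world schema, similar in spirit to the benchmarked setups.
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

const app = express();
app.use('/graphql', graphqlHTTP({ schema, rootValue: { hello: () => 'world' } }));

// "Startup": everything up to (but not including) binding the port.
const startupMs = Number(process.hrtime.bigint() - startedAt) / 1e6;

app.listen(4000, () => {
  // "Listen": measured once the HTTP server is ready to accept requests.
  const listenMs = Number(process.hrtime.bigint() - startedAt) / 1e6;
  console.log(`startup: ${startupMs.toFixed(2)} ms, listen: ${listenMs.toFixed(2)} ms`);
});
```

In the benchmark itself each framework is started several times and the two timestamps are recorded per run, which is what the samples-per-framework figure below refers to.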
Test environment:
- Machine: darwin x64 | 16 vCPUs | 16.0 GB RAM
- Node.js version: v18.7.0
- Samples per framework: 5
Results
The table below shows the measured startup and listen times for each framework, sorted by listen time.
Framework | startup (ms) | listen (ms) |
---|---|---|
fastify-REST.js | 177.98 | 190.56 |
graphql-compose+async.js | 277.69 | 281.22 |
express-graphql+graphql-compose.js | 278.04 | 281.54 |
express-graphql+graphql-jit+graphql-compose.js | 326.61 | 330.82 |
type-graphql+async.js | 343.35 | 347.15 |
type-graphql+middleware.js | 345.45 | 349.01 |
type-graphql+async-middleware.js | 346.59 | 350.13 |
express-graphql+type-graphql.js | 352.75 | 356.24 |
express-graphql.js | 387.67 | 391.12 |
fastify-express-grapql-typed.js | 377.15 | 391.17 |
benzene-http.js | 391.38 | 394.66 |
apollo-schema+async.js | 392.47 | 395.98 |
express-graphql+graphql-jit+type-graphql.js | 397.39 | 400.84 |
fastify-express-graphql-typed-jit.js | 398.94 | 411.06 |
graphql-api-koa.js | 410.65 | 414.32 |
apollo-server-express-tracing.js | 412.23 | 415.73 |
apollo-server-express.js | 413.64 | 417.04 |
core-graphql-jit-str.js | 419.10 | 422.30 |
core-graphql-jit-buf.js | 419.95 | 423.16 |
apollo-opentracing.js | 420.35 | 423.78 |
express-gql.js | 428.06 | 431.72 |
express-graphql+graphql-jit.js | 440.57 | 444.12 |
graphql-api-koa+graphql-jit.js | 458.00 | 461.67 |
core-graphql-jit-buf-fjs.js | 464.44 | 468.12 |
mercurius+graphql-compose.js | 457.14 | 502.77 |
apollo-server-koa+graphql-jit+type-graphql.js | 502.76 | 506.43 |
mercurius+graphql-jit+type-graphql.js | 487.01 | 532.63 |
yoga-graphql.js | 550.11 | 556.97 |
yoga-graphql-trace.js | 550.72 | 557.56 |
fastify-express-graphql-jit.js | 547.10 | 559.70 |
express-graphql-dd-trace-no-plugin.js | 649.30 | 651.26 |
mercurius.js | 647.97 | 695.01 |
mercurius+graphql-jit.js | 656.22 | 704.80 |
express-graphql-dd-trace.js | 801.28 | 803.21 |
express-graphql-dd-trace-less.js | 819.56 | 821.55 |
Note: This experiment was conducted in August 2022, but I only got around to writing about it over two years later. This means that the versions of Node.js and the frameworks tested are likely outdated. Additionally, since these benchmarks were run on my personal machine, the results may not be as accurate as those obtained in a dedicated VM or controlled environment.
Conclusion
This benchmark provides a rough estimate of cold start times for various Node.js GraphQL frameworks. While the absolute numbers may vary depending on the runtime environment, these results give insight into how different frameworks initialize and prepare their HTTP servers.
If cold start performance is an important factor for your infrastructure, chances are that your cloud provider already offers built-in tools to measure it more accurately within your deployment environment.
It’s also important to note that this benchmark only measures a hello-world implementation of the different frameworks. In a real-world application, you will likely have more dependencies, which can significantly increase cold start times. Additionally, real-world applications often connect to external services like databases, which can introduce further delays—especially if the database goes into a sleep state (e.g., Neon or other serverless databases).
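As a rough illustration of that last point (hypothetical, not part of the benchmark), an application that eagerly awaits an external connection before calling listen() pays that full connection time as part of its cold start. Here `connectToDatabase` is just a stand-in for any real client setup:

```js
// Hypothetical sketch: an eager external connection adds directly to cold start.
const express = require('express');

async function connectToDatabase() {
  // Placeholder for a real client handshake (DNS, TLS, auth), simulated with a delay.
  return new Promise((resolve) =>
    setTimeout(() => resolve({ query: async () => 'ok' }), 500)
  );
}

async function main() {
  const app = express();

  // Eager connection: this delay (or a serverless database waking from sleep)
  // happens before listen(), so it lands entirely in the cold start.
  const db = await connectToDatabase();

  app.get('/health', async (_req, res) => {
    res.send(await db.query());
  });

  // Deferring the connection until the first request would keep the listen time
  // close to the hello-world numbers above, at the cost of a slower first query.
  app.listen(4000, () => console.log('ready'));
}

main();
```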
If you're deploying GraphQL applications in a serverless or containerized setup, understanding these startup characteristics can help you choose the right framework for your workload.
For a broader discussion on compute models and their impact on application performance, check out the previous article: Choosing the Right Compute Model.
Acknowledgments
Special thanks to Ben Awad for creating the original node-graphql-benchmarks repository, and to Rafael Gonzaga and the rest of the Fastify team for their work on benchmarking cold start performance in the fastify/benchmarks repository. Their contributions provided a strong foundation for this experiment.