





















































Hi,
Welcome to the second issue of Deep Engineering.
Postman’s 2024 State of the API report reveals that 74% of teams now follow an API-first approach, signaling a major shift away from code-first development. As APIs grow more complex—and as AI agents, gRPC, and GraphQL reshape how services communicate—the question is no longer whether to test early, but how to test well at scale.
In this issue, we speak with Dave Westerveld—developer, author of API Testing and Development with Postman, and testing specialist with years of experience across both mature systems and early-stage teams. Drawing from his work on automation strategy, API integration, and scaling quality practices, Dave offers a grounded take on CI pipelines, parallel execution, and the tradeoffs of modern API protocols.
You can watch the full interview and read the full transcript here, or keep reading for our distilled take on what makes modern test design both reliable and fast.
Sponsored:
Learn how your app could evolve automatically, leaving reverse engineers behind with every release.
Hosted by Guardsquare featuring:
Anton Baranenko, Product Manager
Date/time: Tuesday, June 10th at 4 PM CET (10 AM EDT)
Some testing principles are foundational enough to survive revolutions in tooling. That’s the starting point for Dave Westerveld’s approach to API testing in the post-AI tech landscape.
“There are testing principles that were valid in the '80s, before the consumer internet was even a thing. They were valid in the world of desktop and internet computing, and they’re still valid today in the world of AI and APIs.”
And while the landscape has shifted dramatically in the last two years—Postman now ships an AI assistant, supports gRPC and GraphQL, and offers orchestration features for agentic architectures—Westerveld believes the best way to scale quality is to combine these new capabilities with timeless habits of mind: systems thinking, structured test design, and a bias for clarity over cleverness.
Westerveld argues that API testers need to operate with a systems-level understanding of the software they’re validating. He calls this:
“(The) ability to zoom out and see the entire forest first, and then come back in and see the tree, and realize how it fits into the larger picture and how to approach thinking about and testing it.”
In practice, that means asking not just whether an endpoint returns the expected result, but how it fits into the larger architecture and user experience. It means understanding when to run exploratory tests, when to assert workflows, and when to defer to contract validation.
These instincts, he says, haven’t changed even as APIs have diversified:
“Things like how to approach and structure your testing are … timeless when it comes to REST APIs. They haven’t fundamentally changed in the last 20 years—neither should the way you think about testing them.”
What matters more than syntax is structure—how testers reason about coverage, maintainability, and feedback cycles.
Postman’s Postbot is the most visible new capability in the platform’s AI strategy. Built atop LLM infrastructure, it can suggest test cases, generate assertions, and translate prompts into working scripts. Internally, it draws on your Postman data—collections, environments, history—to provide context-aware assistance.
Westerveld sees the benefit, but draws a hard line between skilled and unskilled use:
“For a skilled tester, someone with a lot of experience, these AI tools can help you move more quickly through tasks you already know how to do. Often, when you reach that level, you’ve done a lot of testing—you can look at something and say, ‘OK, this is what I need to do here.’ But it can get repetitive to implement some scripts or write things out again and again.”
He frames AI as an accelerant: helpful when you understand the underlying logic, risky when you don’t.
“For more junior people, there’s a temptation to use AI to auto-generate scripts without fully understanding what those scripts are doing. I think that’s the wrong approach early in your career, because once the AI gets stuck, you won’t know how to move forward.”
This caution aligns with Postman’s architectural choices. Postbot uses a deterministic intent classifier to map prompts to supported capabilities, orchestrates tool usage through a controlled execution layer, and codifies outputs as structured in-app actions—such as generating test scripts, visualizing responses, or updating request metadata. Its latest iteration adds a memory-aware agent model that supports multi-turn conversations and multi-action workflows, but with strict boundaries around tool access and state transitions.
In this, Westerveld agrees: AI-generated tests are often brittle and opaque. Use them, he advises,
“more as a learning tool than an autocomplete tool.”
One of Westerveld’s strongest positions concerns test design: automated tests should be independent of each other. This is both a correctness and scalability concern. When teams overuse shared setup code or rely on common state, it breaks test parallelism and increases the chance of cascading failures.
In Postman, reusable scripts are managed via the Package Library, which allows teams to store JavaScript test logic in named packages and import them into requests, collections, monitors, and Flows. While this enables consistency and reuse, Westerveld notes that it also introduces new failure points if not applied judiciously.
“If something in the shared code breaks—or if a dependency the shared code relies on fails—you can end up with all your tests failing. …So, you have to be careful that a single point of failure doesn’t take everything down.”
His solution: only abstract what truly reduces duplication, and mock where necessary.
“In cases like that, it’s worth asking: ‘Do we really need this to be a shared script, or can we mock this instead?’ For example, if you're repeatedly calling an authentication endpoint that you're not explicitly testing, maybe you could insert credentials directly instead. That might be a cleaner and faster solution.”
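The pattern Westerveld describes—injecting credentials directly rather than calling an auth endpoint you are not testing—can be sketched in a few lines. This is an illustrative Python sketch (Postman’s own scripting is JavaScript); `get_token` and `fetch_orders` are hypothetical stand-ins for a real client.

```python
from unittest.mock import patch

# Hypothetical client code: get_token() would normally call a live
# auth endpoint, and fetch_orders() calls it before each request.
def get_token():
    raise RuntimeError("would call the real auth endpoint")

def fetch_orders():
    token = get_token()
    # ...real code would attach the token and call the orders API...
    return {"auth": token, "orders": []}

def test_fetch_orders_without_touching_auth():
    # Patch the auth call with a canned token: the test stays
    # independent of the auth service, which is not under test here.
    with patch(f"{__name__}.get_token", return_value="test-token"):
        result = fetch_orders()
    assert result["auth"] == "test-token"

test_fetch_orders_without_touching_auth()
print("test passed without calling the auth endpoint")
```

The test no longer depends on the auth service being up, which also keeps it safe to run in parallel with other tests.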
He also advocates for test readability. Tests, he says, should act as documentation. Pulling too much logic into shared libraries makes them harder to understand.
“A well-written test tells you what the system is supposed to do. It shows real examples of expected behavior, even if it's not a production scenario. You can read it and understand the intent.
But when you extract too much into shared libraries, that clarity goes away. Now, instead of reading one script, you’re bouncing between multiple files trying to figure out how things work. That hurts readability and reduces the test's value as living documentation.”
With Postman’s new Spec Hub, teams can now author, govern, and publish API specifications across supported formats, helping standardize collaboration around internal and external APIs. As Westerveld puts it:
“The whole point of having a specification is that it defines the contract we’re all agreeing to—whether that’s between frontend and backend teams, or with external consumers.”
He recommends integrating schema checks as early as possible:
“If you're violating that contract, the right response is to stop. … So yes, in that sense, we want development to ‘slow down’ when there’s a spec violation. But in the long run, this actually speeds things up by improving quality. You’re building on a solid foundation.”
He advocates running validation as part of the developer CI pipeline—using lightweight checks at merge gates or as part of pull requests.
This pattern aligns with what Postman now enables. Spec Hub introduces governance features such as built-in linting to enforce organizational standards by default. For CI integration, Postman’s contract validation tooling can be executed using the Postman CLI or Newman, both of which support running test collections—including those that validate OpenAPI contracts—within continuous integration pipelines. Together, these tools allow teams to maintain a single, trusted specification that anchors both collaboration and automated enforcement across environments.
Protocol diversity is a reality for modern testers. Westerveld emphasizes that while core principles carry over across styles, testing strategies must adapt to the nuances of each protocol.
gRPC, for example, provides low-level access through strongly typed RPC calls defined in .proto files. This increases both the power and the surface area of test logic.
“One area where you really see a difference with modern APIs is in how you think about test coverage. The way you structure and approach that will be different from how you’d handle a REST API.
That said, there are still similar challenges. For instance, if you’re using gRPC and you’ve got a protobuf or some kind of contract, it’s easier to test—just like with REST, if you have an OpenAPI specification.
So, advocating for contracts stays the same regardless of API type. But with GraphQL or gRPC, you need more understanding of the underlying code to test them adequately. With REST, you can usually just look at what the API provides and get a good sense of how to test it.”
GraphQL, he notes, introduces different complexities. Because it’s introspective and highly composable:
“With GraphQL, there are a lot of possible query combinations… A REST API usually has simple, straightforward docs—‘here are the endpoints, here’s what they do’—maybe a page or two.
With GraphQL, the documentation is often dynamically generated and feels more like autocomplete. You almost have to explore the graph to understand what’s available. It’s harder to get comprehensive documentation.”
Postman supports both gRPC and GraphQL natively, enabling users to inspect schemas, craft requests, and run tests—all without writing code. But effective testing still depends on schema discipline and clarity. Westerveld points out that with GraphQL, where documentation can feel implicit or opaque, mock servers and contract-first workflows are critical. Postman helps here too, offering design features that can generate mocks and example responses directly from imported specs.
Postman’s recent support for the Model Context Protocol (MCP) and the launch of its AI Tool Builder mark a shift toward integrating agent workflows into the API lifecycle. Developers can now build and test MCP-compliant servers and requests using Postman’s familiar interface—lowering the barrier to designing autonomous agent interactions atop public or internal APIs.
But as Westerveld points out, these advances don’t replace fundamentals. His focus remains on feedback speed, execution reliability, and test independence.
“Shift-left and orchestration have been trending for quite a while. As an industry, we’ve been investing in these ideas for years—and we’re still seeing those trends grow. We’re pushing testing closer to where the code is written, which is great. At the same time, we’re seeing more thorough and complete API testing, which is another great development.”
He notes a natural tension between shift-left principles and orchestration complexity:
“Shift-left means running tests as early as possible, close to the code. The goal is quick feedback. But orchestration often involves more complexity—more setup, broader coverage—and that takes longer to run.
So those two trends can pull in different directions: speed versus depth.”
The path forward, he argues, lies in test design and execution architecture:
“We’re pushing testing left and improving the speed of execution. That’s happening through more efficient test design, better hardware, and—importantly—parallelization.
Parallelization is key. If we want fast feedback loops and shift-left execution, we need to run tests in parallel. For that to work, tests must be independent. That ties back to an earlier point I made—test independence isn’t just a nice-to-have. It’s essential for scalable orchestration.”
“So I think test orchestration is evolving in a healthy direction. We’re getting both faster and broader at the same time. And that’s making CI/CD pipelines more scalable and effective overall.”
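The link between independence and parallelism can be shown in a short sketch. Assuming each check shares no state with the others (the three checks below are hypothetical stand-ins for real API tests, with a sleep standing in for network latency), the suite can run them concurrently and collect results in any order.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Three independent "API tests": no shared fixtures, no ordering
# requirements, so interleaving them is safe.
def check_status():
    time.sleep(0.1)  # stand-in for network latency
    return ("status", True)

def check_schema():
    time.sleep(0.1)
    return ("schema", True)

def check_auth():
    time.sleep(0.1)
    return ("auth", True)

def run_suite(tests):
    # Because no test depends on another's side effects, results are
    # safe to collect as they complete.
    with ThreadPoolExecutor(max_workers=len(tests)) as pool:
        return dict(pool.map(lambda t: t(), tests))

if __name__ == "__main__":
    results = run_suite([check_status, check_schema, check_auth])
    print(results)
```

If any test mutated state the others read, the interleaving would make results nondeterministic—which is why Westerveld calls independence a prerequisite, not a nice-to-have.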
If you are looking to implement the principles discussed in our editorial, from contract-first design to CI integration, Westerveld’s book, API Testing and Development with Postman, offers a clear, hands-on walkthrough. Here is an excerpt from the book, which explains how contract testing verifies that APIs meet agreed expectations and walks you through setting up and validating these tests in Postman using OpenAPI specs, mock servers, and automated tooling.
In this chapter, we will learn how to set up and use contract tests in Postman, but before we do that, it’s important to make sure that you understand what they are and why you would use them. So, in this section, we will learn what contract testing is. We will also learn how to use contract testing and then discuss approaches to contract testing – that is, both consumer-driven and provider-driven contracts. To kick all this off, we are going to need to know what contract testing is. So, let’s dive into that.
…Contract testing is a way to make sure that two different software services can communicate with each other. Often, contracts are made between a client and a server. This is the typical place where an API sits, and in many ways, an API is a contract. It specifies the rules that the client must follow in order to use the underlying service. As I’ve mentioned already, contracts help make things run more smoothly. It’s one of the reasons we use APIs. We can expose data in a consistent way that we have contractually bound ourselves to. By doing this, we don’t need to deal with each user of our API on an individual basis and everyone gets a consistent experience.
However, one of the issues with an API being a contract is that we must change things. APIs will usually change and evolve over time, but if the API is the contract, you need to make sure that you are holding up your end of the contract. Users of your API will come to rely on it working in the way that you say it will, so you need to check that it continues to do so.
When I bought my home, I took the contract to a lawyer to have them check it over and make sure that everything was OK and that there would be no surprises. In a somewhat similar way, an API should have some checks to ensure that there are no surprises. We call these kinds of checks contract testing. An API is a contract, and contract testing is how we ensure that the contract is valid, but how exactly do you do that?
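The consumer-driven flavor mentioned above can be sketched minimally: the consumer records the interaction it relies on, and the provider’s test suite replays it against the real handler. All names here are hypothetical illustrations, not a specific contract-testing framework.

```python
# The consumer records the one interaction it depends on.
CONSUMER_CONTRACT = {
    "request": {"method": "GET", "path": "/users/42"},
    "response": {"status": 200, "body_fields": ["id", "name"]},
}

def provider_handler(method, path):
    # Stand-in for the provider's real endpoint logic.
    if method == "GET" and path.startswith("/users/"):
        return {"status": 200, "body": {"id": 42, "name": "Ada"}}
    return {"status": 404, "body": {}}

def verify_contract(contract, handler):
    # Replay the recorded request and check the provider still
    # honors the status and fields the consumer relies on.
    req, expected = contract["request"], contract["response"]
    actual = handler(req["method"], req["path"])
    assert actual["status"] == expected["status"], "status changed"
    for field in expected["body_fields"]:
        assert field in actual["body"], f"missing field: {field}"
    return True

if __name__ == "__main__":
    verify_contract(CONSUMER_CONTRACT, provider_handler)
    print("provider still honors the consumer contract")
```

Run on every provider change, this is the lawyer reviewing the contract: it catches a breaking change before any consumer is surprised by it.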
API Testing and Development with Postman, Second Edition by Dave Westerveld (Packt, June 2024) covers everything from workflow and contract testing to security and performance validation. The book combines foundational theory with real-world projects to help developers and testers automate and improve their API workflows.
Use code POSTMAN20 for 20% off at packtpub.com.
Bruno 2.3.0 — A Git-Native API Client for Lightweight, Auditable Workflows
Bruno is an open source, offline-first API client built for developers who want fast, version-controlled request management. The latest release, version 2.3.0 (May 2025), adds capabilities that push it further into production-ready territory.
It’s a strong fit for small teams, CI/CD testing, or cases where you want to keep everything under version control—without a heavyweight UI.
Westerveld on Bruno
“I recently tried Bruno. I liked it—I thought their approach to change management was really well designed. But it didn’t support some of the features I rely on. I experimented with it on a small project, but in the end, I decided I still needed Postman for my main workflows.”
“That said, I still open Bruno now and then. It’s useful, simple, and interesting—but we’re not ready to adopt it team-wide.”
Westerveld's advice: evaluate new tools with clear use cases in mind. Bruno may not replace your primary API platform overnight, but it’s a valuable addition to your workflow toolkit—especially for Git-native or OpenAPI-first teams.
That’s all for today. Thank you for reading this issue of Deep Engineering. We’re just getting started, and your feedback will help shape what comes next.
Take a moment to fill out this short survey—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.
We’ll be back next week with more expert-led content.
Stay awesome,
Divya Anne Selvaraj
Editor in Chief, Deep Engineering
If your company is interested in reaching an audience of developers, software engineers, and tech decision makers, you may want to advertise with us.