Welcome to the fourth issue of Deep Engineering.
In enterprise software systems, few challenges loom larger than refactoring legacy systems to meet modern needs. These efforts can feel like open-heart surgery on critical applications that are still running in production. Systems requiring refactoring are often business-critical, poorly modularized, and resistant to change by design.
To understand how Domain-Driven Design (DDD) can guide this process, we spoke with Alessandro Colla and Alberto Acerbis—authors of Domain-Driven Refactoring (Packt, 2025) and co-founders of the "DDD Open" and "Polenta and Deploy" communities.
Colla brings over three decades of experience in eCommerce systems, C# development, and strategic software design. Acerbis is a Microsoft MVP and backend engineer focused on building maintainable systems that deliver business value. Together, they offer a grounded, pattern-skeptical view of what DDD really looks like in legacy environments—and how teams can use it to make meaningful change without rewriting from scratch.
You can watch the full interview and read the full transcript here—or keep reading for our distilled take on the principles, pitfalls, and practical steps that shape successful DDD refactoring.
dev2next is the premier conference designed for software developers, architects, technology leaders, development managers, and directors. Explore cutting-edge strategies, tools, and best practices for building powerful applications with the latest trends.
When: September 29 - October 2, 2025
Where: Colorado Springs, CO
Legacy systems are rarely anyone’s favorite engineering challenge. Often labeled “big balls of mud,” these aging codebases resist change by design—lacking tests, mixing concerns, and coupling business logic to infrastructure in ways that defy modular thinking. Yet they remain critical. “It’s more common to work on what we call legacy code than to start fresh,” Acerbis notes from experience. Their new book, Domain-Driven Refactoring, was born from repeatedly facing large, aging codebases that needed new features. “The idea behind the book is to bring together, in a sensible and incremental way, how we approach the evolution of complex legacy systems,” explains Colla. Rather than treat DDD as something only for new projects, Colla and Acerbis show how DDD’s concepts can guide the incremental modernization of existing systems.
They begin by reinforcing core DDD concepts—what Colla calls their “foundation”—before demonstrating how to apply patterns gradually. This approach acknowledges a hard truth: when a client asks for “a small refactor” of a legacy system, “it’s never small. It always becomes a bigger refactor,” Acerbis says with a laugh. The key is to take baby steps. “Touching a complex system is always difficult, on many levels,” Colla cautions, so the team must break down the work into manageable changes rather than trying an all-at-once overhaul.
One of the first decisions in a legacy overhaul is whether to break a monolithic application into microservices. But Colla and Acerbis urge caution here—hype should not dictate architecture.
“Normally, a customer comes to us asking to transform their legacy application into a microservices system because — you know — ‘my cousin told me microservices solve all the problems,’” Acerbis jokes. The reality is that blindly carving up a legacy system into microservices can introduce as much complexity as it removes. “Once you split your system into microservices, your architecture needs to support that split,” he explains, from infrastructure and deployment to data consistency issues.
Instead, the duo advocates an interim step: first evolve the messy monolith into a well-structured modular monolith. “Using DDD terms, you should move your messy monolith into a good modular monolith,” says Acerbis. In a modular monolith, clear boundaries are drawn around business subdomains (often aligning with DDD bounded contexts), but the system still runs as a single deployable unit. This simplification and ordering within the monolith can often deliver the needed agility and clarity. “We love monoliths, OK? But modular ones,” Colla admits. With a modular monolith in place, teams can implement new features more easily and see if further decomposition is truly warranted. Only if needed—due to scale or independent deployment demands—should you “split it into microservices. But that’s a business and technical decision the whole team needs to make together,” Acerbis emphasizes.
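To make the modular-monolith shape concrete, here is a minimal C# sketch (the Ordering and Billing modules and all type names are our own illustration, not code from the book): each bounded context lives in its own namespace with internal implementation details, and other modules depend only on a small public contract.

```csharp
using System;

// One deployable unit, but each bounded context is its own module.
namespace Modular.Ordering
{
    // The only surface other modules are allowed to depend on.
    public interface IOrderQueries
    {
        decimal GetOrderTotal(Guid orderId);
    }

    // Implementation details stay internal to the module.
    internal sealed class OrderQueries : IOrderQueries
    {
        public decimal GetOrderTotal(Guid orderId) => 0m; // look up the order here
    }
}

namespace Modular.Billing
{
    using Modular.Ordering;

    // Billing depends on Ordering's public contract, never on its internals.
    public sealed class InvoiceService
    {
        private readonly IOrderQueries _orders;

        public InvoiceService(IOrderQueries orders) => _orders = orders;

        public decimal PrepareInvoice(Guid orderId) => _orders.GetOrderTotal(orderId);
    }
}
```

Because both modules still ship as one process, splitting later into microservices becomes a deployment decision rather than a rewrite: the boundary and the contract already exist.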
By following this journey, teams often find full microservices unnecessary. Colla notes that many times they’ve been able to meet all business requirements just by going modular, without ever needing microservices. The lesson: choose the simplest architecture that solves the problem and avoid microservices sprawl unless your system’s scale and complexity absolutely demand it.
A central theme from Colla and Acerbis is that DDD is fundamentally about understanding the problem domain, not checking off a list of patterns. “Probably the most important principle is that DDD is not just technical — it’s about principles,” says Acerbis. Both engineers stress the importance of exploration and ubiquitous language before diving into code. “Start with the strategic patterns — particularly the ubiquitous language — to understand the business and what you’re dealing with,” Colla advises. In practice, that means spending time with domain experts, clarifying terminology, and mapping out the business processes and subdomains. Only once the team shares a clear mental model of “what actually needs to be built” should they consider tactical design patterns or write any code.
Colla candidly shares that he learned this the hard way.
“When I started working with DDD, CQRS, and event sourcing, I made the mistake of jumping straight into technical modeling — creating aggregates, entities, value objects — because I’m a developer, and that’s what felt natural. But I skipped the step of understanding why I was building those classes.
I ended up with a mess.”
Now he advocates understanding the why, then the how. “We spent the first chapters of the book laying out the principles. We wanted readers to understand the why — so that once you get to the code, it comes naturally,” Colla says.
This principle-centric mindset guards against a common trap: applying DDD patterns by rote or “cloning” a solution from another project.
“I’ve seen situations where someone says, ‘I’ve already solved a similar problem using DDD — I’ll just reuse that design.’ But no, that’s not how it works,” Acerbis warns.
Every domain is different, and DDD is “about exploration. Every situation is different.” By treating DDD as a flexible approach to learning and modeling the domain—rather than a strict formula—teams can avoid over-engineering and build models that truly fit their business.
Once the team has a solid grasp of the domain, they can start to apply DDD’s tactical patterns (entities, value objects, aggregates, domain events, etc.) to reshape the code. But which pattern comes first? Colla doesn’t prescribe a one-size-fits-all sequence. “I don’t think there’s a specific pattern to apply before others,” he says. The priority is dictated by the needs of the domain and the pain points in the legacy code. However, the strategic understanding guides the tactical moves: by using the ubiquitous language and bounded contexts identified earlier, the team can decide where an aggregate boundary should be, where to introduce a value object for a concept, and so on.
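As a flavor of what those tactical moves look like in code, here is a hedged C# sketch (the Money and Order names are invented for illustration): a value object replaces a raw decimal, and an aggregate root guards its own invariants behind a small public surface.

```csharp
using System;
using System.Collections.Generic;

// Value object: defined by its values, immutable by construction.
public sealed record Money(decimal Amount, string Currency)
{
    public Money Add(Money other) =>
        other.Currency == Currency
            ? new Money(Amount + other.Amount, Currency)
            : throw new InvalidOperationException("Currency mismatch.");
}

// Aggregate root: the only entry point for changing order state,
// so invariants are enforced in exactly one place.
public sealed class Order
{
    private readonly List<(string Sku, Money Price)> _lines = new();

    public Guid Id { get; } = Guid.NewGuid();
    public Money Total { get; private set; } = new(0m, "EUR");

    public void AddLine(string sku, Money price)
    {
        if (string.IsNullOrWhiteSpace(sku))
            throw new ArgumentException("SKU is required.", nameof(sku));

        _lines.Add((sku, price));
        Total = Total.Add(price);
    }
}
```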
Acerbis emphasizes that their book isn’t a compendium of all DDD patterns—classic texts already cover those. Instead, it shows how to practically apply a selection of patterns in a legacy refactoring context. The aim is to go from “a bad situation — a big ball of mud — to a more structured system,” he says. A big win of this structure is that new features become easier to add “without being afraid of introducing bugs or regressions,” because the code has clear separation of concerns and meaningful abstractions.
Exploring the domain comes first. Only then should the team “bring in the tactical patterns when you begin touching the code,” says Colla. In other words, let the problem guide the solution. By iteratively applying patterns in the areas that need them most, the system gradually transforms—all while continuing to run and deliver value. This incremental refactoring is core to their approach; it avoids the risky big-bang rewrite and instead evolves the architecture piece by piece, in sync with growing domain knowledge.
In theory, it sounds ideal to methodically refactor a system. In reality, business stakeholders are rarely patient—they need new features yesterday. Colla acknowledges this tension:
“This is the million-dollar question. As in life, the answer is balance. You can't have everything at once — you need to balance features and refactoring.”
The solution is to weave refactoring into feature development, rather than treating it as a separate project that halts new work.
“Stakeholders want new features fast because the system has to keep generating value,” Colla notes. Completely pausing feature development for months of cleanup is usually a non-starter (“We’ve had customers say, ‘You need to fix bugs and add new features — with the same time and budget.’”). Instead, Colla’s team refactors in context: “if a new feature touches a certain area of the system, we refactor that area at the same time.” This approach may slightly slow down that feature’s delivery, but it pays off in the long run by preventing the codebase from deteriorating further. Little by little (“always baby steps,” as Colla says), they improve the design while still delivering business value.
Acerbis adds that having a solid safety net of tests is what makes this sustainable. Often, clients approach them saying it’s too risky or slow to add features because “the monolith has become a mess.” The first order of business, then, is to shore up test coverage.
“We usually start with end-to-end tests to make sure that the system behaves the same way after changes,” he explains.
Writing tests for a legacy system can be time-consuming initially, but it instills confidence.
“In the beginning, it takes time. You have to build that infrastructure and coverage. But as you move forward, you’ll see the benefits — every time you deploy a new feature, you’ll know it was worth it.”
With robust tests in place, the team can refactor aggressively within each iteration, knowing they will catch any unintended side effects before they reach users.
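What such a safety-net test might look like, sketched with xUnit and ASP.NET Core's WebApplicationFactory (the route, payload shape, and assertions are placeholders, not from the interview):

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// End-to-end characterization test: pin down current behavior before
// refactoring, so any regression surfaces immediately.
// Note: with minimal hosting, expose the entry point for the test host
// by adding `public partial class Program {}` to the application.
public class OrderEndpointTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public OrderEndpointTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public async Task Getting_an_existing_order_behaves_the_same_after_changes()
    {
        var client = _factory.CreateClient();

        // Hypothetical endpoint; cover whatever critical user flows your system exposes.
        var response = await client.GetAsync("/api/orders/42");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        var body = await response.Content.ReadAsStringAsync();
        Assert.Contains("\"orderId\":42", body); // assert the observable contract, not internals
    }
}
```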
Even the best technical refactoring will falter if organizational structure is at odds with the design. This is where Conway’s Law comes into play—the notion that software systems end up reflecting the communication structures of the organizations that build them.
“When introducing DDD, it’s not just about technical teams. You need involvement from domain experts, developers, stakeholders — everyone,” says Acerbis.
In practice, this means that establishing clean bounded contexts in code may eventually require realigning team responsibilities or communication paths in the company.
Of course, changing an organization chart is harder than changing code. Colla and Acerbis therefore approach it in phases. “Context mapping is where we usually begin — understanding what each team owns and how they interact,” Colla explains. They first try to fix the code boundaries while not breaking any essential communication between people or teams. For instance, if two modules should only talk via a well-defined interface, they might introduce an anti-corruption layer in code, even if the same two teams still coordinate closely as they always have. Once the code’s boundaries stabilize and prove beneficial, the case can be made to align the teams or management structure accordingly.
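A minimal sketch of that anti-corruption layer in C# (the legacy record and Customer types are invented for illustration): the downstream module translates the upstream model into its own ubiquitous language at a single seam, instead of letting the foreign model leak everywhere.

```csharp
// Upstream (legacy) model, owned by another team or module.
public sealed class LegacyCustomerRecord
{
    public string CUST_NM { get; set; } = "";  // cryptic legacy naming
    public int STATUS_CD { get; set; }         // e.g. 1 = active, 2 = suspended
}

// Downstream module's own model, expressed in its ubiquitous language.
public sealed record Customer(string Name, bool IsActive);

// Anti-corruption layer: the single place where translation happens.
public static class CustomerTranslator
{
    public static Customer ToCustomer(LegacyCustomerRecord record) =>
        new(record.CUST_NM.Trim(), record.STATUS_CD == 1);
}
```

If the upstream model changes, only the translator needs to change; the teams can keep coordinating as before while the code boundary hardens.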
“The hardest part is convincing the business side that this is the right path,” Acerbis admits. Business stakeholders control budgets and priorities, so without their buy-in, deep refactoring stalls. The key is to demonstrate value early and keep them involved. Ultimately, “it only works if the business side is on board — they’re the ones funding the effort,” he says. Colla concurs: “everyone — developers, architects, business — needs to share the same understanding. Without that alignment, it doesn’t work.” DDD, done right, becomes a cross-discipline effort, bridging tech and business under a common language and vision.
Given the complexity of legacy transformation, what tools or frameworks can help? Colla’s answer may surprise some: there is no magic DDD framework that will do it for you. “There aren’t any true ‘DDD-compliant’ frameworks,” he says. DDD isn’t something you can buy off-the-shelf; it’s an approach you must weave into how you design and code. However, there are useful libraries and techniques to smooth the journey, especially around testing and architecture fitness.
“What’s more important to me is testing — especially during refactoring. You need a strong safety net,” Colla emphasizes. His team’s rule of thumb: start by writing end-to-end tests for current behavior. “We always start with end-to-end tests. That way, we make sure the expected behavior stays the same,” Colla shares. These broad tests cover critical user flows so that if a refactoring accidentally changes something it shouldn’t, the team finds out immediately. Next, they add architectural tests (often called fitness functions) to enforce the intended module boundaries. “Sometimes, dependencies break boundaries. Architectural tests help us catch that,” he notes. For instance, a test might ensure that code in module A never calls code in module B directly, enforcing decoupling. And of course, everyday unit tests are essential for the new code being written: “unit tests, unit tests, unit tests,” Colla repeats for emphasis. “They prove your code does what it should.”
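One way to encode such a boundary rule is with an architectural test. Here is a sketch using the open source NetArchTest library with xUnit (the MyApp.Ordering and MyApp.Billing namespaces are placeholders for your real modules):

```csharp
using NetArchTest.Rules;
using Xunit;

// Architectural fitness test: fail the build if a module reaches
// across a boundary it should not know about.
public class ModuleBoundaryTests
{
    [Fact]
    public void Ordering_must_not_depend_on_Billing()
    {
        var result = Types.InAssembly(typeof(ModuleBoundaryTests).Assembly)
            .That().ResideInNamespace("MyApp.Ordering")
            .ShouldNot().HaveDependencyOn("MyApp.Billing")
            .GetResult();

        Assert.True(result.IsSuccessful, "Ordering must not reference Billing types.");
    }
}
```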
Acerbis agrees that no all-in-one DDD framework exists (and maybe that’s for the best). “DDD is like a tailor-made suit. Every time, you have to adjust how you apply the patterns depending on the problem,” he says. Instead of relying on a framework to enforce DDD, teams should rely on discipline and tooling – especially the kind of automated tests Colla describes – to keep their refactoring on track. Acerbis also offers a tip on using AI assistance carefully: tools like GitHub Copilot can be helpful for generating code, but “you don’t know how it came up with that solution.” He prefers to have developers write the code with understanding, then use AI to review or suggest improvements. This ensures that the team maintains control over design decisions rather than blindly trusting a tool.
DDD often goes hand-in-hand with event-driven architecture for decoupling. Used well, domain events can keep bounded contexts loosely coupled. But Colla and Acerbis caution that it’s easy to misuse events and end up with a distributed mess. Acerbis distinguishes two kinds of events with very different roles: domain events and integration events. “Domain events should stay within a bounded context. Don’t share them across services,” he warns. If you publish your internal domain events for other microservices to consume, you create tight coupling: “when you change the domain event — and you will — you’ll need to notify every team that relies on it. That’s tight coupling, not decoupling.”
The safer pattern is to keep domain events private to a service or bounded context, and publish separate integration events for anything that truly needs to be shared externally. That way, each service can evolve its internal model (and its domain event definitions) independently. Colla admits he’s learned this by making the mistakes himself. The temptation is to save effort by reusing an event “because it feels efficient,” but six months later, when one team changes that event’s schema, everything breaks. “We have to resist that instinct and think long-term,” he says. Even if it requires a bit more work upfront to define distinct integration events, it prevents creating what he calls a “distributed monolith that’s impossible to evolve” – a system where services are theoretically separate but so tightly coupled by data contracts that they might as well be a single unit.
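The distinction is easy to sketch in C# (all type names invented for illustration): the domain event stays internal to its bounded context, while a deliberately small, versioned integration event is the only thing that crosses service boundaries.

```csharp
using System;

// Domain event: internal to the Ordering context. Free to change
// whenever the model changes, because nobody outside consumes it.
internal sealed record OrderPlaced(
    Guid OrderId, Guid CustomerId, decimal Total, string SalesChannel);

// Integration event: the published contract. Small, stable, versioned;
// other services depend on this, never on the domain model.
public sealed record OrderPlacedIntegrationEventV1(Guid OrderId, decimal Total);

internal static class OrderEventMapper
{
    // The mapping is the decoupling point: internal details such as
    // SalesChannel simply never leave the bounded context.
    public static OrderPlacedIntegrationEventV1 ToIntegration(OrderPlaced e) =>
        new(e.OrderId, e.Total);
}
```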
Another often overlooked aspect of event-driven systems is the user experience in an eventually consistent world. Because events introduce asynchrony, UIs must be designed to handle the delay. Acerbis mentions using task-based UIs, where screens are organized around high-level business tasks rather than low-level CRUD forms, to better set user expectations and capture intent that aligns with back-end processes. The bottom line is that events are powerful, but they come with their own complexities – teams must design and version them thoughtfully, and always keep the end-to-end system behavior in mind.
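The task-based idea shows up most clearly in the commands a screen sends: instead of one generic update call, each user task becomes an explicit command that captures intent. A tiny illustrative C# sketch (names are our own):

```csharp
using System;

// CRUD-style: one opaque update; the back end must guess what the user meant.
public sealed record UpdateOrder(Guid OrderId, string? Address, string? Status);

// Task-based: each command names the business task, so the asynchronous
// back-end process (and the UI feedback) can be designed around it.
public sealed record ChangeShippingAddress(Guid OrderId, string NewAddress);
public sealed record CancelOrder(Guid OrderId, string Reason);
```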
If you found Colla and Acerbis’ insights useful, their book, Domain-Driven Refactoring, offers a deeper, hands-on perspective—showing how to incrementally apply DDD principles, with substantial code examples, in real systems under active development. Here is an excerpt that covers how to integrate events within a CQRS architecture.
In this chapter, we will explore how to effectively integrate events into your system using the Command Query Responsibility Segregation (CQRS) pattern. As software architectures shift from monolithic designs to more modular, distributed systems, adopting event-driven communication becomes essential. This approach offers scalability, decoupling, and resilience, but also brings complexity and challenges such as eventual consistency, fault tolerance, and infrastructure management.
The primary goal of this chapter is to guide you through the implementation of event-driven mechanisms within the context of a CQRS architecture. By the end of this chapter, you will have a clear understanding of how events and commands operate in tandem to manage state changes, communicate between services, and optimize both the reading and writing of data.
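As a taste of the mechanics the chapter walks through, here is a compact, illustrative CQRS sketch (not the book's code): a command mutates the write side, which emits an event, and a projection consumes that event to keep a separate, eventually consistent read model.

```csharp
using System;
using System.Collections.Generic;

// Write side: a command expresses intent; handling it produces an event.
public sealed record PlaceOrder(Guid OrderId, decimal Total);
public sealed record OrderPlaced(Guid OrderId, decimal Total);

public sealed class OrderCommandHandler
{
    public OrderPlaced Handle(PlaceOrder cmd)
    {
        if (cmd.Total <= 0) throw new ArgumentException("Total must be positive.");
        // ...persist the state change to the write model here...
        return new OrderPlaced(cmd.OrderId, cmd.Total);
    }
}

// Read side: a projection consumes events and maintains a query-optimized
// view. Consistency between the two sides is eventual, not immediate.
public sealed class OrderSummaryProjection
{
    private readonly Dictionary<Guid, decimal> _totals = new();

    public void When(OrderPlaced e) => _totals[e.OrderId] = e.Total;

    public decimal? GetTotal(Guid orderId) =>
        _totals.TryGetValue(orderId, out var t) ? t : null;
}
```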
Domain-Driven Refactoring by Alessandro Colla and Alberto Acerbis is a practical guide to modernizing legacy systems using DDD. Through real-world C# examples, the authors show how to break down monoliths into modular architectures—whether evolving toward microservices or improving maintainability within a single deployable unit. The book covers both strategic and tactical patterns, including bounded contexts, aggregates, and event-driven integration.
Use code DOMAIN20 for 20% off at packtpub.com — valid through June 16, 2025.
Context Mapper 6.12.0 — Strategic DDD Refactoring, Visualized
Context Mapper is an open source modeling toolkit for strategic DDD, purpose-built to define and evolve bounded contexts, map interrelationships, and drive architectural refactorings. It offers a concise DSL for creating context maps and includes built-in transformations for modularizing monoliths, extracting services, and analyzing cohesion/coupling trade-offs.
The latest version continues its focus on reverse-engineering context maps from Spring Boot and Docker Compose projects, along with support for automated architectural refactorings—making it ideal for teams modernizing legacy systems or planning microservice transitions.
That’s all for today. Thank you for reading the fourth issue of Deep Engineering. We’re just getting started, and your feedback will help shape what comes next.
Take a moment to fill out this short survey—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.
We’ll be back next week with more expert-led content.
Stay awesome,
Divya Anne Selvaraj
Editor in Chief, Deep Engineering
If your company is interested in reaching an audience of developers, software engineers, and tech decision makers, you may want to advertise with us.