Hi,
Welcome to the very first issue of Deep Engineering.
With memory safety issues behind more than 70% of all known security vulnerabilities (CVEs), the push toward safer programming has become urgent. Should we rewrite in Rust or Go, or modernize how we write C++?
To answer this question, we turned to Patrice Roy—author of C++ Memory Management, long-time member of the ISO C++ Standards Committee, and veteran educator with nearly three decades of experience training systems programmers.
You can watch the full interview and read the full transcript here—or keep reading for our distilled take on what modern memory management should look like in practice.
One of the most important lessons in modern C++ is clear: "avoid manual memory handling if you can." As Patrice Roy explains, C++’s automatic storage and Resource Acquisition Is Initialization (RAII) mechanisms “work really well” and should be the first tools developers reach for.
Modern C++ favors type-driven ownership over raw pointers and new/delete. Smart pointers and standard containers make ownership explicit and self-documenting. For example, std::unique_ptr signals sole ownership in the code itself—eliminating ambiguity about responsibility. As Roy puts it:
“You don’t have to ask who will free the memory—it’s that guy. He’s responsible. It’s his job.”
Shared ownership is handled by std::shared_ptr, with reference-counted lifetime management. The key idea, Roy stresses, is visibility: ownership should be encoded in the code, not left to comments or convention. This design clarity eliminates entire classes of memory bugs.
The same principle applies to Standard Library containers. Types like std::vector manage memory internally—allocation, deallocation, resizing—so developers can focus on program logic, not logistics. RAII and the type system eliminate leaks, double frees, and dangling pointers, and improve exception safety by guaranteeing cleanup during stack unwinding.
As C++ veteran Roger Orr quipped:
“The most beautiful line of C++ code is the closing brace,”
because it signals the automatic cleanup of all resources in scope.
The takeaway is simple: default to smart pointers and containers. Use raw memory only when absolutely necessary—and almost never in high-level code.
Manual memory management still has its place in C++, especially in domains where performance, latency, or control over allocation patterns is critical. But as Roy emphasizes, developers should measure before reaching for low-level strategies:
“The first thing you should do is measure. Make sure the allocator or memory pool you already have doesn’t already do the job. If you're spending time on something, it has to pay off.”
He cites high-frequency trading as an example where even small delays can be unacceptable:
“Say you’re working in the finance domain, and you have nanosecond-level constraints because you need to buy and sell very fast—then yes, sometimes you’ll want more control over what’s going on.”
In such cases, allocation must be avoided during critical execution windows. One option is to pre-allocate memory buffers on the stack.
Modern C++ offers fine-grained control through allocator models. Roy contrasts the traditional type-based model with the polymorphic memory resources (PMR) model introduced in C++17:
“Since C++17, we’ve had the PMR (Polymorphic Memory Resource) model... a PMR vector has a member—a pointer to its allocator—instead of having it baked into the type.”
While PMR introduces a layer of indirection via virtual function calls, Roy notes that the overhead is usually negligible:
“Allocation is a costly operation anyway. So the indirection of a virtual function call isn’t much of a cost—it’s already there in the background.”
But when even that cost is too high, the traditional model may be more appropriate:
“If you're in a domain where nanoseconds matter, even that indirection might be too much. In that case, the traditional model... may be a better choice, even if you have to write more code.”
Roy’s guidance is clear: measure first, optimize only when necessary, and understand the trade-offs each model presents.
Despite decades of hard-won expertise, C++ developers still face memory safety risks—from dangling references and buffer overruns to subtle use-after-free bugs. The good news: the C++ ecosystem is evolving to tackle these risks more directly, through improved diagnostics, optional safety models, and support from both compilers and hardware.
Roy identifies dangling references as one of the most persistent and subtle sources of undefined behavior in C++:
“The main problem we still have is probably what we call dangling references... Lifetime is at the core of object and resource management in C++.”
Even modern constructs like string_view
can trigger lifetime errors, particularly when developers return references to local variables or temporaries. To address this, the ISO C++ committee has launched several initiatives focused on improving lifetime safety.
Roy highlights ongoing work by Herb Sutter and Gašper Ažman (P3656 R1) to introduce lifetime annotations and static analysis to make these bugs less likely:
“They’re trying to reduce undefined behavior and make lifetime bugs less likely.”
The C++ Core Guidelines define a Lifetime Safety Profile that flags unsafe lifetime usage patterns. It fits into a broader push toward compiler-enforced profiles—opt-in language subsets proposed by Bjarne Stroustrup that would strengthen guarantees around type safety, bounds checking, and lifetimes.
Roy also mentions a proposal of his for C++29, allowing developers to mark ownership transfer explicitly in function signatures—reinforcing ownership visibility and lifetime clarity.
Alongside profiles, contracts are expected in C++26. These language features will allow developers to specify preconditions and postconditions directly in code:
“Let you mark preconditions and postconditions in your functions... written into the code—not just as prose.”
While not limited to memory management, contracts contribute to overall safety by formalizing intent and reducing the likelihood of incorrect usage.
Alongside language improvements, developers today have access to a mature suite of static and runtime tools for detecting memory errors.
Sanitizers have become essential for modern C++ development. Tools like AddressSanitizer (ASan), MemorySanitizer (MSan), and ThreadSanitizer (TSan) instrument the compiled code to detect memory bugs during testing. Roy endorses their use—even if he doesn’t run them constantly:
“They’re awesome…I don’t use them much... I think everyone should use them once in a while... They should be part of everyone’s test process.”
He encourages developers to experiment and weigh the costs.
Roy also recommends increasing compiler warning levels to catch memory misuse early:
“If you're using Visual Studio, try /W4... Maybe not /Wall with GCC, because it's too noisy, or with Clang—but raise the warning levels a bit.”
Static analysis tools like the Clang Static Analyzer and Coverity inspect code paths without execution and flag issues such as memory leaks, double frees, and buffer overruns.
On the hardware front, ARM’s Memory Tagging Extension (MTE) offers runtime memory validation through tagged pointers. Available on ARMv9 (e.g. recent Android devices), MTE can catch use-after-free and buffer overflow bugs with minimal runtime impact.
Where MTE isn't available, lightweight runtime tools help fill the gap. Google’s GWP-ASan offers probabilistic detection of heap corruption in production, while Facebook’s CheckPointer (in Folly) builds bounds-checking into smart pointer types.
No discussion of memory management today is complete without addressing the elephant in the room: memory-safe languages. Two prominent examples are Rust and Go, which take almost opposite approaches to the same problem. “The genius of Go is that it has a garbage collector. The genius of Rust is that it doesn’t need one,” as John Arundel of Bitfield Consulting cleverly puts it.
Rust is designed from the ground up to eliminate classes of memory errors common in languages like C and C++. However, Rust’s safety comes with a learning curve, especially for developers accustomed to manually managing lifetimes. Despite this, according to JetBrains' 2025 Developer Ecosystem Survey, Rust has seen significant growth, with over 2.2 million developers using it in the past year and 709,000 considering it their primary language. While Rust's syntax can be initially challenging, many developers find that its multi-paradigm nature and strong safety guarantees make it a robust choice for complex systems development.
Researchers have also proposed refinement layers atop C2Rust that automatically reduce the use of unsafe code and improve idiomatic style. One such technique, described in a 2022 IEEE paper, uses TXL-based program transformation rules to refactor translated Rust code—achieving significantly higher safe-code ratios than raw C2Rust output.
As one developer quoted by JetBrains put it, Rust is no longer just a safer C++; it's “a general-purpose programming language” powering everything from WebAssembly to command-line tools and backend APIs. And for those coming from legacy C or C++ environments, Rust doesn't demand a full rewrite—interoperability, through FFI and modular integration, allows new Rust code to safely coexist with existing infrastructure.
Go adopts a runtime approach to memory safety, deliberately removing the need for developers to manage memory manually. The Go team’s recent cryptography audit—conducted by Trail of Bits and covering core packages like crypto/ecdh, crypto/ecdsa, and crypto/ed25519—underscored this design strength. The auditors found no exploitable memory safety issues in the default packages. Only one low-severity issue was found in the legacy Go+BoringCrypto integration, which required manual memory management via cgo and has since been deprecated. As the Go authors noted, “we naturally rely on the Go language properties to avoid memory management issues.”
By sidestepping manual allocation and pointer arithmetic, Go reduces the attack surface for critical bugs like buffer overflows and dangling pointers. While garbage collection does introduce latency trade-offs that make Go less suitable for hard real-time systems, its safety-by-default model and well-tested cryptographic APIs make it ideal for server-side development, cloud infrastructure, and security-sensitive applications where predictable correctness matters more than raw latency.
Go’s simplicity also extends to API design. The audit highlighted the team’s emphasis on clarity, safety, and minimalism: prioritizing security over performance, avoiding complex assembly where possible, and keeping code highly readable to support effective review and auditing.
The rise of memory-safe languages like Rust and Go has put C and C++ under scrutiny—especially in safety-critical domains. The U.S. White House Office of the National Cyber Director now recommends using memory-safe languages for new projects, citing their ability to prevent classes of vulnerabilities inherent in manual memory management.
But replacing C and C++ wholesale is rarely feasible. Most real-world systems will continue to mix languages, gradually modernizing existing C++ code with safer idioms and tooling.
Modern C++ is adapting. While the language remains low-level, initiatives like the Core Guidelines, contracts, and lifetime safety proposals are making it easier to write safer code.
Use unique_ptr, shared_ptr, and standard containers to make ownership explicit. Avoid raw new/delete unless absolutely necessary—and never in high-level code. Raise compiler warning levels (/W4 for MSVC, -Wall -Wextra for Clang/GCC) and treat warnings as errors to surface issues before they reach production.
If you found the insights in our editorial useful, Roy’s book, C++ Memory Management (Packt, March 2025), offers a much deeper exploration, including tips on avoiding common pitfalls and embracing C++17/20/23 features for better memory handling. Here is an excerpt from the book which explains arena-based memory management in C++, using a custom allocator for a game scenario to demonstrate how preallocating and sequentially allocating memory can reduce fragmentation and improve performance.
The idea behind arena-based memory management is to allocate a chunk of memory at a known moment in the program and manage it as a “small, personalized heap” based on a strategy that benefits from knowledge of the situation or of the problem domain.
There are many variants on this general theme, including the following:
The best way to explain how arena-based allocation works is probably to write an example program that uses it and shows both what it does and what benefits this provides. We will write code in such a way as to use the same test code with either the standard library-provided allocation functions or our own specialized implementation, depending on the presence of a macro, and, of course, we will measure the allocation and deallocation code to see whether there is a benefit to our efforts.
In this hands-on guide to mastering memory in modern C++, Roy covers techniques to write leaner and safer C++ code, from smart pointers and standard containers to custom allocators and debugging tools. He also dives into examples across real-time systems, games, and more, illustrating how to balance performance with safety.
Use code MEMORY20 for 20% off at packtpub.com
Valgrind 3.25.0 — Classic Memory Debugging, Now with Broader Platform Support
Valgrind has long been a staple for memory debugging in C and C++ applications. The latest release, version 3.25.0, brings significant enhancements:
That’s all for today. Thank you for reading the first issue of Deep Engineering. We’re just getting started, and your feedback will help shape what comes next.
Take a moment to fill out this short survey—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.
We’ll be back next week with more expert-led content.
Stay awesome,
Divya Anne Selvaraj
Editor in Chief, Deep Engineering
If your company is interested in reaching an audience of developers, software engineers, and tech decision makers, you may want to advertise with us.