





















































Join Snyk and OWASP Leader Vandana Verma Sehgal on Tuesday, July 15 at 11:00AM ET for a live session covering:
✓ The top LLM vulnerabilities
✓ Proven best practices for securing AI-generated code
✓ How Snyk’s AI-powered tools automate and scale secure development
See live demos plus earn 1 CPE credit!
Hi,
Welcome to the sixth issue of Deep Engineering.
A recent IBM and Morning Consult survey found that 99% of enterprise developers are now exploring or developing AI agents. Some have even christened 2025 “the year of the AI agent”. We are experiencing a shift from standalone models to agentic systems.
To understand what this shift means for developers we spoke with Imran Ahmad, data scientist at the Canadian Federal Government’s Advanced Analytics Solution Center (A2SC) and visiting professor at Carleton University. Ahmad is also the author of 50 Algorithms Every Programmer Should Know (Packt, 2023) and is currently working on his highly anticipated next book with us, 30 Agents Every AI Engineer Should Know, due out later this year. He has deep experience working on real-time analytics frameworks, multimedia data processing, and resource allocation algorithms in cloud computing.
You can watch the full interview and read the transcript here—or keep reading for our take on the algorithmic mindset that will define the next generation of agentic software.
According to Gartner, by 2028 90% of enterprise software engineers will use AI code assistants (up from under 14% in early 2024). But we are already moving beyond code assistants to agents: software entities that don’t just respond to prompts, but plan, reason, and act by orchestrating tools, models, and infrastructure independently.
“We have a lot of hope around AI – that it can eventually replace a human,” Ahmad says. “But if you think about how a person in a company solves a problem, they rely on a set of tools… After gathering information, they create a solution. An ‘agent’ is meant to replace that kind of human reasoning. It should be able to discover the tools in the environment around it, and have the wisdom to orchestrate a solution tailored to the problem. We're not there yet, but that's what we're striving for.”
This vision aligns with where industry leaders are headed. Maryam Ashoori, Director of Product Management for IBM watsonx.ai, concurs that 2025 is “the year of the AI agent”, pointing to the same IBM and Morning Consult finding that 99% of enterprise developers are now exploring or developing AI agents. Major platforms are rushing to support this paradigm: at Build 2025, Microsoft announced an Azure AI Agent Service to orchestrate multiple specialized agents as modular microservices. Such developments underscore the momentum behind agent-based architectures – which Igor Fedulov, CEO of Intersog, predicts in an article for Forbes Technology Council will be a defining software trend by the end of 2025. Ahmad predicts this will be “the next generation of the algorithmic world we live in.”
An AI agent is more than just a single model answering questions – it’s a software entity that can plan, call on various tools (search engines, databases, calculators, other models, etc.), and execute multi-step workflows to achieve a goal. “An agent is an entity that has the wisdom to work independently and autonomously,” Ahmad explains. “It can explore its environment, discover available tools, select the right ones, and create a workflow to solve a specific problem. That’s the dream agent.” Today’s implementations only scratch the surface of that ideal. For example, many so-called agents are basically LLMs augmented with function-calling abilities (tool APIs) – useful, but still limited in reasoning. Ahmad emphasizes that “a large language model is not the only tool. It’s perhaps the most important one right now, but real wisdom lies outside the LLM – in the agent.” In other words, true intelligence emerges from how an agent chooses and uses an ecosystem of tools, not just from one model’s output.
Even as new techniques emerge, software professionals must decide how deep to go into theory. Ahmad draws a line between researchers and practitioners when it comes to algorithms. The researcher may delve into proofs of optimality, complexity theory, or inventing new algorithms. The practitioner, however, cares about applying algorithms effectively to solve real problems. Ahmad uses an analogy to explain this:
“Do you want to build a car and understand every component of the engine? Or do you just want to drive it? If you want to drive it, you need to know the essentials – how to maintain it – but not necessarily every internal detail. That’s the practitioner role.”
A senior engineer doesn’t always need to derive equations from scratch, but they do need to know the key parameters, limitations, and maintenance needs of the algorithmic “engines” they use.
Ahmad isn’t advocating ignorance of theory. In fact, he stresses that having some insight under the hood improves decision-making. “If you know a bit more about how the engine works, you can choose the right car for your needs,” he explains. Similarly, knowing an algorithm’s fundamentals (even at a high level) helps an engineer pick the right tool for a given job. For example: is your search problem better served by a Breadth-First Search (BFS) or a Depth-First Search (DFS) approach? Would a decision tree suffice, or do you need the boost in accuracy from an ensemble method? Experienced engineers approach such questions by combining intuition with algorithmic knowledge – a very practical kind of expertise. Ahmad’s advice is to focus on the level of understanding that informs real-world choices, rather than getting lost in academic detail irrelevant to your use case.
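To make the BFS-versus-DFS question concrete, here is a minimal sketch on a small, hypothetical adjacency-list graph. BFS visits nodes level by level (and finds shortest paths in unweighted graphs), while DFS dives down one branch at a time (and needs memory proportional only to depth):

```python
# A minimal sketch contrasting BFS and DFS on the same hypothetical graph.
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}

def bfs(start):
    """Visit nodes level by level using a FIFO queue."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

def dfs(start):
    """Follow one branch as deep as possible using a LIFO stack."""
    seen, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(reversed(graph[node]))
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D', 'E', 'F'] – level by level
print(dfs("A"))  # ['A', 'B', 'D', 'F', 'C', 'E'] – branch by branch
```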
In the wild, data is messy and scale is enormous – revealing which algorithms truly perform. “When algorithms are taught in universities… they’re usually applied to small, curated datasets. I call this ‘manicured pedicure data.’ But that’s not real data,” Ahmad quips. In his career as a public-sector data scientist, he routinely deals with millions of records, and that experience shapes how he believes engineers should approach algorithm selection in production environments.
However, algorithmic choices don’t occur in a vacuum – they influence and are influenced by software architecture. Modern systems, especially AI systems, have distinct phases (training, testing, inference) and often run in distributed cloud environments. Engineers therefore must integrate algorithmic thinking into high-level design and infrastructure decisions.
Take the example of training a machine learning model versus deploying it. “During training, you need a lot of data... a lot of processing power – GPUs, ideally. It’s expensive and time-consuming,” Ahmad notes. This is where cloud architecture shines. “The cloud gives you elastic architectures – you can spin up 2,000 nodes for 2 or 10 hours, train your model, and then shut it down. The cost is manageable…and you’re done.” Cloud platforms allow an elastic burst of resources: massive parallelism for a short duration, which can turn a week-long training job into a few hours for a few hundred dollars. Ahmad highlights that this elasticity was simply not available decades ago in on-prem computing. Today, any team can rent essentially unlimited compute for a day, which removes a huge barrier in building complex models. “If you want to optimize for cost and performance, you need elastic systems. Cloud computing… offers exactly that” for AI workloads, he says.
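As a back-of-envelope illustration of that elasticity (the per-node-hour price below is hypothetical; real cloud rates vary), the total bill depends on node-hours consumed, while elapsed time shrinks with the number of nodes:

```python
# Back-of-envelope: same training job, different cluster sizes.
# Prices and job size are assumed for illustration only.
TOTAL_NODE_HOURS = 2_000        # compute the training job needs
PRICE_PER_NODE_HOUR = 0.10      # hypothetical spot price in dollars

def elapsed_and_cost(nodes: int) -> tuple[float, float]:
    hours = TOTAL_NODE_HOURS / nodes                 # wall-clock time
    cost = TOTAL_NODE_HOURS * PRICE_PER_NODE_HOUR    # bill stays ~constant
    return hours, cost

for nodes in (10, 200, 2_000):
    hours, cost = elapsed_and_cost(nodes)
    print(f"{nodes:>5} nodes -> {hours:7.1f} hours, ${cost:,.0f}")
# 10 nodes take ~8 days; 2,000 nodes finish in an hour for the same ~$200.
```

In practice coordination overhead keeps scaling sublinear (Amdahl’s law, covered in the book excerpt below), but the economics of bursting still hold.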
Once trained, the model often compresses down to a relatively small artifact (Ahmad jokes that the final model file is “like the tail of an elephant – tiny compared to the effort to build it”). Serving predictions might only require a lightweight runtime that can even live on a smartphone. Thus, the hardware needs vary drastically between phases: heavy GPU clusters for training; maybe a simple CPU or even embedded device for inference. Good system design accommodates these differences – e.g., by separating training pipelines from inference services, or using cloud for training but edge devices for deployment when appropriate.
So, how does algorithm choice drive architecture? Ahmad recommends evaluating any big design decision on three axes: cost, performance, and time.
If adopting a more sophisticated algorithm (or distributed processing framework, etc.) will greatly improve accuracy or speed and the extra cost is justified, it may be worth it. “First, ask yourself: does this problem justify the additional complexity…? Then evaluate that decision along three axes: cost, performance, and time,” he advises. “If an algorithm is more accurate, more time-efficient, and the cost increase is justified, then it’s probably the right choice.” On the flip side, if a fancy algorithm barely improves accuracy or would bust your budget/latency requirements, you might stick with a simpler approach that you can deploy more quickly. This trade-off analysis – weighing accuracy vs. expense vs. speed – is a core skill for architects in the age of AI. It prevents architecture astronautics (over-engineering) by ensuring complexity serves a real purpose.
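As a sketch of that three-axis check (the candidates and thresholds below are hypothetical, and real decisions are rarely this mechanical):

```python
# Hypothetical three-axis check: adopt the more complex algorithm only
# if its accuracy gain justifies the added latency and cost.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float      # quality of results
    latency_ms: float    # performance axis
    monthly_cost: float  # cost axis

baseline = Candidate("logistic_regression", 0.91, 3, 50)
contender = Candidate("gradient_boosted_ensemble", 0.93, 12, 400)

def justified(simple: Candidate, rich: Candidate,
              min_gain=0.03, max_latency_ms=10, max_cost=300) -> bool:
    # All three axes must pass; thresholds come from your requirements.
    return (rich.accuracy - simple.accuracy >= min_gain
            and rich.latency_ms <= max_latency_ms
            and rich.monthly_cost <= max_cost)

print(justified(baseline, contender))
# False – a 2-point accuracy gain doesn't justify 4x latency and 8x cost.
```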
Ahmad views classical computer science algorithms and modern AI methods as complementary components of a solution.
“Take search algorithms, for instance,” Ahmad elaborates. “When you're preparing datasets for AI… you often have massive data lakes – structured and unstructured data all in one place. Now, say you're training a model for fraud detection. You need to figure out which data is relevant from that massive repository. Search algorithms can help you locate the relevant features and datasets. They support the AI workflow by enabling smarter data preparation.” Before the fancy model ever sees the data, classical algorithms may be at work filtering and finding the right inputs. Similarly, Ahmad points out, classic graph algorithms might be used to do link analysis or community detection that informs feature engineering. Even some “old-school” NLP (like tokenization or regex parsing) can serve as preprocessing for LLM pipelines. These building blocks ensure that the complex AI has quality material to work with.
Ahmad offers an apt metaphor:
“Maybe AI is your ‘main muscle,’ but to build a strong body – or a performant system – you need to train the supporting muscles too. Classical algorithms are part of that foundation.”
Robust systems use the best of both worlds. For example, he describes a hybrid approach in real-world data labeling. In production, you often don’t have neat labeled datasets; you have to derive labels or important features from raw data. Association rule mining algorithms like Apriori or FP-Growth (from classical data mining) can uncover patterns. These patterns might suggest how to label data or which combined features could predict an outcome. “If you feed transaction data into FP-Growth, it will find relationships – like if someone buys milk, they’re likely to buy cheese too… These are the kinds of patterns the algorithm surfaces,” Ahmad explains. Here, a classical unsupervised algorithm helps define the inputs to a modern supervised learning task – a symbiosis that improves the overall system.
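Here is a minimal sketch of that pattern-mining step, assuming the open source mlxtend library (an illustration, not a tool Ahmad prescribes):

```python
# FP-Growth on toy transaction data using mlxtend
# (assumed installed via `pip install mlxtend`).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

transactions = [
    ["milk", "cheese", "bread"],
    ["milk", "cheese"],
    ["milk", "bread"],
    ["cheese", "bread"],
    ["milk", "cheese", "butter"],
]

# One-hot encode transactions into the boolean DataFrame fpgrowth expects.
encoder = TransactionEncoder()
onehot = encoder.fit(transactions).transform(transactions)
df = pd.DataFrame(onehot, columns=encoder.columns_)

# Find itemsets that appear in at least 60% of transactions.
frequent = fpgrowth(df, min_support=0.6, use_colnames=True)
print(frequent.sort_values("support", ascending=False))
# Surfaces {milk, cheese} alongside the single items – exactly the
# milk-and-cheese relationship Ahmad describes.
```

mlxtend’s association_rules can then turn these frequent itemsets into explicit if-then rules, which in turn suggest labels or combined features for a downstream supervised model.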
Foundational skills like devising efficient search strategies, using dynamic programming for optimal substructure problems, or leveraging sorting and hashing for data organization are still extremely relevant. They might operate behind the scenes of an AI pipeline or bolster the infrastructure (e.g., database indexing, cache eviction policies, etc.) that keeps your application fast and reliable. Ahmad even notes that Google’s hyperparameter tuning service, Vizier, is “based on classical heuristic algorithms” rather than any neural network magic – yet it significantly accelerates model optimization.
“Math can be cruel,” Ahmad warns. “If you’re not careful, your problem might never converge… If you accidentally introduce an exponential factor in the wrong place, it might take years – or even centuries – for the solution to converge. The sun might die before your algorithm finishes!” This colorful exaggeration underscores a serious point: computational complexity can explode quickly, and engineers need to be vigilant. It’s not acceptable to shrug off inefficiencies with “just let it run longer” if the algorithmic complexity is super-polynomial. “Things can spiral out of control very quickly. That’s why optimization isn't a luxury – it’s a necessity,” Ahmad says.
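A quick back-of-envelope calculation shows why. At a billion operations per second, cubic growth stays tractable while exponential growth really does outlast the sun:

```python
# Time to finish n units of work at one billion operations per second,
# for polynomial versus exponential growth.
OPS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3.15e7

for n in (30, 60, 90):
    cubic = n**3 / OPS_PER_SECOND          # polynomial: negligible here
    exponential = 2**n / OPS_PER_SECOND    # super-polynomial: explodes
    print(f"n={n}: n^3 takes {cubic:.1e} s, "
          f"2^n takes {exponential / SECONDS_PER_YEAR:.1e} years")
# At n=90, 2^n needs ~3.9e10 years – longer than the sun has left.
```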
Ahmad talks about three levels at which we optimize AI systems.
The cost of compute – and the opportunity cost of engineers’ time – is too high to ignore optimization. Or as Ahmad bluntly puts it, “It’s not OK to say, ‘I’m not in a hurry, I’ll just let it run.’” Competitive teams optimize both to push performance and to control time/cost, achieving results that are fast, scalable, and economically sensible.
Many developers first encounter algorithms as leetcode-style puzzles or theoretical exercises for interviews. But how can they move beyond rote knowledge to true mastery? Ahmad’s answer: practice on real problems. “Learning algorithms for interviews is a good start… it shows initiative,” he acknowledges. “But in interview prep, you're not solving real-world problems… To truly make algorithmic knowledge stick, you need to use algorithms to solve actual problems.”
In the artificial setting of an interview question, you might code a graph traversal or a sorting function in isolation. The scope is narrow and hints are often provided by the problem constraints. Real projects are messier and more holistic. When you set out to build something end-to-end, you quickly uncover gaps in your knowledge and gain a deeper intuition. “That’s when you'll face real challenges, discover edge cases, and realize that you may need to know other algorithms just to get your main one working,” Ahmad says. Perhaps you’re implementing a network flow algorithm but discover you need a good data structure for priority queues to make it efficient, forcing you to learn or recall heap algorithms. Or you’re training a machine learning model and hit a wall until you implement a caching strategy to handle streaming data. Solving real problems forces you to integrate multiple techniques, and shows how classical and modern methods complement each other in context. Ahmad puts it succinctly: “There’s an entire ecosystem – an algorithmic community – that supports every solution. Classical and modern algorithms aren’t separate worlds. They complement each other, and a solid understanding of both is essential.”
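The network-flow example is typical: many graph algorithms quietly lean on a good priority queue. Here is a minimal Dijkstra sketch (a standard textbook illustration, not code from Ahmad’s book) in which Python’s heapq supplies that supporting data structure:

```python
# Dijkstra's shortest paths, made efficient by heapq's priority queue.
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """graph maps node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]  # the priority queue doing the heavy lifting
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(heap, (candidate, neighbor))
    return dist

# Hypothetical weighted graph.
graph = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'C': 1, 'B': 3, 'D': 4}
```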
So, what’s the best way to gain this hands-on experience? Ahmad recommends use-case-driven projects, especially in domains that matter to you. He suggests tapping into the wealth of public datasets now available. “Governments around the world are legal custodians of citizen data… If used responsibly, this data can change lives,” he notes. Portals like data.gov host hundreds of thousands of datasets spanning healthcare, transportation, economics, climate, and more. Similar open data repositories exist for other countries and regions. These aren’t sanitized toy datasets – they are real, messy, and meaningful. “Choose a vertical you care about, download a dataset, pick an algorithm, and try to solve a problem. That’s the best way to solidify your learning,” Ahmad advises. The key is to immerse yourself in a project where you must apply algorithms end-to-end: from data cleaning and exploratory analysis, to choosing the right model or algorithmic approach, through optimization and presenting results. This process will teach more than any isolated coding puzzle, and the lessons will stick because they’re tied to real outcomes.
Yes, 2025 is “the year of the AI agent”. But as the industry shifts from standalone models to agentic systems, engineers must learn to pair classical algorithmic foundations with real-world pragmatism. In this era, true intelligence lies not only in models, but in how wisely we orchestrate them.
If Ahmad’s perspective on real-world scalability and algorithmic pragmatism resonated with you, his book 50 Algorithms Every Programmer Should Know goes deeper into the practical foundations behind today’s AI systems. The following excerpt explores how to design and optimize large-scale algorithms for production environments—covering parallelism, cloud infrastructure, and the trade-offs that shape performant systems.
Large-scale algorithms are specifically designed to tackle sizable and intricate problems. They distinguish themselves by their demand for multiple execution engines due to the sheer volume of data and processing requirements. Examples of such algorithms include Large Language Models (LLMs) like ChatGPT, which require distributed model training to manage the extensive computational demands inherent to deep learning. The resource-intensive nature of such complex algorithms highlights the requirement for robust, parallel processing techniques critical for training the model.
In this chapter, we will start by introducing the concept of large-scale algorithms and then proceed to discuss the efficient infrastructure required to support them. Additionally, we will explore various strategies for managing multi-resource processing. Within this chapter, we will examine the limitations of parallel processing, as outlined by Amdahl’s law, and investigate the use of Graphics Processing Units (GPUs).
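Amdahl’s law itself fits in a few lines: if a fraction p of a job can be parallelized, then n processors can speed it up by at most 1 / ((1 - p) + p / n). A quick sketch of what that cap means in practice:

```python
# Amdahl's law: the serial fraction of a job caps the speedup that
# adding processors can ever deliver.
def amdahl_speedup(p: float, n: int) -> float:
    """Best-case speedup when fraction p parallelizes across n processors."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (10, 100, 1000):
    print(f"p=0.95, n={n:>4}: speedup = {amdahl_speedup(0.95, n):4.1f}x")
# Even with 1,000 processors, a 5% serial fraction caps speedup near 20x.
```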
50 Algorithms Every Programmer Should Know by Imran Ahmad (Packt, September 2023) is a practical guide to algorithmic problem-solving in real-world software. Now in its second edition, the book covers everything from classical data structures and graph algorithms to machine learning, deep learning, NLP, and large-scale systems.
For a limited time, get the eBook for $9.99 at packtpub.com — no code required.
OSS Vizier — Production-Grade Black-Box Optimization from Google
OSS Vizier is a Python-based, open source optimization service built on top of Google Vizier—the system that powers hyperparameter tuning and experiment optimization across products like Search, Ads, and YouTube. Now available to the broader research and engineering community, OSS Vizier brings the same fault-tolerant, scalable architecture to a wide range of use cases—from ML pipelines to physical experiments.
Highlights:
• Client–server architecture with Python clients for defining studies and requesting suggestions
• Fault-tolerant, scalable service design carried over from the internal Google Vizier system
• An extensible framework for developing and benchmarking new black-box optimization algorithms
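For a feel of the developer experience, here is a condensed sketch adapted from OSS Vizier’s getting-started example (the objective function is a toy stand-in, and exact API details may vary between releases):

```python
# Condensed from OSS Vizier's getting-started example; the objective
# is a toy stand-in and API details may vary between releases.
from vizier.service import clients
from vizier.service import pyvizier as vz

def evaluate(learning_rate: float) -> float:
    return -(learning_rate - 0.01) ** 2  # pretend validation score

# Define the search space and the metric to maximize.
study_config = vz.StudyConfig(algorithm='DEFAULT')
study_config.search_space.root.add_float_param('learning_rate', 1e-4, 1e-1)
study_config.metric_information.append(
    vz.MetricInformation('accuracy', goal=vz.ObjectiveMetricGoal.MAXIMIZE))

# The client spins up a local Vizier service implicitly.
study = clients.Study.from_study_config(
    study_config, owner='demo', study_id='lr_tuning')

for _ in range(10):
    for suggestion in study.suggest(count=1):
        score = evaluate(suggestion.parameters['learning_rate'])
        suggestion.complete(vz.Measurement({'accuracy': score}))
```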
That’s all for today. Thank you for reading this issue of Deep Engineering. We’re just getting started, and your feedback will help shape what comes next.
Take a moment to fill out this short survey we run monthly—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.
We’ll be back next week with more expert-led content.
Stay awesome,
Divya Anne Selvaraj
Editor-in-Chief, Deep Engineering
If your company is interested in reaching an audience of developers, software engineers, and tech decision makers, you may want to advertise with us.