#6: Imran Ahmad on Algorithmic Thinking, Scalable Systems, and the Rise of AI Agents
How classical algorithms, system constraints, and real-world trade-offs will shape the next generation of intelligent software

Workshop: Unpack OWASP Top 10 LLMs with Snyk
Join Snyk and OWASP Leader Vandana Verma Sehgal on Tuesday, July 15 at 11:00 AM ET for a live session covering:
✓ The top LLM vulnerabilities
✓ Proven best practices for securing AI-generated code
✓ How Snyk’s AI-powered tools automate and scale secure development
See live demos, plus earn 1 CPE credit. Register today.

Hi, welcome to the sixth issue of Deep Engineering.

A recent IBM and Morning Consult survey found that 99% of enterprise developers are now exploring or developing AI agents. Some have even christened 2025 “the year of the AI agent”. We are experiencing a shift from standalone models to agentic systems.

To understand what this shift means for developers, we spoke with Imran Ahmad, data scientist at the Canadian Federal Government’s Advanced Analytics Solution Center (A2SC) and visiting professor at Carleton University. Ahmad is also the author of 50 Algorithms Every Programmer Should Know (Packt, 2023) and is currently working on his next book with us, 30 Agents Every AI Engineer Should Know, due out later this year. He has deep experience with real-time analytics frameworks, multimedia data processing, and resource allocation algorithms in cloud computing.

You can watch the full interview and read the transcript here—or keep reading for our take on the algorithmic mindset that will define the next generation of agentic software.

From Models to Agents with Imran Ahmad

According to Gartner, by 2028 90% of enterprise software engineers will use AI code assistants, up from under 14% in early 2024. But we are already moving beyond code assistants to agents: software entities that don’t just respond to prompts, but plan, reason, and act by orchestrating tools, models, and infrastructure independently.

“We have a lot of hope around AI – that it can eventually replace a human,” Ahmad says. “But if you think about how a person in a company solves a problem, they rely on a set of tools… After gathering information, they create a solution. An ‘agent’ is meant to replace that kind of human reasoning. It should be able to discover the tools in the environment around it, and have the wisdom to orchestrate a solution tailored to the problem. We’re not there yet, but that’s what we’re striving for.”

This vision aligns with where industry leaders are headed. Maryam Ashoori, Director of Product Management for IBM watsonx.ai, concurs that 2025 is “the year of the AI agent”, and the IBM and Morning Consult survey cited above found 99% of enterprise developers are now exploring or developing AI agents. Major platforms are rushing to support this paradigm: at Build 2025, for instance, Microsoft announced an Azure AI Agent Service to orchestrate multiple specialized agents as modular microservices. Such developments underscore the momentum behind agent-based architectures – which Igor Fedulov, CEO of Intersog, predicts in an article for Forbes Technology Council will be a defining software trend by the end of 2025. Ahmad expects this to be “the next generation of the algorithmic world we live in.”

What is an agent?

An AI agent is more than a single model answering questions – it is a software entity that can plan, call on various tools (search engines, databases, calculators, other models, and so on), and execute multi-step workflows to achieve a goal. “An agent is an entity that has the wisdom to work independently and autonomously,” Ahmad explains. “It can explore its environment, discover available tools, select the right ones, and create a workflow to solve a specific problem. That’s the dream agent.” Today’s implementations only scratch the surface of that ideal. Many so-called agents are essentially LLMs augmented with function-calling abilities (tool APIs) – useful, but still limited in reasoning. Ahmad emphasizes that “a large language model is not the only tool. It’s perhaps the most important one right now, but real wisdom lies outside the LLM – in the agent.” In other words, true intelligence emerges from how an agent chooses and uses an ecosystem of tools, not just from one model’s output.
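To make that idea concrete, here is a minimal sketch of an agent loop in Python. Everything in it (the Tool type, the prompt format, and the llm callable you pass in) is a hypothetical placeholder rather than any particular framework’s API; the point is simply that tool discovery, selection, and orchestration live in the agent, not in the model.

```python
# Minimal agent-loop sketch (hypothetical, framework-free; supply your own LLM client).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # takes a tool input, returns an observation

def run_agent(goal: str, tools: List[Tool], llm: Callable[[str], str], max_steps: int = 5) -> str:
    """Let the agent pick tools step by step until it declares the goal done."""
    history: List[str] = []
    catalog = "\n".join(f"- {t.name}: {t.description}" for t in tools)
    for _ in range(max_steps):
        # The model proposes the next action; the agent owns execution and state.
        decision = llm(
            f"Goal: {goal}\nAvailable tools:\n{catalog}\n"
            f"History so far: {history}\n"
            "Reply as '<tool_name>|<tool_input>' or 'DONE|<final_answer>'."
        )
        if "|" not in decision:
            history.append(f"unparseable decision: {decision!r}")
            continue
        action, payload = decision.split("|", 1)
        if action.strip() == "DONE":
            return payload.strip()
        tool = next((t for t in tools if t.name == action.strip()), None)
        if tool is None:
            history.append(f"error: unknown tool {action.strip()!r}")
            continue
        history.append(f"{tool.name}({payload.strip()}) -> {tool.run(payload.strip())}")
    return "Stopped after max_steps without a final answer."
```

Real frameworks differ in how the decision step is implemented (function-calling APIs, ReAct-style prompting, planner models), but the orchestration responsibility stays with the agent.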
The Practitioner’s Lens: Driving vs. Building the Engine

Even as new techniques emerge, software professionals must decide how deep to go into theory. Ahmad draws a line between researchers and practitioners when it comes to algorithms. The researcher may delve into proofs of optimality, complexity theory, or the invention of new algorithms. The practitioner, however, cares about applying algorithms effectively to solve real problems. Ahmad uses an analogy to explain this:

“Do you want to build a car and understand every component of the engine? Or do you just want to drive it? If you want to drive it, you need to know the essentials – how to maintain it – but not necessarily every internal detail. That’s the practitioner role.”

A senior engineer doesn’t always need to derive equations from scratch, but they do need to know the key parameters, limitations, and maintenance needs of the algorithmic “engines” they use.

Ahmad isn’t advocating ignorance of theory. In fact, he stresses that having some insight under the hood improves decision-making. “If you know a bit more about how the engine works, you can choose the right car for your needs,” he explains. Similarly, knowing an algorithm’s fundamentals (even at a high level) helps an engineer pick the right tool for a given job. For example: is your search problem better served by a Breadth-First Search (BFS) or a Depth-First Search (DFS) approach? Would a decision tree suffice, or do you need the boost in accuracy from an ensemble method? Experienced engineers approach such questions by combining intuition with algorithmic knowledge – a very practical kind of expertise. Ahmad’s advice is to focus on the level of understanding that informs real-world choices, rather than getting lost in academic detail irrelevant to your use case.

Algorithm Choices and Real-World Scalability

In the wild, data is messy and scale is enormous – which reveals which algorithms truly perform. “When algorithms are taught in universities… they’re usually applied to small, curated datasets. I call this ‘manicured pedicure data.’ But that’s not real data,” Ahmad quips.
In his career as a public-sector data scientist, Ahmad routinely deals with millions of records. He offers three key insights that should shape how engineers approach algorithm selection in production environments:

1. Performance at scale requires different choices than in theory: Ahmad gives an example from his own experience applying the Apriori algorithm, a well-known method for association rule mining. “When I used Apriori in practice, I found it doesn’t scale,” he admits. “It generates thousands of rules and then filters them after the fact. There’s a newer, better algorithm called FP-Growth (Frequent Pattern Growth) that does the filtering at the source. It only generates the rules you actually need, making it far more scalable.” A theoretically correct algorithm can become unusable when faced with big data volumes or strict latency requirements.

2. Non-functional requirements often determine success: Beyond picking the right algorithm, non-functional requirements like performance, scalability, and reliability must guide engineering decisions. “In academia, we focus on functional requirements… ‘this algorithm should detect fraud.’ And yes, the algorithm might technically work. But in practice, you also have to consider how it performs, how scalable it is, whether it can run as a cloud service, and so on.” Robust software needs algorithms that meet both their functional goals and the operational demands of deployment (throughput, memory, cost, and so on).

3. Start simple, escalate only as needed: Simpler algorithms are easier to implement, explain, and maintain – qualities that are especially valuable in domains like finance or healthcare, where interpretability matters. Discussing predictive models, Ahmad describes an iterative approach: begin with intuitive rules, upgrade to a decision tree for more structure, then, if needed, move to a more powerful model like XGBoost or an SVM. Jumping straight to a deep neural net can be overkill for a simple classification task. “It’s usually a mistake to begin with something too complex – it can be overkill, like using a forklift to lift a sheet of paper,” he says. (A minimal sketch of this escalation follows the list.)
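To illustrate that progression, here is a small sketch using scikit-learn (an assumption on our part; the synthetic dataset stands in for real labeled data). A shallow decision tree serves as the interpretable baseline, and the boosted ensemble is only worth adopting if its accuracy gain justifies the added complexity.

```python
# Start-simple baseline vs. a heavier ensemble (illustrative sketch; assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a real labeled dataset.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print(f"decision tree : {baseline.score(X_test, y_test):.3f}")
print(f"boosted trees : {ensemble.score(X_test, y_test):.3f}")
# Only escalate to the heavier model if the accuracy gain justifies the extra
# training cost, tuning effort, and loss of interpretability.
```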
Algorithmic choices, however, don’t occur in a vacuum – they influence and are influenced by software architecture. Modern systems, especially AI systems, have distinct phases (training, testing, inference) and often run in distributed cloud environments. Engineers therefore must integrate algorithmic thinking into high-level design and infrastructure decisions.

Bridging Algorithms and Architecture in Practice

Take the example of training a machine learning model versus deploying it. “During training, you need a lot of data... a lot of processing power – GPUs, ideally. It’s expensive and time-consuming,” Ahmad notes. This is where cloud architecture shines. “The cloud gives you elastic architectures – you can spin up 2,000 nodes for 2 or 10 hours, train your model, and then shut it down. The cost is manageable…and you’re done.” Cloud platforms allow an elastic burst of resources: massive parallelism for a short duration, which can turn a week-long training job into a few hours for a few hundred dollars. Ahmad highlights that this elasticity was simply not available in the on-premises era; today, any team can rent essentially unlimited compute for a day, which removes a huge barrier to building complex models. “If you want to optimize for cost and performance, you need elastic systems. Cloud computing… offers exactly that” for AI workloads, he says.

Once trained, the model often compresses down to a relatively small artifact (Ahmad jokes that the final model file is “like the tail of an elephant – tiny compared to the effort to build it”). Serving predictions might only require a lightweight runtime that can even live on a smartphone. Hardware needs therefore vary drastically between phases: heavy GPU clusters for training; perhaps a single CPU or even an embedded device for inference. Good system design accommodates these differences – for example, by separating training pipelines from inference services, or by using the cloud for training and edge devices for deployment when appropriate.

So, how does algorithm choice drive architecture? Ahmad recommends evaluating any big design decision on three axes:

- Cost
- Performance
- Time to deliver

If adopting a more sophisticated algorithm (or distributed processing framework, and so on) will greatly improve accuracy or speed and the extra cost is justified, it may be worth it. “First, ask yourself: does this problem justify the additional complexity…? Then evaluate that decision along three axes: cost, performance, and time,” he advises. “If an algorithm is more accurate, more time-efficient, and the cost increase is justified, then it’s probably the right choice.” On the flip side, if a fancy algorithm barely improves accuracy or would bust your budget or latency requirements, you might stick with a simpler approach that you can deploy more quickly. This trade-off analysis – weighing accuracy against expense and speed – is a core skill for architects in the age of AI. It prevents architecture astronautics (over-engineering) by ensuring complexity serves a real purpose.

Classical Techniques: The Unsung Heroes in AI Systems

Ahmad views classical computer science algorithms and modern AI methods as complementary components of a solution.

“Take search algorithms, for instance,” Ahmad elaborates. “When you’re preparing datasets for AI… you often have massive data lakes – structured and unstructured data all in one place. Now, say you’re training a model for fraud detection. You need to figure out which data is relevant from that massive repository. Search algorithms can help you locate the relevant features and datasets. They support the AI workflow by enabling smarter data preparation.” Before the fancy model ever sees the data, classical algorithms may be at work filtering and finding the right inputs. Similarly, Ahmad points out, classic graph algorithms might be used for link analysis or community detection that informs feature engineering. Even some “old-school” NLP (like tokenization or regex parsing) can serve as preprocessing for LLM pipelines. These building blocks ensure that the complex AI has quality material to work with.

Ahmad offers an apt metaphor:

“Maybe AI is your ‘main muscle,’ but to build a strong body – or a performant system – you need to train the supporting muscles too. Classical algorithms are part of that foundation.”

Robust systems use the best of both worlds. For example, he describes a hybrid approach to real-world data labeling. In production, you often don’t have neat labeled datasets; you have to derive labels or important features from raw data. Association rule mining algorithms like Apriori or FP-Growth (from classical data mining) can uncover patterns. These patterns might suggest how to label data or which combined features could predict an outcome. “If you feed transaction data into FP-Growth, it will find relationships – like if someone buys milk, they’re likely to buy cheese too… These are the kinds of patterns the algorithm surfaces,” Ahmad explains. Here, a classical unsupervised algorithm helps define the inputs to a modern supervised learning task – a symbiosis that improves the overall system.
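Here is a minimal sketch of that mining step, assuming the open-source mlxtend library and a toy basket dataset; in practice the transactions would come from your data lake, and the resulting rules would seed labels or engineered features for a downstream model.

```python
# FP-Growth on a toy basket dataset (sketch; assumes the mlxtend library).
import pandas as pd
from mlxtend.frequent_patterns import association_rules, fpgrowth
from mlxtend.preprocessing import TransactionEncoder

transactions = [
    ["milk", "cheese", "bread"],
    ["milk", "cheese"],
    ["milk", "bread"],
    ["cheese", "bread", "eggs"],
    ["milk", "cheese", "eggs"],
]

# One-hot encode the baskets, then mine frequent itemsets at the source --
# FP-Growth avoids generating the unneeded candidates that Apriori would.
encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit_transform(transactions), columns=encoder.columns_)
frequent = fpgrowth(onehot, min_support=0.4, use_colnames=True)

# Turn frequent itemsets into rules such as {milk} -> {cheese}.
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```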
Foundational skills like devising efficient search strategies, using dynamic programming for problems with optimal substructure, or leveraging sorting and hashing for data organization are still extremely relevant. They might operate behind the scenes of an AI pipeline or bolster the infrastructure (database indexing, cache eviction policies, and so on) that keeps your application fast and reliable. Ahmad even notes that Google’s hyperparameter tuning service, Vizier, is “based on classical heuristic algorithms” rather than any neural network magic – yet it significantly accelerates model optimization.

Optimization: The (Absolute) Necessity of Efficiency

“Math can be cruel,” Ahmad warns. “If you’re not careful, your problem might never converge… If you accidentally introduce an exponential factor in the wrong place, it might take years – or even centuries – for the solution to converge. The sun might die before your algorithm finishes!” The colorful exaggeration underscores a serious point: computational complexity can explode quickly, and engineers need to be vigilant. It’s not acceptable to shrug off inefficiencies with “just let it run longer” if the algorithmic complexity is super-polynomial. “Things can spiral out of control very quickly. That’s why optimization isn’t a luxury – it’s a necessity,” Ahmad says.

Ahmad describes three levels at which we optimize AI systems:

1. Hardware: Choosing the right compute resources can yield massive speedups. For example, training a deep learning model on a GPU or TPU rather than a CPU can be orders of magnitude faster. “For deep learning especially, using a GPU can speed up training by a factor of 1,000,” Ahmad notes from experience. Part of an engineer’s algorithmic thinking is knowing when to offload work to specialized hardware, or how to parallelize tasks across a cluster.

2. Hyperparameter tuning and algorithmic settings: Many algorithms (especially in machine learning) have knobs to turn – learning rate, tree depth, number of clusters, and so on. The wrong settings can make a huge difference in both model quality and compute time. Traditionally, tuning was an art of trial and error, but tools like Google’s Vizier (and open-source libraries for Bayesian optimization) can now automate this search efficiently. (A small tuning sketch follows the list.)

3. Ensuring the problem is set up correctly: A common mistake is diving into training without examining the data’s signal-to-noise ratio. Ahmad recommends the CRISP-DM approach – spend ample time on data understanding and preparation. “Let’s say your dataset has a lot of randomness and noise. If there’s no clear signal, then even a Nobel Prize–winning scientist won’t be able to build a good model,” he says. “So, you need to assess your data before you commit to AI.” This might involve using statistical analysis or simple algorithms to verify that patterns exist. “Use classical methods to ensure that your data even has a learnable pattern. Otherwise, you’re wasting time and resources,” Ahmad advises.
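As an illustration of the second level, the sketch below uses scikit-learn’s RandomizedSearchCV as a simple stand-in for a dedicated tuning service such as Vizier; the dataset, model, and search space are assumptions for the example.

```python
# Randomized hyperparameter search as a lightweight stand-in for a Bayesian
# tuning service (sketch; assumes scikit-learn and a synthetic dataset).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200, 400],
        "max_depth": [4, 8, 16, None],
        "min_samples_leaf": [1, 2, 5, 10],
    },
    n_iter=20,          # budget: 20 candidate settings instead of the full grid
    cv=3,
    scoring="accuracy",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```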
The cost of compute – and the opportunity cost of engineers’ time – is too high to ignore optimization. Or, as Ahmad bluntly puts it, “It’s not OK to say, ‘I’m not in a hurry, I’ll just let it run.’” Competitive teams optimize both to push performance and to keep time and cost under control, achieving results that are fast, scalable, and economically sensible.

Learning by Doing: Making Algorithms Stick

Many developers first encounter algorithms as LeetCode-style puzzles or theoretical exercises for interviews. But how can they move beyond rote knowledge to true mastery? Ahmad’s answer: practice on real problems. “Learning algorithms for interviews is a good start… it shows initiative,” he acknowledges. “But in interview prep, you’re not solving real-world problems… To truly make algorithmic knowledge stick, you need to use algorithms to solve actual problems.”

In the artificial setting of an interview question, you might code a graph traversal or a sorting function in isolation. The scope is narrow, and hints are often provided by the problem constraints. Real projects are messier and more holistic. When you set out to build something end-to-end, you quickly uncover gaps in your knowledge and gain a deeper intuition. “That’s when you’ll face real challenges, discover edge cases, and realize that you may need to know other algorithms just to get your main one working,” Ahmad says. Perhaps you’re implementing a network flow algorithm and discover you need an efficient priority queue, forcing you to learn or recall heap data structures. Or you’re training a machine learning model and hit a wall until you implement a caching strategy to handle streaming data. Solving real problems forces you to integrate multiple techniques, and it shows how classical and modern methods complement each other in context. Ahmad puts it succinctly: “There’s an entire ecosystem – an algorithmic community – that supports every solution. Classical and modern algorithms aren’t separate worlds. They complement each other, and a solid understanding of both is essential.”

So, what’s the best way to gain this hands-on experience? Ahmad recommends use-case-driven projects, especially in domains that matter to you. He suggests tapping into the wealth of public datasets now available. “Governments around the world are legal custodians of citizen data… If used responsibly, this data can change lives,” he notes. Portals like data.gov host hundreds of thousands of datasets spanning healthcare, transportation, economics, climate, and more, and similar open data repositories exist for other countries and regions. These aren’t sanitized toy datasets – they are real, messy, and meaningful. “Choose a vertical you care about, download a dataset, pick an algorithm, and try to solve a problem. That’s the best way to solidify your learning,” Ahmad advises. The key is to immerse yourself in a project where you must apply algorithms end-to-end: from data cleaning and exploratory analysis, to choosing the right model or algorithmic approach, through optimization and presenting results.
This process will teach you more than any isolated coding puzzle, and the lessons will stick because they are tied to real outcomes.

Yes, 2025 may well be “the year of the AI agent”. But as the industry shifts from standalone models to agentic systems, engineers must learn to pair classical algorithmic foundations with real-world pragmatism, because in this era true intelligence lies not only in the models but in how wisely we orchestrate them.

If Ahmad’s perspective on real-world scalability and algorithmic pragmatism resonated with you, his book 50 Algorithms Every Programmer Should Know goes deeper into the practical foundations behind today’s AI systems. The following excerpt explores how to design and optimize large-scale algorithms for production environments—covering parallelism, cloud infrastructure, and the trade-offs that shape performant systems.

🧠 Expert Insight: Large-Scale Algorithms by Imran Ahmad

The complete “Chapter 15: Large-Scale Algorithms” from the book 50 Algorithms Every Programmer Should Know by Imran Ahmad (Packt, September 2023).

Large-scale algorithms are specifically designed to tackle sizable and intricate problems. They distinguish themselves by their demand for multiple execution engines due to the sheer volume of data and processing requirements. Examples of such algorithms include Large Language Models (LLMs) like ChatGPT, which require distributed model training to manage the extensive computational demands inherent to deep learning. The resource-intensive nature of such complex algorithms highlights the need for robust, parallel processing techniques critical for training the model.

In this chapter, we will start by introducing the concept of large-scale algorithms and then proceed to discuss the efficient infrastructure required to support them. Additionally, we will explore various strategies for managing multi-resource processing. Within this chapter, we will examine the limitations of parallel processing, as outlined by Amdahl’s law, and investigate the use of Graphics Processing Units (GPUs).
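As a taste of the Amdahl’s law discussion, here is a quick back-of-the-envelope calculation (our illustration, not an excerpt from the chapter): if a training workload is 95% parallelizable, no number of processors can push the speedup past 20x, because the serial 5% always remains.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the workload and n is the number of processors.
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

for n in (8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
# Prints roughly 5.9, 15.4, 19.6 -- the serial 5% caps the achievable speedup
# near 1 / 0.05 = 20x no matter how many nodes you add.
```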
Read the Complete Chapter

50 Algorithms Every Programmer Should Know by Imran Ahmad (Packt, September 2023) is a practical guide to algorithmic problem-solving in real-world software. Now in its second edition, the book covers everything from classical data structures and graph algorithms to machine learning, deep learning, NLP, and large-scale systems. For a limited time, get the eBook for $9.99 at packtpub.com — no code required.

Get the Book

🛠️ Tool of the Week ⚒️

OSS Vizier — Production-Grade Black-Box Optimization from Google

OSS Vizier is a Python-based, open source optimization service built on top of Google Vizier—the system that powers hyperparameter tuning and experiment optimization across products like Search, Ads, and YouTube. Now available to the broader research and engineering community, OSS Vizier brings the same fault-tolerant, scalable architecture to a wide range of use cases—from ML pipelines to physical experiments.

Highlights:

- Flexible, Distributed Architecture: Supports RPC-based optimization via gRPC, allowing Python, C++, Rust, or custom clients to evaluate black-box objectives in parallel or sequentially.
- Rich Integration Ecosystem: Includes native support for PyGlove, TensorFlow Probability, and Vertex Vizier—enabling seamless connection to evolutionary search, Bayesian optimization, and cloud workflows.
- Research-Ready: Comes with standardized benchmarking APIs, a modular algorithm interface, and compatibility with AutoML tooling—ideal for evaluating and extending new optimization strategies.
- Resilient and Extensible: Fault-tolerant by design, with evaluations stored in SQL-backed datastores and support for retry logic, partial failure, and real-world constraints (e.g., human-evaluated objectives or lab settings).

Learn more about OSS Vizier

📰 Tech Briefs

- AI agents in 2025: Expectations vs. reality by Ivan Belcic and Cole Stryker, IBM Think: In 2025, AI agents are widely touted as transformative tools for work and productivity, but experts caution that while experimentation is accelerating, current capabilities remain limited, true autonomy is rare, and success depends on governance, strategy, and realistic expectations.
- Agent Mode for Gemini added to Android Studio: Google has introduced Agent Mode for Gemini in Android Studio, enabling developers to describe high-level goals that the agent can plan and execute—such as fixing build errors, adding dark mode, or generating UI from a screenshot—while allowing user oversight, feedback, and iteration, with expanded context support via the Gemini API and MCP integration.
- Google’s Agent2Agent protocol finds new home at the Linux Foundation: Google has donated its Agent2Agent (A2A) protocol—a standard for enabling interoperability between AI agents—to the Linux Foundation, aiming to foster vendor-neutral, open development of multi-agent systems, with over 100 tech partners now contributing to its extensible, secure, and scalable design.
- Azure AI Foundry Agent Service GA Introduces Multi-Agent Orchestration and Open Interoperability: Microsoft has launched the Azure AI Foundry Agent Service into general availability, offering a modular, multi-agent orchestration platform that supports open interoperability, seamless integration with Logic Apps and external tools, and robust capabilities for monitoring, governance, and cross-cloud agent collaboration—all aimed at enabling scalable, intelligent agent ecosystems across diverse enterprise use cases.
- How AI Is Redefining The Way Software Is Built In 2025 by Igor Fedulov, CEO of Intersog: AI is transforming software development by automating tasks, accelerating workflows, and enabling more intelligent, adaptive systems—driving a shift toward agent-based architectures, cloud-native applications, and advanced technologies like voice and image recognition, while requiring developers to upskill in AI, data analysis, and security to remain competitive.

That’s all for today. Thank you for reading this issue of Deep Engineering.
We’re just getting started, and your feedback will help shape what comes next.

Take a moment to fill out the short survey we run monthly—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.

We’ll be back next week with more expert-led content.

Stay awesome,
Divya Anne Selvaraj
Editor-in-Chief, Deep Engineering

Take the Survey, Get a Packt Credit!

If your company is interested in reaching an audience of developers, software engineers, and tech decision makers, you may want to advertise with us.