Hi,
Welcome to the third issue of Deep Engineering.
AI agents are no longer just code generators; they’re becoming active users of codebases, APIs, and developer tools. From semantic documentation protocols to agent-readable APIs, the systems we design must increasingly expose structure, context, and intent. Software now needs to serve two audiences—humans and machines.
This issue explores what that means in practice, through the lens of MoonBit—a new language built from the ground up for WebAssembly (Wasm)-native performance and AI-first tooling.
Our feature article examines how MoonBit responds to this dual-audience challenge: not with flashy syntax, but with a tightly integrated toolchain and a runtime model designed to be both fast and machine-consumable. And in a companion tutorial, MoonBit core contributor Zihang YE walks us through building a diff algorithm as a Wasm-ready CLI—an instructive example of the language’s design philosophy in action.
Sponsored:
Build the knowledge base that will enable you to collaborate with AI for years to come
💰 Competitive Pay Structure
⏰ Ultimate Flexibility
🚀 Technical Requirements (No AI Experience Needed)
Weekly payouts + remote work: The developer opportunity you've been waiting for!
The flexible tech side hustle paying up to $50/hour
The mainstream dominance of Python, JavaScript, and Rust might suggest the age of new programming languages is over. A new breed of languages, including MoonBit, proves otherwise—not by reinventing syntax, but by responding to two tectonic shifts in software development: AI-assisted workflows and the rise of Wasm-native deployment in cloud and edge environments.
In edge computing and micro-runtime environments, developers need tools that start instantly, consume minimal memory, and run predictably across platforms. MoonBit’s design responds directly to this: it produces compact Wasm binaries optimized for streaming data, making it suitable for CLI tools, embedded components, and other low-overhead tasks.
At the same time, AI workloads are exposing the limitations of dynamic languages like Python in large-scale systems. MoonBit’s founders note that Python’s “easy to learn” nature can become a double-edged sword for complex tasks. Even with optional annotations, its dynamic type system can hinder static analysis, complicating maintainability and scalability as codebases grow. In response, MoonBit introduces a statically typed, AI-aware language model with built-in tooling—formatter, package manager, VSCode integration—designed to support both human and machine agents.
Rather than replacing Python, MoonBit takes a pragmatic approach. It explicitly embraces an “ecosystem reuse” model: it uses AI-powered encapsulation to lower the barrier for cross-language calls, avoiding reinvention of existing Python tools, and it aims to “democratize” static typing by coupling a strict type system with AI code generation.
MoonBit is a toolchain-native language, designed from the start to work smoothly with modern build, editing, and AI workflows. Unlike older languages retrofitted with new tools, MoonBit bundles its compiler, package manager, IDE, language server, and even an AI assistant as a cohesive whole. As the MoonBit team puts it, they “integrate a comprehensive toolchain from the start” to provide a streamlined coding experience.
This stands in contrast to older systems languages like C/C++ and even to modern ones like Rust, which, despite its safety guarantees, still requires extra configuration to target Wasm. MoonBit by design treats Wasm as its primary compilation target – it is “Wasm-first”, built “as easy as Golang” but generating very compact Wasm output.
Similarly, MoonBit was conceived to work hand-in-hand with AI tools. It offers built-in hooks for AI code assistance (more on this below) and even considers AI protocols like Anthropic’s Model Context Protocol (MCP) as first-class integration points. In MoonBit, the language + toolchain combo is now a single product, not an afterthought.
MoonBit is not alone. Other new languages like Grain, Roc, and Hylo (formerly Val) each explore different priorities—from functional programming for the web to safe systems-level design and simplified developer experience.
Grain prioritizes JS interop and functional ergonomics; Roc favors simplicity and speed, though it’s still pre-release; and Hylo experiments with value semantics and low-level control. MoonBit and these other languages make it clear that language design is becoming inseparable from runtime, developer experience, and AI integration.
MoonBit’s architecture reflects a deliberate focus on toolchain integration and cross-platform performance. It is a statically typed, multi-paradigm language influenced by Go and Rust, supporting generics, structural interfaces, and static memory management. The compiler is designed for whole-program optimization, producing Wasm or native binaries with minimal overhead. According to benchmarks cited by the team, MoonBit compiled 626 packages in 1.06 seconds—approximately 9x faster than Rust in the same test set. Its default Wasm output is compact: a basic HTTP service compiles to ~27 KB, which compares favorably to similar Rust (~100 KB) and JavaScript (~8.7 MB) implementations. This is partly due to MoonBit’s support for Wasm GC, allowing it to omit runtime components that Rust must include.
The syntax and structure are also optimized for machine parsing. All top-level definitions require explicit types, and interface methods are defined at the top level rather than nested. This flatter structure reportedly improves LLM performance by reducing key–value cache misses during code generation. The language includes built-in support for JSON, streaming data processing via iterators, and compile-time error tracking through control-flow analysis.
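As an illustrative sketch (this is not code from the MoonBit docs; the names are invented), the flat, explicitly typed shape looks like this—every top-level definition carries its full signature, so a tool or an LLM can resolve types without inferring across the file:

```moonbit
// Hypothetical example: top-level definitions must spell out their types.
pub let default_name : String = "world"

// The signature is fully explicit, so the definition is self-describing.
pub fn greet(name : String) -> String {
  "Hello, " + name + "!"
}
```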
Tooling is tightly coupled with the language. The moon CLI handles compilation, formatting, testing, and dependency management via the Mooncakes registry. The build system, written in Rust, supports parallel, incremental builds. A dedicated LSP server (distributed via npm) integrates MoonBit with IDEs, enabling features like real-time code analysis and completions. Debugging is supported via the CLI with commands like moon run --target js --debug, which link into source-level tools.
A browser-based IDE preview is also available. It avoids containers in favor of a parallelized backend and includes an embedded AI assistant capable of generating documentation, suggesting tests, and offering inline explanations. According to the team, this setup is designed to support both developer productivity and AI agent interaction.
MoonBit’s performance profile extends beyond Wasm. A recent release introduced an LLVM backend for native compilation. In one example published by the team, MoonBit outperformed Java by up to 15x in a numeric loop benchmark. The language also supports JavaScript as a compilation target, expanding deployment options across web and server contexts.
LLMs are no longer just helping developers write code—they’re starting to read, run, and interact with it. This shift requires rethinking what it means for a language to be “usable.”
MoonBit anticipates this by treating AI systems as first-class consumers of code and tooling. Its team has adopted the MCP, an emerging open standard developed by Anthropic to enable LLMs to interface with external tools and data sources. MCP defines a JSON-RPC server architecture, allowing programs to expose structured endpoints that LLMs can query or invoke. MoonBit’s ecosystem includes a work-in-progress MCP server SDK written in MoonBit and compiled to Wasm, enabling MoonBit components to act as MCP-capable endpoints callable by models such as Claude.
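A minimal sketch of the kind of request such an endpoint receives follows. The `jsonrpc` envelope and the `tools/call` method come from the MCP specification; the tool name `diff_files` and its arguments are invented here for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "diff_files",
    "arguments": { "old": "a.txt", "new": "b.txt" }
  }
}
```

An MCP server—whether written in Python or compiled from MoonBit to Wasm—answers with a structured result the model can consume directly.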
This integration reflects a broader shift in tooling. Modern documentation tools like Mintlify now expose semantically indexed content explicitly for AI retrieval. UIs and APIs are being annotated with machine-readable metadata. Even version control is evolving: newer workflows track units of change like (prompt + schema + tests), not just line diffs, enabling intent-aware versioning usable by humans and machines alike.
MoonBit’s example agent on GitHub demonstrates this in practice, combining Wasm components (e.g. via Fermyon Spin), LLMs (such as DeepSeek), and MoonBit logic to automate development tasks. Under this model, protocols like MCP enable developers to publish AI-accessible functions directly from their codebases. MoonBit’s support for this workflow—via Wasm and first-party libraries—illustrates a growing view in language design: that AI systems are not just tools for writing code, but active consumers of it.
Three years ago, William Overton, a Senior Serverless Solutions Architect, said that Wasm "starts incredibly quickly and is incredibly light to run," making it well suited to executing code across CDNs, edge nodes, and lightweight VMs with low startup latency and near-native speed. Today, the growing adoption of Wasm is reshaping expectations for both performance and cross-platform deployment.
For MoonBit, Wasm is the default compilation target—not an optional backend. Its tooling is built around producing compact, portable Wasm modules. A simple web server in MoonBit compiles to a 27 KB Wasm binary—significantly smaller than equivalent builds in Rust or JavaScript. This reduction in size translates directly to faster load times and reduced memory usage, making MoonBit viable for constrained environments like embedded systems, CLI tools, and edge deployments.
Standardized but still-emerging features like Wasm GC—and experimental ones like the Component Model—further reinforce this model. MoonBit has adopted both: its use of interface types and Wasm GC helps minimize runtime footprint. In a published comparison, MoonBit’s Wasm output was roughly an order of magnitude smaller than that of Rust, largely due to differences in memory management.
Taken together, these developments suggest that Wasm is becoming a practical universal format for lightweight applications. For teams building portable utilities or latency-sensitive services, languages with Wasm-native support—such as MoonBit—offer tangible advantages over traditional container- or VM-based approaches.
MoonBit offers concrete lessons even if you never write MoonBit code.
To see these ideas in practice—especially MoonBit’s type system, performance model, and Wasm-native tooling—Zihang YE, one of MoonBit’s core contributors, offers a hands-on tutorial. He walks us through implementing a diff algorithm in MoonBit, building a CLI tool usable both by developers and by AI systems via the MCP.
MoonBit is an emerging programming language with a robust toolchain and a relatively low learning curve. As a modern language, MoonBit ships with a formatter, a VSCode plugin, an LSP server, a central package registry, and more. It offers the friendly features of functional programming languages with manageable complexity.
To demonstrate MoonBit’s capabilities, we’ll implement a core software development tool—a diff algorithm. Diff algorithms are essential in software development, helping identify changes between different versions of text or code. They power critical tools in version control systems, collaborative editing platforms, and code review workflows, allowing developers to track modifications efficiently. If you have ever used git diff then you are already familiar with such algorithms.
The most widely used approach is Eugene W. Myers’s diff algorithm, proposed in the paper “An O(ND) Difference Algorithm and Its Variations”. Its optimal time complexity, space-efficient implementation, and ability to find the shortest edit script have made it the standard in version control systems like Git and in text comparison tools such as Meld, ahead of alternatives like patience diff and histogram diff.
In this tutorial, we’ll implement a version of the Myers Diff algorithm in MoonBit. This hands-on project is ideal for beginners exploring MoonBit, offering insight into version control fundamentals while building a tool usable by both humans and AI through a standard API.
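Conceptually, the algorithm’s output is an edit script: a sequence of operations that transforms the old text into the new one. One way that might be modeled in MoonBit is sketched below (these definitions are illustrative, not the tutorial’s exact code):

```moonbit
// A sketch of an edit script: each operation tags one line of text.
enum Edit {
  Keep(String)   // line unchanged in both versions
  Insert(String) // line added in the new version
  Delete(String) // line removed from the old version
} derive(Show)
```

Myers’s algorithm then amounts to finding the script with the fewest Insert and Delete operations.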
We will start by developing the algorithm itself, then build a command-line application that integrates the Component Model and the MCP, leveraging MoonBit’s WebAssembly (Wasm) backend. Wasm is a fast-maturing technology that provides sandboxed isolation, portability, and near-native performance by running assembly-like code in virtual machines across platforms—qualities that MoonBit supports natively, making the language well suited to building efficient cross-platform tools.
By the end of this tutorial, you’ll have a functional diff tool that demonstrates these capabilities in action.
Let’s first create a new MoonBit project by running:
moon new --lib diff
The following is the resulting project structure. moon.mod.json contains the configuration for the project as a whole, while each moon.pkg.json contains the configuration for one package. top.mbt is the file we'll be editing throughout this post.
├── LICENSE
├── moon.mod.json
├── README.md
└── src
    ├── lib
    │   ├── hello.mbt
    │   ├── hello_test.mbt
    │   └── moon.pkg.json
    ├── moon.pkg.json
    └── top.mbt
We will compare two pieces of text, each divided into lines. Each line carries its content and a line number; the line number tracks the exact position of a change, providing important context when displaying the differences between the original and modified files.
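One plausible shape for that representation in MoonBit is sketched below. This is a sketch under the description above, not the tutorial's exact code; the field and function names, and the use of String::split, are our assumptions:

```moonbit
// A line of text paired with its 1-based position in the file.
struct Line {
  number : Int
  text : String
} derive(Show, Eq)

// Split an input string into numbered lines.
fn lines(input : String) -> Array[Line] {
  let result : Array[Line] = []
  let mut number = 0
  for line in input.split("\n") {
    number = number + 1
    result.push(Line::{ number, text: line.to_string() })
  }
  result
}
```

Keeping the number alongside the text means the diff output can later report where each change occurred without re-scanning either file.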
MCP Python SDK 1.9.2 — Structured Interfaces for AI-Native Applications
The MCP is a standard for exposing structured data, tools, and prompts to language models. The MCP Python SDK brings this to production-ready Python environments, with a lightweight, FastAPI-compatible server model and first-class support for LLM interaction patterns. The latest release, v1.9.2 (May 2025), introduces:
- mcp install

Ideal for teams designing LLM-facing APIs, building AI-autonomous agents, or integrating prompt-based tools directly into Python services. It’s the protocol MoonBit already supports—and the interface LLMs increasingly expect.
That’s all for today. Thank you for reading this issue of Deep Engineering. We’re just getting started, and your feedback will help shape what comes next.
Take a moment to fill out this short survey—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.
We’ll be back next week with more expert-led content.
Stay awesome,
Divya Anne Selvaraj
Editor-in-Chief, Deep Engineering
If your company is interested in reaching an audience of developers, software engineers, and tech decision makers, you may want to advertise with us.