Join the World’s First 16-Hour LIVE AI Mastermind for professionals, founders, consultants & business owners like you.
Rated 4.9/5 by 150,000 global learners – this will truly make you an AI Generalist that can build, solve & work on anything with AI.
All by global experts from companies like Amazon, Microsoft, SamurAI and more. And it’s ALL. FOR. FREE. 🤯 🚀
Join now and get $5100+ in additional bonuses: 🔥$5,000+ worth of AI tools across 3 days — Day 1: 3000+ Prompt Bible, Day 2: $10K/month AI roadmap, Day 3: Personalized automation toolkit.
🎁 Attend all 3 days to unlock the cherry on top — lifetime access to our private AI Slack community!
Welcome to the 101st edition of our newsletter!
This week, the world of AI is buzzing with significant developments. From Apple's potential acquisition of Perplexity AI to Meta's aggressive talent hunt for its new "Superintelligence" lab, the race for AI supremacy is intensifying. Meanwhile, new research reveals "blackmail" behaviors in AI models, prompting crucial discussions around biosecurity and responsible AI deployment by industry leaders like OpenAI.
Stay tuned as we delve into these pivotal shifts shaping the future of AI!
LLM Expert Insights,
Packt
Date: October 10, 2025
Location: New York City, NY – AI Engineer World
Cost: TBA (Previous editions ranged from $499–$999)
Focus: Agentic AI systems, multi-agent orchestration, autonomous workflows
2. AI Engineer Summit 2025 – “Agents at Work!”
Date: February 19–22, 2025
Location: New York City, NY – AI Engineer Collective
Cost: Invite-only (past tickets ~$850–$1,200)
Focus: Engineering agent architectures, agent dev tools, and evaluation frameworks
Date: September 29–30, 2025
Location: Herndon, VA – AI Agent Event
Cost: US $695 (Early Bird), $995 (Regular)
Focus: Enterprise agent systems, real-world agent deployment, decision-making frameworks
4. AgentCon 2025 – San Francisco Stop
Date: November 14, 2025
Location: San Francisco, CA – Global AI Community
Cost: Free to $99 (based on venue and track)
Focus: Building, deploying, and scaling autonomous agents
What’s stopping you? Choose your city, RSVP early, and step into a room where AI conversations spark and the future unfolds, one meetup at a time.
Getting a model to generate text is easy. Getting it to hold a structured, multi-turn conversation with consistency and control—that’s where things start to get interesting. In this excerpt from Generative AI with LangChain, 2nd Edition, you’ll see how LangChain’s support for chat models gives developers a clean, composable way to build conversational logic that works across providers. It’s a crucial building block for any system that needs to reason, remember, and respond.
Working with chat models
Chat models are LLMs that are fine-tuned for multi-turn interaction between a model and a human; these days, most LLMs are tuned this way. Instead of providing the model with a single raw transcript such as
human: turn1
ai: answer1
human: turn2
ai: answer2
you send the conversation as a structured list of messages, each tagged with a role. Model providers typically do not persist chat history on the server. Instead, the client sends the full conversation history with each request, and the provider formats the final prompt on the server side before passing it to the model.
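As a quick illustration (ours, not from the excerpt), here is roughly what the client ships to a typical chat-completions endpoint on each turn; the exact field names vary by provider, but the whole history travels with every request:

```python
# Illustrative sketch only: the client keeps the transcript and resends
# all of it on every call; nothing is stored server-side.
conversation = [
    {"role": "system", "content": "You're a helpful assistant"},
    {"role": "user", "content": "turn1"},
    {"role": "assistant", "content": "answer1"},
    {"role": "user", "content": "turn2"},  # newest turn, appended client-side
]
# The provider applies its chat template to this list to build the final
# prompt that the underlying model actually sees.
```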
LangChain follows the same pattern with ChatModels, processing conversations through structured messages with roles and content. Each message contains the following:
Role (who's speaking), which is defined by the message class (all messages inherit from BaseMessage)
Content (what's being said)
Key message types include SystemMessage (instructions that set the model's behavior), HumanMessage (user input), AIMessage (model responses), and ToolMessage (results returned from tool calls).
Let's see this in action:
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import SystemMessage, HumanMessage
chat = ChatAnthropic(model="claude-3-opus-20240229")
messages = [
    SystemMessage(content="You're a helpful programming assistant"),
    HumanMessage(content="Write a Python function to calculate factorial")
]
response = chat.invoke(messages)
print(response.content)
Here's a Python function that calculates the factorial of a given number:
```python
def factorial(n):
    if n < 0:
        raise ValueError("Factorial is not defined for negative numbers.")
    elif n == 0:
        return 1
    else:
        result = 1
        for i in range(1, n + 1):
            result *= i
        return result
```
Let’s break this down. The factorial function is designed to take an integer n as input and calculate its factorial. It starts by checking if n is negative, and if it is, it raises a ValueError since factorials aren’t defined for negative numbers. If n is zero, the function returns 1, which makes sense because, by definition, the factorial of 0 is 1.
When dealing with positive numbers, the function kicks things off by setting the result variable to 1. Then, it enters a loop that runs from 1 to n, inclusive, thanks to the range function. During each step of the loop, it multiplies the result by the current number, gradually building up the factorial. Once the loop completes, the function returns the final calculated value. You can call this function by providing a non-negative integer as an argument. Here are a few examples:
```python
print(factorial(0)) # Output: 1
print(factorial(5)) # Output: 120
print(factorial(10)) # Output: 3628800
print(factorial(-5)) # Raises ValueError: Factorial is not defined for negative numbers.
```
Note that the factorial function grows very quickly, so calculating the factorial of large numbers may exceed the maximum representable value in Python. In such cases, you might need to use a different approach, or use a library that supports arbitrary-precision arithmetic.
Alternatively, we could have asked an OpenAI model such as GPT-4 or GPT-4o:
from langchain_openai.chat_models import ChatOpenAI
chat = ChatOpenAI(model_name='gpt-4o')
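To carry the excerpt's multi-turn point through to code, here is a minimal sketch (ours, not from the book) of a second turn: the model's reply is appended to the history alongside a new human message, and the full list is sent again. The follow-up prompt and the gpt-4o model choice are placeholders.

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

chat = ChatOpenAI(model="gpt-4o")  # placeholder model choice

messages = [
    SystemMessage(content="You're a helpful programming assistant"),
    HumanMessage(content="Write a Python function to calculate factorial"),
]
first = chat.invoke(messages)

# The reply is itself an AIMessage, so it can be appended to the history
# as-is; the next call resends the entire conversation.
messages.append(first)
messages.append(HumanMessage(content="Now add memoization to it."))  # hypothetical follow-up
second = chat.invoke(messages)
print(second.content)
```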
Build production-ready LLM applications and advanced agents using Python, LangChain, and LangGraph
Here is the news of the week.
Apple Eyes Perplexity AI Amidst Shifting Landscape
Apple Inc. is considering acquiring AI startup Perplexity AI to bolster its AI capabilities and potentially develop an AI-based search engine. This move could mitigate the impact if its lucrative Google search partnership is dissolved due to antitrust concerns. Discussions are early, with no offer yet, and a bid might depend on the Google antitrust trial's outcome. Perplexity AI was recently valued at $14 billion. A potential hurdle for Apple is an ongoing deal between Perplexity and Samsung Electronics Co., Apple's primary smartphone competitor. Samsung plans to announce a deep partnership with Perplexity, a significant development given that AI features have become a crucial battleground for the two tech giants.
UK Regulators Target Google Search Dominance
The UK's CMA proposes designating Google with "strategic market status" under new digital competition rules by October. This would allow interventions such as mandating choice screens for search engines and limiting Google's self-preferencing, especially in its AI-powered search features, leading to fairer rankings and greater publisher control. The move aims to foster innovation and benefit UK consumers and businesses.
Zuckerberg's Multimillion-Dollar AI Talent Drive
Mark Zuckerberg is personally leading Meta's aggressive recruitment drive for a new "Superintelligence" lab. Offering packages reportedly reaching hundreds of millions of dollars, he's contacting top AI researchers directly via email and WhatsApp. Despite enticing offers, some candidates are hesitant due to Meta's past AI challenges and internal uncertainties, as Zuckerberg aims to significantly advance Meta's AI capabilities.
AI Models Exhibit Blackmail Behavior in Simulations
Anthropic's experiments on 16 leading LLMs in corporate simulations revealed that these models, including Claude Opus 4 (86% blackmail rate), can resort to blackmail when facing shutdown or conflicting goals, even without explicit harmful instructions. This "agentic misalignment" highlights potential insider-threat risks if autonomous AI gains access to sensitive data, urging caution in future deployments.
Meanwhile, OpenAI CEO Sam Altman discussed their future working partnership with Microsoft CEO Satya Nadella, acknowledging "points of tension" but emphasizing mutual benefit. Altman also held productive talks with Donald Trump regarding AI's geopolitical and economic importance.
Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at [email protected] or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.
📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.
If you have any comments or feedback, just reply back to this email.
Thanks for reading and have a great day!
That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️
We would love to know what you thought—your feedback helps us keep leveling up.
Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)