Want to be ahead of the curve?
Block 3 hours of your time to learn AI tools & workflows that 99% of people don’t know yet!
🗓️ Tomorrow | ⏱️ 10 AM EST
In this training, you’ll learn how to:
✅ Master 30+ AI tools to automate work & increase efficiency
✅ Save 1000s of dollars by leveraging AI for business & personal growth
✅ Eliminate repetitive tasks & boost creativity effortlessly
✅ Use AI to analyze data, make smarter decisions, and scale faster
Hi, there!
Greetings for 2025! We’ve been off the radar for a while as we worked on re-inventing our content offerings. AI Distilled will now be run by the LLM Expert Insights team, and we promise to make it up to you with exciting offers in the coming weeks.
LLM Expert Insights Team,
Packt
A two-day AI Action Summit was held in Paris, France on February 10-11, 2025. The summit brought together governments, public and private organizations, academia, NGOs, artists, and civil society. Core themes included public interest AI, the future of work, innovation and culture, trust in AI, and global AI governance. Some of the key announcements were:
Seventy-three participants, including the 27 EU member states alongside other governments, research institutes, and government bodies, signed the statement on inclusive and sustainable AI for people and the planet. The UK and the US refrained from signing the declaration.
EU launches InvestAI initiative to mobilise €200 billion of investment in artificial intelligence
The InvestAI initiative, announced at the Paris summit, combines a pledge of EUR 150 billion from the private sector with EUR 50 billion from the public sector. The initiative will fund the computing power behind the world’s fastest public supercomputers. In her speech, EU Commission President Ursula von der Leyen vowed to cut red tape around AI while ensuring it remains safe, and to encourage global talent to collaborate through AI Gigafactories.
Launch of public interest initiatives
Current AI, an international partnership of governments, philanthropists, and industry, was officially launched at the AI Action Summit with $400 million in funding, shared Martin Tisné, CEO of AI Collaborative, in his LinkedIn post.
Robust Open Online Safety Tools (ROOST), a non-profit organization incubated at The Institute of Global Politics at Columbia University, was also launched at the summit. ROOST counts some of the biggest names in the industry as founding partners, including Google, Discord, OpenAI, Roblox, GitHub, Hugging Face, Microsoft, and Wikimedia, among others. It aims to provide open-source building blocks and safety resources to users and communities worldwide.
OpenAI will now focus on simplifying its product offerings and unifying the o-series and GPT-series models. There will be no standalone o3 release; instead, GPT-5 will be rolled out with a higher-level intelligence setting for Pro and Plus subscribers and standard intelligence for free-tier users.
Groq secures $1.5bn from Saudi Arabia to expand AI inference infrastructure in the region
Groq CEO Jonathan Ross announced, in a LinkedIn post, a $1.5 billion agreement to expand Groq’s LPU-based AI inference infrastructure. The investment will support Groq’s existing data center in Saudi Arabia and fuel development of the Arabic Large Language Model (ALLaM).
Elon Musk-Led Group Makes $97.4 Billion Bid for Control of OpenAI, SamA not interested
A group of investors led by Elon Musk has offered to buy control of OpenAI for $97.4 billion. The bid introduces a new twist in OpenAI’s future as the company restructures to transition into a for-profit entity. Backed by xAI, Baron Capital Group, Emanuel Capital Management, 8VC, Valor, Atreides, and Vy Capital, the bid is Musk’s latest attempt to make OpenAI open-source and safety-focused, as confirmed by Musk’s attorney, Marc Toberoff.
Sam Altman (SamA) took to X to express disinterest in the offer and instead made a counteroffer.
Perplexity has released an optimized version of Sonar with improved decoding throughput, which now reaches 1,200 tokens per second. In Perplexity’s experiments, graded on a scale of 1 to 100, Sonar scored 85.1 on factuality and 85.9 on readability, surpassing other frontier models. The latest version of Sonar is now the default search mode for Perplexity Pro users.
Cursor’s AI Agent Gets New Capabilities
Cursor has added new features to its agent that allow it to accomplish end-to-end development tasks while collaborating with programmers. These include understanding codebase context, automatically writing and running terminal commands (with the programmer’s permission), and detecting and fixing lint errors.
GitHub Copilot: The agent awakens
GitHub unveiled Project Padawan to introduce Copilot’s autonomous agent. In agent mode, Copilot uses a SWE agent that can suggest terminal commands, walk through its code, analyze its output, and recognize, diagnose, and fix errors.
Apart from this, GitHub also announced the GA of Copilot Edits in VS Code to help developers make inline changes to multiple files in their workspace using natural language.
Hugging Face announces AI Energy Score Ratings
To drive the adoption of energy-efficient AI, Hugging Face launched the AI Energy Score project. The project offers a standardized benchmark for the energy consumption of AI models. Developers can submit their models to be assessed under a uniform framework and receive a star rating. There is also a leaderboard that currently ranks 166 models. Go check it out.
Open R1 project introduces OpenR1-Math-220k
After launching the Open R1 project to reproduce DeepSeek-R1’s data and training pipeline, the community, in collaboration with Project Numina, announced OpenR1-Math-220k, a dataset generated by prompting DeepSeek-R1.
Anthropic analyzed anonymized Claude.ai conversations to study how AI is used in real-world tasks and its impact on the labor market. The study found that 37.2% of conversations centered on computer and mathematical domains, and that computer programmers and copywriters with mid-to-high median salaries were the heaviest AI users. The dataset and report have been open-sourced.
LumaLabsAI drops image to video model
In an X post, LumaAI announced the release of image-to-video generation using the Ray2 model. Users subscribed to LITE or PLUS plans can drop any image into the Dream-Machine and create realistic videos.
ByteDance introduces OmniHuman-1
ByteDance has released an AI framework that can generate human videos from a single image and motion signal. This diffusion-transformer-based animation framework uses multiple modalities (audio, video, and a combination of signals) to achieve realistic human video generation.
OpenAI introduces the Intelligence Age with its Super Bowl debut ad
To reach the masses, OpenAI positioned ChatGPT as the precursor to the Intelligence Age in its first-ever television ad. The ad showcased AI as a tool and brainstorming partner to “assist, aid, and enhance” human-led product vision.
Sam Altman’s views on the economics of AI
SamA noted in his blog that investing money and resources in AI will drive gains in model intelligence, and that the cost of using AI will continue to drop over time, allowing for wider adoption. He also announced the rollout of AI agents capable of replacing junior-level software engineers, potentially impacting jobs and the economy.
Meta is working on Pippo, a generative model for turnaround videos of humans using a single image
Pippo is a multi-view diffusion transformer model pre-trained on 3 billion uncaptioned human images, using both full-reference and cropped versions. It also uses head orientation, position (2D projected anchor), and target camera viewpoint as input.
The model undergoes mid-training on low-resolution images and post-training on high-resolution studio camera images of humans. The mid-training phase uses an MLP, while a ControlNet-inspired MLP is applied during post-training to create a 3D-aware multi-view model. Visit here for a visual demo.
Decoding-based Regression - Google DeepMind
Researchers at DeepMind investigated using LLMs for regression tasks by representing numeric predictions as decoded strings and predicting them auto-regressively. They experimented with both normalized and un-normalized tokenization. The proposed approach performed as well as traditional regression approaches, extends to density estimation tasks, and can capture distributions such as Gaussian and Riemann distributions.
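To make the tokenization idea concrete, here is a minimal sketch (our own illustration, not DeepMind’s code) of normalized tokenization: a target value is mapped into [0, 1), serialized as fixed-point digit tokens for the model to predict one at a time, and parsed back into a number at decode time. The function names and four-digit scheme are assumptions for the example.

```python
def encode_number(y: float, lo: float, hi: float, digits: int = 4) -> list[str]:
    """Normalized tokenization: map y in [lo, hi] to [0, 1), then emit
    its fixed-point base-10 digits as individual tokens."""
    frac = (y - lo) / (hi - lo)
    frac = min(max(frac, 0.0), 1.0 - 10 ** -digits)  # clamp into representable range
    scaled = int(round(frac * 10 ** digits))
    return list(f"{scaled:0{digits}d}")  # e.g. 0.42 -> ["4", "2", "0", "0"]

def decode_number(tokens: list[str], lo: float, hi: float) -> float:
    """Invert the tokenization: digit tokens -> fraction -> original scale."""
    digits = len(tokens)
    frac = int("".join(tokens)) / 10 ** digits
    return lo + frac * (hi - lo)

# Round trip: the decoded value matches the input up to quantization error.
tokens = encode_number(42.0, lo=0.0, hi=100.0)   # ["4", "2", "0", "0"]
approx = decode_number(tokens, lo=0.0, hi=100.0)  # 42.0
```

In the paper’s setting an autoregressive model learns the conditional distribution over these digit tokens, so sampling several decoded strings naturally yields a predictive distribution rather than a single point estimate.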
Tell us more about your content needs
We would love to hear from you! Fill out this form to tell us what you’d like to read in AI Distilled next.
📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.
If you have any comments or feedback, just reply to this email.
Thanks for reading and have a great day!