Never miss a new edition of The Variable, our weekly newsletter featuring a top-notch selection of editors’ picks, deep dives, community news, and more.
The universe of large language model optimization tools continues to expand at a rapid clip. If you’re an MLOps specialist, data engineer, or a generalist looking to branch into AI applications, you need to stay abreast of the latest developments in the field — and we’re here to help.
We’ve selected some of our top articles from the past couple of weeks, focusing on cutting-edge workflows, new libraries, and more, and invite you to explore this dynamic field through the lens of our authors’ expertise.
Before we dive in, we’re thrilled to share our latest Author Spotlight Q&A, with AI entrepreneur and data scientist Claudia Ng, who discusses her career path and shares insights on the importance of picking the right “lane” for growth. Be sure to bookmark it!
Introducing Google’s LangExtract tool
A powerful new NLP and data extraction library is always a cause for celebration—and exploration. Thomas Reid walks us through the inner workings of LangExtract and shows how it can “perform RAG-like operations without the need for traditional RAG processing.”
Can LangExtract Turn Messy Clinical Notes into Structured Data?
If you’re interested in digging deeper into LangExtract’s potential use cases, Parul Pandey presents a comprehensive guide using the example of unstructured medical data.
Advanced Prompt Engineering for Data Science Projects
Data practitioners looking for ways to streamline their daily tasks shouldn’t miss Sara Nobrega’s new guide to prompt design for features, modeling, and evaluation.
Maximizing AI/ML Model Performance with PyTorch Compilation
Chaim Rand’s deep dive on torch.compile explains how it works, demonstrates its use, and discusses a few strategies for applying it effectively.
This Week’s Most-Read Stories
The articles our community has been buzzing about in recent days also revolve around LLM-focused tools and workflows. In case you missed them:
- LangGraph 101: Let’s Build A Deep Research Agent, by Shuai Guo
- LangGraph + SciPy: Building an AI That Reads Documentation and Makes Decisions, by Gustavo Santos
- How to Use LLMs for Powerful Automatic Evaluations, by Eivind Kjosbakken
Other Recommended Reads
From ChatGPT’s role in the workplace to modular arithmetic, here are a few more recent must-reads we wanted to highlight:
- Help Your Model Learn the True Signal, by Mena Wang
- Physics-Informed Neural Networks for Inverse PDE Problems, by Marco Hening Tallarico
- Water Cooler Small Talk: Should ChatGPT Be Blocked at Work?, by Maria Mouschoutzi
- Modular Arithmetic in Data Science, by Chinmay Kakatkar
- The Channel-Wise Attention | Squeeze and Excitation, by Muhammad Ardi
Meet Our New Authors
Explore top-notch work from some of our recently added contributors:
- Ahmed Belgacem, an AI research engineer, focuses on fine-grained classification, diffusion models, and edge computing.
- Youssef Farag is a seasoned machine learning engineer and data scientist based in Germany.
- Willem Esterhuizen devoted his first TDS article to an accessible tutorial on model predictive control.
We love publishing articles from new authors, so if you’ve recently written an interesting project walkthrough, tutorial, or theoretical reflection on any of our core topics, why not share it with us?