Never miss a new edition of The Variable, our weekly newsletter featuring a top-notch selection of editors’ picks, deep dives, community news, and more.
The growing footprint of AI-powered tools comes with numerous challenges and sources of confusion. It often falls to data scientists and ML engineers to unpack these tools’ potential and limitations so other stakeholders can make informed and sustainable decisions.
If you’re not sure how to go about this process—let alone lead it—we invite you to explore this week’s highlights, which focus on the nitty-gritty details of integrating AI workflows into new contexts, from business and product teams to doctors’ practices.
Before we jump in, though, we wanted to make sure that none of our current and prospective authors miss out on an exciting update. We’ve recently introduced a more inclusive, lower-barrier earnings tier to our Author Payment Program: articles that gain 500 engaged views can now earn a minimum payout of $100. We can’t wait to read new, top-notch work from members of our community!
Enterprise AI: From Build-or-Buy to Partner-and-Grow
“If this expertise is not available internally, you need to get it from external partners or providers.” Janna Lipenkova’s excellent article outlines the different approaches companies can adopt in their quest to develop a sustainable, goal-focused AI strategy, and zooms in on finding the right partners for the projects you’re considering.
The Case for Centralized AI Model Inference Serving
Even tech-forward organizations have to grapple with rapid change and growing complexity. Chaim Rand addresses the challenge of processing large-scale inputs through algorithmic pipelines that include deep learning models.
Google’s New AI System Outperforms Physicians in Complex Diagnoses
As Luciano Abriata points out, medical ML use cases aren’t new. But recent research from Google, covered in this lucid walkthrough, suggests that a powerful new player is entering the arena of medical diagnosis.
Other Recommended Reads
If you feel like exploring a wider set of topics this week, we invite you to read these excellent articles on RAG, LLMs, the human aspects of machine learning, and more.
- If you’re just starting to explore retrieval-augmented generation, don’t miss Carolina Bento’s beginner-friendly introduction.
- For a more specialized—but equally accessible—primer, we present Shubham Gandhi’s guide to the inner workings of CatBoost.
- How will large language models shape the concepts of truth and knowledge? Marina Tosic presents a thoughtful reflection on an increasingly timely set of questions.
- From subject-matter experts and executives to end-users, David Martin maps out the different groups of people machine learning engineers have to keep in mind—and interact with.
- Looking for a more hands-on technical read? Learn how to load-test your LLM using LLMPerf by following along with Ram Vegiraju’s step-by-step tutorial.
Contribute to TDS
We love publishing articles from new authors, so if you’ve recently written an interesting project walkthrough, tutorial, or theoretical reflection on any of our core topics, why not share it with us?