Are AI Tools Hurting Developer Productivity?
https://mlconference.ai/blog/ai-developer-productivity-tools/ – ML Conference, Wed, 30 Jul 2025

A recent study [1] suggests that developers may become less productive when using AI tools. We've asked our experts to weigh in: Is this a temporary setback, a methodological flaw, or a sign of things to come?

The post Are AI Tools Hurting Developer Productivity? appeared first on ML Conference.


Sebastian Springer:

Lately, there have been several studies highlighting the negative aspects of AI: AI makes us less productive, less creative… I believe it really depends on how we use the tools. The same could be said about search engines or platforms like Stack Overflow. If I relied on such channels for every aspect of my work, I'd be less productive as well. With modern AI tools, the risk is naturally greater, since they're much more integrated into our work environments and far more intuitive to use.

On the topic of productivity: personally, I feel more productive thanks to Copilot and similar tools. That's mainly because I use them to solve repetitive tasks. There are situations where writing a good prompt takes significantly longer than writing the code myself. And of course, working with AI tools comes with the risk of being distracted from the actual problem or heading in the wrong direction. In other cases, the suggestions the AI offers – without any manual prompt – are exactly what I need.

In general, I think: Whether AI makes us unproductive, uncreative, or even dumb – it’s a technology that’s established itself in the market, and one we simply can’t ignore. So, we should focus on leveraging its strengths. And if we already know it has downsides (as almost every technology does), we should try to avoid those pitfalls as much as possible. Besides, AI is in good company: People once claimed that steam engines would never be economical, newspapers would overwhelm us mentally, and written information in general was dangerous – let alone the internet, which supposedly makes people stupid and causes crime to skyrocket. There’s always a grain of truth in every accusation, but in the end, it all comes down to how we deal with it.


Paul Dubs:

Based on my experience with AI tooling for development, which I discussed in a keynote at the JAX conference in May, the impact on productivity is highly dependent on how these tools are used and the developer’s experience level with them. The study actually supports what I’ve observed: there’s a significant learning curve with AI development tools. The one developer in that study who had substantial prior experience with Cursor was notably faster, an anomaly that proves the point. Like any tool, you need to know how to use it effectively to see productivity gains.

During my keynote, I described using agentic AI coding tools as “playing chess with a pigeon”: they would destroy the game and claim victory. Claude Code struggled to navigate projects properly and would even sabotage its own progress by resetting the Git state. The Claude 3.5 / 3.7 models used in the study weren’t well-suited for larger changes or project navigation. However, things changed dramatically with Claude 4’s release at the end of May. Even the smaller, faster Sonnet model became quite capable when used correctly. I now use Roo Code, a Visual Studio Code plugin that allows me to create specialized prompts for different tasks: debugging, programming, documentation, and language-specific work. This customization has made me considerably more productive.

The productivity gains aren’t uniform across all project types. I’m much more productive on greenfield (new) projects. For brownfield (existing) projects requiring major changes, I need to provide extensive additional context, often directly referencing the specific files the AI needs to work with. When I handle the navigation burden myself, the AI can be quite effective. There’s an important caveat: using AI tools creates a knowledge and memory gap. Since I’m not writing every line myself, it feels like delegating to someone else and doing a quick review. When I return to AI-generated code later, I need to reread it because I don’t fully remember the implementation details. It’s similar to working on a project where multiple developers touch every piece: you lose that intimate familiarity with the codebase.

The study’s findings align with my experience: developers unfamiliar with AI tools often see productivity losses, while those with significant experience can achieve net gains. The outlier in the study who was more productive validates this. Success with AI coding tools requires understanding their limitations, using them appropriately for the task at hand, and accepting the trade-off between speed and deep code familiarity.


Christoph Henkelmann:

The issue with AI-assisted coding is the same as with many current AI debates: it’s dominated by hype and quick dismissals, rather than a nuanced understanding. Yes, AI tools can deliver massive productivity gains – but only if you actually learn how to use them. This means understanding the basics of LLMs, knowing your domain, and practicing with the tools until you develop a sense for when they help and when they don’t. Most people just install something like Cursor and expect miracles. Naturally, this leads to disappointment. “Vibe coding” might get you a prototype, but real productivity comes from what Paul Dubs calls “omega coding”: deep domain knowledge, familiarity with your tools, and persistent practice. These tools don’t replace thinking; they amplify skill. Managers hoping for instant results will see the opposite at first: initial productivity drops, much like switching to a new IDE. But if you invest the time to learn and adapt, the gains are real and substantial. Most don’t (or rather: aren’t given the time to do so), which is why recent studies show lackluster results.


Melanie Bauer:

As an informatics student, I spend a lot of time researching and learning about new tools and topics, especially in the field of software development. AI tools have made this process significantly easier and faster for me. For example, when I have a question, I can get direct and precise answers without having to scroll through extensive documentation.

That’s why tools like GitHub Copilot, Cursor, and ChatGPT have become a regular part of my workflow as a future software developer. Of course, at the end of the day, AI doesn’t think for me, and I am still responsible for reviewing and validating the generated output. But overall, I’ve noticed a clear increase in my productivity, especially when it comes to routine tasks, reducing the ramp-up time when learning new technologies, or understanding code snippets and programming concepts by having them broken down and explained step by step.


Rainer Hahnekamp:

Based on my experience, the use of AI in software development can be divided into three levels:

  1. Code Completion in the IDE: Here, AI offers valuable support by suggesting small code snippets that boost productivity without taking control away from the developer.
  2. Automated Code Generation: In this area – where the AI generates larger code blocks or even entire files – I’ve found that the time required to correct and adapt the output often outweighs the immediate benefit. Still, I see this as an investment in learning how to work with AI effectively. While it may currently slow things down, I’m confident that the technology will improve – and when it does, I want to be ready to make the most of it.
  3. AI-Supported Research and Conceptual Work: Using AI as a sparring partner for brainstorming, idea generation, and problem-solving has proven extremely helpful. It supports creativity and often leads to productive insights.

Personally, I can’t confirm a loss in productivity – quite the opposite. While I haven’t read the details of the referenced study, I suspect the reasons might be due to the current lack of best practices and the necessary intuition for using AI effectively. And, of course, to be transparent, this statement reflects my personal opinion, but the wording was created with the assistance of AI 😉.


Pieter Buteneers:

I use the following AI tools:

  • Cursor – my go-to tool; the agents are a big step forward.
  • ChatGPT (GPT-4.1) – when Cursor makes mistakes.
  • Claude 4 Sonnet – when neither of the above knows the answer, which happens once every two to three weeks or so.

In terms of advantages: I started using TypeScript (TS) instead of Python, and the AI really helps me understand the syntax and convert Python code into TS much faster. It makes fewer errors in TS than in Python, lets me write more code, writes unit tests for me, and lets me adopt new packages and technologies much faster. It also helps me with the DevOps side of things, where I am a real noob. Overall, it makes me about two times faster.

I use it a lot to brainstorm ideas and figure out best and bad practices, but it comes with a long list of caveats. There is a lot of code duplication, since it doesn't know your entire codebase, so your code becomes hard to maintain and turns into 'spaghetti' fast. Cursor often fixes a bug by just writing some code to cover an edge case; it doesn't always dig into the underlying problem, so you think you have a fix when it's really just an ugly patch over code you don't understand. Ultimately, you still need a senior dev to tell you what good coding practices are. I spend more time debugging than writing code, so tests are even more important.


Tam Hanna:

At Tamoggemon Holdings, we currently use AI systems mainly for menial tasks. Using them to write stock correspondence (think cover letters, etc.) has proven to be a significant performance booster, allowing us to refocus on more productive tasks. As for line work (EE or SW), we – so far – have not seen the systems as a valid replacement for classic manual work.


Rainer Stropek:

After gaining extensive experience with modern AI tools, I can't imagine my daily work as a developer without their support. My productivity has noticeably increased because I have consistently aligned my entire workflow around collaboration with AI. This goes far beyond classic code completion: autocomplete suggestions are convenient but often too generic, and they sometimes break my flow. Chat agents, by default, start every conversation from scratch. To work efficiently with them, one must formulate complete, consistent requirements in the prompt, provide prompt context, document the architecture, establish coding guidelines, and supply meaningful test data with expected results. This level of diligence would be advisable anyway; working with AI makes it essential.

Spec-Driven Development instead of Vibe Coding
Many developers underestimate prompting and context management. A few buzzwords are only enough as long as the goal remains vague. As soon as I face concrete customer requirements, I rely on Spec-Driven Development:

  1. I invest significant time in detailing the requirements.
  2. The AI questions and discusses the specification with me.
  3. Only once a sufficient level of maturity is reached do I let the AI implement the solution and review the result.
    It’s crucial to create clarity before I let the AI write code.
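As a rough illustration, the self-contained work package produced by these steps might be assembled like this in Python. The helper and the example spec are hypothetical, not Stropek's actual tooling:

```python
# A minimal sketch of a spec-driven prompt, assembled before any code
# generation is requested. All field names are illustrative.

def build_spec_prompt(requirements, architecture, guidelines, test_cases):
    """Combine a reviewed specification into one self-contained prompt."""
    tests = "\n".join(
        f"- input: {t['input']!r} -> expected: {t['expected']!r}"
        for t in test_cases
    )
    return (
        "Implement the following work package.\n\n"
        f"## Requirements\n{requirements}\n\n"
        f"## Architecture constraints\n{architecture}\n\n"
        f"## Coding guidelines\n{guidelines}\n\n"
        f"## Test data with expected results\n{tests}\n"
    )

prompt = build_spec_prompt(
    requirements="Parse ISO-8601 dates from CSV exports.",
    architecture="Pure function, no I/O; lives in the parsing module.",
    guidelines="Type hints, no external dependencies.",
    test_cases=[{"input": "2025-07-30", "expected": "valid"}],
)
print(prompt)
```

The point is not the string formatting but the discipline: requirements, constraints, guidelines, and expected results are all decided and reviewed before the AI is asked to write code.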

From Coder to AI Orchestrator
My role is shifting. Instead of primarily writing code, I define work packages that I delegate to AI agents. This is similar to delegating to human team members. I see my future in the role of a product developer with a strong focus on requirements engineering and software architecture – structuring complex requirements in a way that makes them executable by AIs.

Limitations of Today’s AI Systems (especially in large projects)
Despite larger context windows and advanced retrieval (e.g., using MCP servers or function tools integrated in IDEs), AI still lacks a holistic overview of large projects. Humans remain responsible for slicing and documenting tasks so they can be worked on without requiring knowledge of the entire project. If this is done successfully, project size becomes almost irrelevant to the use of AI.

What Companies Need to Do Now
The tool landscape is evolving in months, not years. Instead of committing to a single tool long-term, companies should:

  • Allocate budgets and create space for teams to experiment with various AI tools.
  • Deploy pilot groups to quickly gain hands-on experience.
  • Embrace usage-based pricing models and make their cost-benefit ratio transparent.

From my perspective, those who don’t start building practical experience now risk losing competitiveness. AI is no longer just a nice-to-have add-on – it is fundamentally changing the way we develop software. Those who ignore these new ways of working risk losing productivity and, in the medium term, competitiveness. Now is the time to sharpen specifications, rethink roles, and encourage experimental team setups.


Christian Weyer:

“Never trust a study you didn’t fake yourself” 😉.
Just kidding, of course.
But seriously: At Thinktecture, we’ve seen an unprecedented productivity boost across the team. Personally, I feel significantly more creative – which directly translates into being faster and producing better results.

The key? I don’t let AI tools disrupt my natural flow. Instead, I deliberately configure them to fit my individual thinking and working style. Tools like GitHub Copilot, Windsurf, Cursor, or Cline all offer great ways to customize the experience with your own guardrails.

Maybe many developers don’t yet fully leverage these configuration options – or don’t even know they exist. Used right, these tools amplify productivity instead of hindering it.


Veikko Krypczyk:

In my experience, artificial intelligence can be meaningfully applied throughout all phases of the software development process – from early ideation and UI design to architectural decisions and the implementation of complex algorithms. AI is by no means flawless, but it acts as a virtual work partner that can complete many tasks faster, more diversely, and sometimes even more creatively than would be possible alone.

The actual productivity gain strongly depends on two key factors: the quality of the prompts and the critical evaluation of the generated content. Those who can formulate clearly and have solid domain knowledge will greatly benefit from AI tools – whether it’s generating boilerplate code, writing test cases, supporting refactoring, or systematically exploring technical options.

Of course, AI outputs should never be accepted without reflection. It remains essential for developers to understand, question, and, if necessary, improve the generated suggestions. Domain expertise is not replaced by AI – quite the opposite: it becomes even more crucial to ensure the quality of the outcomes.

My conclusion: when used properly, AI enhances efficiency and broadens perspectives – both individually and in team processes. I find working with AI tools inspiring, more efficient, and often more focused, as they help offload routine work and spark creative thinking. I only experience a loss in productivity when AI is treated as an autopilot rather than as a co-pilot.


Links & Literature

[1] https://arxiv.org/abs/2507.09089


Top 10 FAQs About AI Coding Tools & Developer Productivity

 

1. Do AI coding tools like GitHub Copilot and Cursor really improve developer productivity?
Yes, when used correctly. Many developers see faster coding and fewer repetitive tasks with tools like GitHub Copilot, Cursor, and Claude. However, beginners may initially experience slower workflows while learning to use them effectively.

2. Why do some developers become less productive with AI development tools?
A lack of training and experience with AI-powered coding assistants can cause slower progress at first. Without understanding prompt writing, debugging AI output, or configuring tools properly, productivity can drop.

3. What is the learning curve for GitHub Copilot, Cursor, and similar AI coding assistants?
Most developers need time to master AI-assisted development. Success comes from learning prompt engineering, adapting workflows, and knowing when to trust AI suggestions versus manual coding.

4. Can AI coding assistants replace human software developers?
No. AI tools can speed up tasks like code completion, boilerplate generation, and prototyping, but human expertise is essential for architecture design, problem-solving, and ensuring high-quality code.

5. How can developers get the most out of AI coding tools?
Use AI tools for repetitive coding, quick prototypes, and brainstorming. Always review AI-generated code, write clear prompts, and combine AI with strong coding fundamentals for the best results.

6. What are common problems with AI-generated code?
Developers often face duplicated code, messy “spaghetti code,” shallow bug fixes, and the need for extra debugging. Writing unit tests and applying good coding practices remains essential.

7. What is ‘spec-driven development’ and how does it help AI-assisted coding?
Spec-driven development involves writing detailed software specifications before using AI tools. This approach helps ensure that AI-generated code matches the project’s goals and reduces wasted time on rework.

8. What are the best AI coding tools for developers in 2025?
Popular options include GitHub Copilot, Cursor, Claude 4 Sonnet, Roo Code, ChatGPT (GPT-4.1), Windsurf, and Cline. Many developers use a combination of these for different coding tasks.

9. How do AI coding assistants perform in greenfield vs. brownfield projects?
AI assistants tend to be more effective in greenfield (brand-new) projects, where they can help build from scratch. Brownfield (existing) projects often require more manual guidance and context-setting.

10. How should companies prepare before rolling out AI-powered coding tools?
Run pilot programs, give developers time to experiment, avoid locking into one tool too soon, and provide training on prompt engineering and AI best practices for software development.

RAG & MCP for ML Devs: Practical AI Implementation
https://mlconference.ai/blog/generative-ai-large-language-models/rag-mcp-for-ml-devs-practical-ai-implementation/ – Mon, 07 Jul 2025

This article, featuring insights from Robert Glaser at JAX Mainz, dives into practical AI implementation strategies. We explore why Retrieval-Augmented Generation (RAG) is a foundational technique and how the emerging Model Context Protocol (MCP) is poised to revolutionize AI agent development.

The post RAG & MCP for ML Devs: Practical AI Implementation appeared first on ML Conference.

Join us at MLCon New York to dive even deeper into these topics, hands-on.

Business & Culture:

  • https://mlconference.ai/machine-learning-business-strategy/ai-native-software-organization-technical-operational-practices/
  • https://mlconference.ai/generative-ai-content/gen-ai-operational-metrics/
  • https://mlconference.ai/machine-learning-business-strategy/operationalizing-ai-leadership-sprints-workshop/

GenAI & LLM:

  • https://mlconference.ai/generative-ai-content/building-agentic-rag-pipeline/
  • https://mlconference.ai/generative-ai-content/agentic-ai-workshop-deep-research-llms/

 

devmio: Hello everyone, we’re live from JAX in Mainz, and I am here with Robert Glaser from INNOQ. Hello Robert.

Robert Glaser: Hi Hartmut, I’m glad I could make it.

devmio: Great. You're here at JAX talking about AI – a controversial topic for some. Some love it, some are a little put off by it. Of course, there are benefits to it. Where do you see the benefits of AI?

Robert Glaser: I get that question a lot, and I usually disappoint people by saying, “I can’t give you a list of use cases in an hour.” Because use cases are a dime a dozen, since we’re dealing with a new kind of technology. Think of electricity, steam engines, or the internet. People didn’t immediately know, “OK, here are 100 use cases we can do with this and not with that.”

Everyone in every department, in every company, has to find out how well or how poorly it fits the current state of their technology. There are formats and methods for finding use cases. Then you evaluate the potential outcome and look at the big levers: what is likely to bring the most benefit when implemented? That's how I would approach it. Whenever someone promises to name AI use cases for your business in a 30-minute conversation, I would be cautious.

devmio: Now you have a use case: a RAG system – Retrieval-Augmented Generation. Can you explain how RAG works? Maybe first, what exactly is it? And then, of course, the question for everyone out there: is this a use case that is exciting for you? What are the benefits of a RAG system?

Robert Glaser: Yes, I can explain briefly. RAG is not a new concept at all. It comes from an academic paper from 2020. At that time, people weren't yet talking about foundation models or large language models the way we are today.

RAG comes from the domain of Natural Language Processing, and that's also how it's used today: how do I get context into my AI model? Because the AI model only becomes interesting when I connect it to my company data – in Confluence, in tables or databases, and so on. Then I can use this technology in conjunction with my data. And RAG is an architecture, if you want to call it that, which allows me to do just that.

That's changing right now, too. But the basic principle I show in my presentation is that a large part of the RAG architecture is information retrieval – something all of us who work in tech have known for many decades. A search, for example, is an information retrieval system.

This is often at the center of RAG and is the most interesting point, because the classic RAG approach is nothing more than pasting search results into my prompt. That’s all there is to it, it’s also something I mention in my talk. Well, we can end the presentation here. There’s nothing more to it. You have your prompt, which says, for example: Answer this question, solve this problem using the following data.

The crux of the matter is that the data that comes in is actually relevant. That's what RAG deals with. For the most part, it's a search problem: how do I cut my body of data into pieces that fit? I can't always just shovel everything in there. Models like these have a context window; they can't digest endless amounts of data.

It's like with an intern: if I pile the desk full, they may be a highly qualified intern, but they'll get lost if the relevant documents aren't there. So, I have to build a search that fetches the relevant document excerpts for the task the LLM is given. Then it can produce an answer or a result without fantasizing in a vacuum or relying on world knowledge, basing it instead on my company's knowledge – which is called grounding. The answers are grounded and citation-based, like how a journalist works with facts: the sources must be there.

devmio: – and with references!


Robert Glaser: Exactly, then I can also say what the answer should look like. Please don’t make any statements without a source. That helps too. I’ll show this best practice in my talk.
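The classic loop Glaser describes – search, paste the results into the prompt, and demand grounded, source-cited answers – can be sketched in a few lines. This is a toy illustration: retrieval here is naive keyword overlap where a real system would use embeddings or a full-text index, and the final LLM call is omitted:

```python
# Toy RAG pipeline. Retrieval is simple keyword overlap; in practice you
# would use embeddings or full-text search. The LLM call itself is omitted.

def score(query, chunk):
    """Count shared words between query and chunk (toy relevance score)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query, chunks, k=2):
    """Return the top-k chunks: the 'information retrieval' part of RAG."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query, chunks):
    """Paste search results into the prompt and demand grounded answers."""
    context = "\n".join(
        f"[{i + 1}] {c}" for i, c in enumerate(retrieve(query, chunks))
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources like [1]; if the sources do not contain the answer, "
        "say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Vacation requests are filed in the HR portal.",
    "The build pipeline runs nightly at 02:00.",
    "Expense reports require a manager's approval.",
]
print(build_prompt("How do I file a vacation request?", docs))
```

The prompt's instruction block is where the grounding happens: the model is told to answer only from the pasted excerpts and to cite them, exactly the "no statements without a source" best practice mentioned above.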

devmio: So, you have this company data linked to the capabilities of modern LLMs. How do I ensure that the company data remains secure and isn't exploited – that it doesn't become publicly accessible via the LLM?

Robert Glaser: Exactly. In principle, you will probably build some kind of system or feature or an internal product. In the rarest of cases, you will probably just connect your corporate customers to ChatGPT—you could do that.

There's a lot going on right now. For example, Anthropic is currently building a Google Drive connection and a Gmail connection, which lets you search your Gmail – that's incredibly powerful. But maybe that's not always what you want as a company. That's why you have to build something, you have to develop something. If it's a chatbot, then it's a chatbot. But every AI feature can potentially be augmented with RAG.

You have to develop software, and the software probably uses an AI model via an API. Either this AI model is in the cloud or in your own data center with a semi-open model, and then you must consider: how do I get the data from the search system or from the multiple search systems into the prompt. This involves a relatively large amount of engineering—classic engineering work. We can expect that developers of such systems will simply build them more often in the future.

devmio: Good. Another point that is becoming increasingly important is AI agents – agents that can do more than just spit out a result after a prompt. In this context, the Model Context Protocol, or MCP, is often mentioned. How does the Model Context Protocol work exactly? Maybe you could explain MCPs and how they work?

Robert Glaser: We've been having a lot of conversations about this at our booth downstairs. It's not without reason that people are so interested in this topic; there's enormous hype surrounding it. It's an open standard developed by Anthropic, the company that also trains the Claude models, and it was released last November. At the time of release, nobody was really interested in it. The hype only started about a month ago – AI time is somehow compressed in my head, it's moving so fast.

Every day, there are hundreds of new MCP servers for a wide variety of services and data sources. In principle, MCP is nothing more than a very simple specification for a protocol that I use to connect a service or data to my AI model. Some call it “USB-C for AI models”. I personally think that’s a pretty good metaphor, even if it’s not 100% technically correct, but the plug always looks the same.

But it doesn't matter what I plug into a device, be it a hard drive, a USB stick, or an alarm siren: the protocol is always the same. To stay in the metaphorical world: MCP gives AI models hands and feet to do things in the real world. Tools – i.e., tool usage – are one concept of this protocol. But there are several others.

Servers can also simply provide prompts, which means I could write an MCP server that acts as a prompt library. Then the LLM can choose for itself which prompts it wants to use.

Something very interesting is tool use – also nothing new; foundation models have been trained on tool usage for a long time. But until now, I always had to tell them: "Look, here's an OpenAPI spec for an API that you can use in this case, and you use it like this – and then there's the next tool with a completely different description."

This tool might be a search function with a completely different API that you use differently. And then you can operate on the file system, and you do that via stdin/stdout. Imagine you need an AI agent or an AI feature and have to describe 80 individual tools!

Every time you start from scratch and with each new tool, you must write a new description. The nice thing about MCP is that it standardizes this and acts as a facade, so to speak. The foundation model or the team that builds an AI feature doesn’t need to know how these services and data work or how to use them, because the MCP server takes care of that and exposes the functionality to the AI model using the same protocol.

It's also nice if, for example, I'm a large company with one team building the search function in the e-commerce shop and another team building the shopping cart: those teams are very familiar with their domains. This makes it very easy for them to write an MCP server for their respective systems. Then I could say, for example: dear shopping cart team, please build an MCP server for the shopping cart, because we want to roll out an AI feature in the shop and need access to these tools.

So, it also fits in nicely with a distributed team architecture like a kind of platform through which I can access a wide variety of tools and LLMs. I could also build a platform for this, but basically, it’s just small services, called servers, that I simply connect to a model. And I don’t always have to describe a different interface, but always the same one.

I don't have to wait for OpenAI to keep training its models so that they can use a given MCP server, because I can simply put the MCP specification into the context and the model can learn from it how to use the server. I'm not locked into any particular model; I can teach them all everything.
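To make the "same plug for every tool" idea concrete, here is a heavily simplified, dependency-free sketch of the pattern MCP standardizes: every server answers the same two requests – one to list its tools, one to call a tool – so the model never needs a bespoke description of each backing API. The method names echo the real protocol, but this is an illustration, not the actual MCP specification or SDK:

```python
import json

# Heavily simplified sketch of the pattern MCP standardizes: every server
# answers the same two requests ("tools/list" and "tools/call"), so the
# model never needs a bespoke description of each backing API.

TOOLS = {
    "search_products": {
        "description": "Full-text search over the shop catalogue.",
        "handler": lambda args: [f"result for {args['query']}"],
    },
    "get_cart": {
        "description": "Return the items in the user's shopping cart.",
        "handler": lambda args: ["rubber duck", "keyboard"],
    },
}

def handle(request_json):
    """Dispatch one JSON request, as a server would over stdin/stdout."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = [
            {"name": name, "description": t["description"]}
            for name, t in TOOLS.items()
        ]
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"].get("arguments", {}))
    else:
        return json.dumps({"error": "unknown method"})
    return json.dumps({"result": result})

# The model (or its host) always speaks the same shape of message:
listing = handle('{"method": "tools/list"}')
answer = handle(json.dumps(
    {"method": "tools/call",
     "params": {"name": "get_cart", "arguments": {}}}
))
print(listing)
print(answer)
```

Swapping in a different backend (shopping cart, search, Blender) only changes the entries in the tool table; the two request shapes the model sees stay identical, which is exactly the "USB-C" effect described above.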


devmio: Yes, what I'm noticing is that MCP has really taken off – among other things because it's no longer just Anthropic: OpenAI and Google, I think, and others are now on board and supporting it, so there's a kind of standardization.

Robert Glaser: It's like how the industry agreed on USB-C – or, back then, USB-A. That was good for everyone, because then you no longer had 1,000 cables. And it's the same now that I'm developing an AI feature: the possibilities are basically endless. People like to think only in terms of data sources or services, but a service can be anything – it can be an API from a system that returns bank or account transactions to me. Here's a nice example that I always use: there's an MCP server for Blender, the 3D rendering software, which I can use to simply ask: render a living room with a nice lamp and a couch with two people sitting on it.

Then you can watch the model use tools via the MCP server and create this 3D scene in Blender. That's the range of possibilities – truly endless.

devmio: Now we're at JAX, a Java conference, and this is really an AI topic – so how does it relate to Java? Java is a language that isn't necessarily known for being widely used in machine learning; Python is ahead there. Is Java well positioned in this area?

Robert Glaser: You just need to call these services – and maybe not even that, because these are all just API calls. In fact, foundation models have introduced a huge commodity factor. In the past, I had to train my own machine learning models; I had to have a team that labeled data, and so on – and even then it was narrow in, narrow out. Now it's broad in, broad out.

I have models that can utilize everything from textual data to molecular structures to audio and video, because everything can be mapped into a sequential stream. And the models are generalists that can always be accessed via API.

Even the APIs are becoming standardized, because all vendors are adopting the OpenAI API spec. That's a nice effect. That's why there's a great ecosystem in the Java world as well as in the Ruby and Python worlds.

Take Spring AI: I could have started a year ago and built AI features into my applications. Spring AI, for example, has a range of functions that let me configure models flexibly. I can say: go to OpenAI, or use a semi-open model in our data center. Everything is already there.

devmio: But the community there is so agile, creative, and innovative that solutions are emerging and already available.

Robert Glaser: Another aspect of MCP: if I want to build a feature or a system with Java around my foundation model that other systems can integrate via MCP, there is a Java SDK for it; everything is there.

devmio: We’re excited to see how things progress. It’s sure to continue at a rapid pace.

Robert Glaser: You must read a hundred blogs every day to keep up with all of the AI innovations. It’s breathtaking to see the progress that’s being made. It’s nice to see that things are happening in tech again.

devmio: Thank you very much for that. For the insights here.

Robert Glaser: You’re very welcome. Thank you for letting me be here.
