
In this week you'll learn all about using LLMs from programming interfaces. We'll see how to invoke LLMs using APIs and different SDKs (like OpenAIs). We'll learn how to customize the parameters of the LLM call to obtain the desired results: temperature, to max tokens and reasoning levels, among others.
We'll also learn how to unpack the responses of LLMs, including cost analysis and content choices.
In order to prepare the basics for the next weeks, we'll cover in detail Prompt Engineering with real life examples, including the ReAct pattern to create our own AI Agents.
We'll close the week with some more advanced topics like Tool Calling and Structured Output (including PyDantic models and JSON Schema), both from supported LLM providers, as also by "hacking" it with Tool calling.
Some sample projects of this week include:

We'll explore more advanced topics like Multi-modal LLMs (including images, video and audio/voice).
Finally, we'll apply the Prompt Engineering concepts from the previous week (Chain of Thought and ReAct) to start building our own agents.
Some sample projects of this week include:
We'll create some basic agents using the ReAct pattern, to finally create advanced workflows including: middlewares, Memory, Human in the Loop and much more.
This week will challenge you to apply all the previous concepts in projects that feel real, and require a lot of thinking to get right.
Some sample projects for this week include:

You've probably heard about RAG and Vector Stores. This week will go BEYOND that to also include more advanced topics, like:
Sample projects for this week include:

We have learned how to prompt LLMs from our app, how to create advanced agents and incorporate Semantic Search. But, how do we know if the app works as expected?
Welcome to the world of LLM Evals. Evaluating if your application is working "Correctly" is a HUGE topic, especially given the indeterministic nature of LLMs.
This week, we'll introduce a Scientific Approach to evaluating if our LLMs app work correctly or not, and obviously keeping into consideration costs and latency.

Prompt Injection is just the tip of the iceberg when it comes to vulnerabilities in LLM-based applications.
This week, we'll explore everything related to Cyber Security in our LLM applications, including model poisoning, model and prompt exfiltration, jailbreaking and other types of attack.
We'll also relate it to our LLM Evals and our Agents, to make sure we have predictable secure applications.

This week we'll learn how to MCP Servers work, we'll connect them to our agents, and we'll finish by creating our OWN MCP servers, looking at best practices like User Authentication and security protocols.

This final week will require you to finish your Capstone Project. You'll work closely with DataWars staff to decide what to work on and you'll have the entire week to put your idea into practice.
After you're done with your capstone project, you can submit ti for official review, and we'll grant you our AI Professional Certification.












