Learn RAG and LangChain — without the demo-day shortcuts.
A course built around the RAG system you actually want to ship: your documents, your retrieval problem, your acceptance bar. Strive shapes the curriculum, streams lessons live, and the recall queue keeps the chunking, embedding, and evaluation choices straight when you revisit them six weeks later.
Hybrid search, rerankers, and the recall you’re losing (4 lessons)
LangChain and LlamaIndex — what they’re for and when to skip them (3 lessons)
Evals — retrieval metrics, generation metrics, and what users notice (4 lessons)
Demonstration outline — your course is generated around your answers, so module count, depth, and difficulty will differ from this example. Across the 7 modules of the demonstration outline there are 26 lessons.
Frequently asked
Do I need to use LangChain?
No. The course covers LangChain because it is the most common starting point, but every module is honest about when a hand-rolled stack or LlamaIndex is the better call. You leave knowing why you chose what you chose.
Which vector database does it use?
You pick during the wizard — pgvector, Pinecone, Weaviate, Qdrant, or Chroma. The worked pipeline uses your choice; the comparison module covers the others so the decision is informed, not accidental.
Will this teach me to fine-tune embedding models?
Briefly — fine-tuning gets a lesson on when it earns the work and how to set up a run. Deep training-loop content is out of scope; the course is aimed at engineers shipping retrieval, not researchers building new encoders.
Ready to learn RAG & LangChain?
Tell us where you are today. AI builds your course in minutes — and the daily recall queue makes sure you keep what you learn.