Main takeaways
🔤 LLMs see tokens, not words: Understanding tokenisation helps you write better prompts and manage costs.
📐 Embeddings capture meaning: Words become vectors in high-dimensional space where similar concepts cluster together.
📋 The PTCF framework: Persona → Task → Context → Format gives structure to every prompt.
🧠 Chain-of-Thought reasoning: “Think step by step” can improve accuracy by 40%+ on complex problems (Wei et al., 2022).
🎭 System prompts shape behaviour: The hidden instructions behind every AI product define personality, capabilities, and constraints.
🤖 Agents take actions: The shift from chat to tool use brings new capabilities and new risks.
⚠️ Jagged intelligence: LLMs fail unpredictably. Always verify critical outputs.
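The embeddings takeaway above can be made concrete with a toy sketch: cosine similarity measures how close two vectors point, so semantically related words score higher. The three-dimensional vectors below are made-up illustrative values (real models use hundreds or thousands of dimensions), not output from any actual embedding model.

```python
import math

# Toy "embeddings" -- invented values for illustration only.
embeddings = {
    "cat": [0.90, 0.80, 0.10],
    "dog": [0.85, 0.75, 0.20],
    "car": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: dot product divided by the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# In this toy space, "cat" sits much closer to "dog" than to "car":
# similar concepts cluster together.
assert cosine(embeddings["cat"], embeddings["dog"]) > cosine(embeddings["cat"], embeddings["car"])
```

Real systems compute these similarities over model-generated vectors, but the geometry (nearby vectors mean related meanings) is exactly this.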
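The PTCF framework is easy to apply mechanically: fill the four sections in order. A minimal sketch, assuming a hypothetical helper name and prompt layout of my own choosing (the framework prescribes the section order, not this exact string format):

```python
def build_ptcf_prompt(persona: str, task: str, context: str, fmt: str) -> str:
    """Assemble a prompt in PTCF order: Persona -> Task -> Context -> Format."""
    return (
        f"Persona: {persona}\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {fmt}"
    )

prompt = build_ptcf_prompt(
    persona="You are a senior Python developer.",
    task="Review the function below for bugs.",
    context="The function parses ISO-8601 dates from user input.",
    fmt="Reply as a bulleted list of issues, most severe first.",
)
print(prompt)
```

Keeping the four sections explicit makes prompts easier to review and reuse; swapping one section (say, the Format) changes the output style without rewriting the rest.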