Main takeaways
Prompt engineering is empirical: Test, measure, iterate. What works for one task may fail for another.
The PTCF framework: Persona → Task → Context → Format gives structure to every prompt.
Chain-of-Thought reasoning: “Think step by step” can improve accuracy by 40% or more on complex reasoning benchmarks (Wei et al., 2022).
System prompts shape behaviour: The hidden instructions behind every AI product define personality, capabilities, and constraints.
Agents take actions: The shift from chat to tool use brings new capabilities and new risks.
Jagged intelligence: LLMs fail unpredictably. Always verify critical outputs.
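Several of these takeaways can be combined in one place: a system prompt carries the persona, and the user turn carries task, context, format, and an optional chain-of-thought trigger. The sketch below is illustrative, not a fixed API; `build_prompt` and its field names are assumptions, and the OpenAI-style messages layout is just one common convention.

```python
# Minimal sketch: composing a PTCF (Persona, Task, Context, Format) prompt
# with an optional chain-of-thought trigger. The function name and the
# messages layout are illustrative assumptions, not a standard API.

def build_prompt(persona: str, task: str, context: str, output_format: str,
                 chain_of_thought: bool = True) -> list[dict]:
    """Return a messages list: persona as the system prompt, the rest as the user turn."""
    user_parts = [
        f"Task: {task}",
        f"Context: {context}",
        f"Format: {output_format}",
    ]
    if chain_of_thought:
        # "Think step by step" is the classic CoT trigger (Wei et al., 2022).
        user_parts.append("Think step by step before giving your final answer.")
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": "\n".join(user_parts)},
    ]

messages = build_prompt(
    persona="You are a senior Python code reviewer.",
    task="Review the attached diff for correctness bugs.",
    context="The codebase targets Python 3.11 and uses asyncio throughout.",
    output_format="A bulleted list, one finding per bullet, severity first.",
)
```

Because prompt engineering is empirical, a helper like this mainly makes it cheap to vary one element at a time (e.g. toggling `chain_of_thought`) and measure the effect.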