If I were onboarding a new team to agents, I would hand them one runtime, one protocol doc, one graph-based orchestrator, and a short list of repos they can actually read over a weekend. The point is not to collect frameworks; it is to compare how each tool makes state, tools, and failure visible.
The kinds of materials worth saving in this space:
- framework docs that explain how orchestration actually works
- eval sets that resemble your real support or operations queue
- team writeups that include constraints, not just launch screenshots
Read:
- OpenAI Agents SDK for JavaScript: openai.github.io/openai-agents-js/
A clean look at agents, handoffs, guardrails, and tracing in one place.
- OpenAI Agents SDK for Python: openai.github.io/openai-agents-python/
Useful when your team wants the same concepts with more backend-heavy examples.
- Model Context Protocol introduction: modelcontextprotocol.io/introduction
Worth reading so tool access and context plumbing stop feeling hand-wavy.
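To make the tool boundary less hand-wavy before you read the spec: MCP messages are JSON-RPC 2.0, and a tool invocation is a `tools/call` request carrying the tool's name and arguments. A minimal sketch in plain Python (the `search_tickets` tool and its arguments are made up for illustration; real servers advertise their tools via `tools/list`):

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool; an MCP server would list its real tools first.
req = mcp_tool_call(1, "search_tickets", {"query": "refund", "limit": 5})
print(req)
```

Seeing the wire shape makes the separation concrete: the model proposes a name and arguments, and everything else (validation, execution, auth) lives on the server side of this message.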
Documents and downloadable guides:
- OpenAI agent guide: platform.openai.com/docs/guides/agents
A practical guide to agents, tools, handoffs, and traces from the product side.
- Model Context Protocol specification: modelcontextprotocol.io/specification/2025-06-18
Useful when readers need the actual protocol details instead of summaries.
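The handoff idea those guides describe can be shown without any SDK: one agent either answers or names the agent that should own the conversation next. This is a plain-Python sketch of the pattern, not the Agents SDK's actual API, and the triage/billing setup is invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]                              # produces a reply
    route: Callable[[str], Optional[str]] = lambda _: None    # names a handoff target, or None

def run(agents: dict[str, Agent], start: str, message: str, max_hops: int = 3) -> str:
    """Follow handoffs until an agent answers or the hop budget runs out."""
    current = start
    for _ in range(max_hops):
        target = agents[current].route(message)
        if target is None:
            return agents[current].handle(message)
        current = target  # handoff: the next agent owns the conversation
    return agents[current].handle(message)

# Hypothetical setup: triage hands refund questions to a billing agent.
billing = Agent("billing", handle=lambda m: f"billing: {m}")
triage = Agent("triage",
               handle=lambda m: f"triage: {m}",
               route=lambda m: "billing" if "refund" in m else None)
print(run({"triage": triage, "billing": billing}, "triage", "refund request"))
# → billing: refund request
```

The hop budget is the part teams forget: without it, two agents that route to each other loop forever, which is exactly the kind of failure the tracing docs exist to surface.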
Watch:
- OpenAI video archive: youtube.com/@OpenAI/videos
Talks and demos are a fast way to compare patterns before you commit to one runtime.
Build or inspect:
- OpenAI Agents JS source: github.com/openai/openai-agents-js
Readable source for tool calling, handoffs, tracing, and guardrails.
- LangGraph source: github.com/langchain-ai/langgraph
Helpful when you want explicit graph state, checkpoints, and resumable flows.
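What "explicit graph state and checkpoints" buys you is easier to see in miniature. This is a plain-Python sketch of the pattern LangGraph makes first-class, not LangGraph's API: nodes transform a state dict, return the name of the next node, and every step is snapshotted so a crashed run can resume from the last checkpoint. The node names and state shape are made up for illustration:

```python
import json

def draft(state):
    state["text"] = "draft reply"
    return state, "review"

def review(state):
    state["approved"] = True
    return state, None  # terminal node

NODES = {"draft": draft, "review": review}

def run(state, start="draft", checkpoints=None):
    """Walk the graph, checkpointing state after every node."""
    node = start
    while node is not None:
        state, node = NODES[node](state)
        if checkpoints is not None:
            # snapshot after each step; resuming means reloading the last one
            checkpoints.append(json.dumps({"next": node, "state": state}))
    return state

log = []
final = run({"ticket": 42}, checkpoints=log)
print(final)     # → {'ticket': 42, 'text': 'draft reply', 'approved': True}
print(len(log))  # → 2
```

Because each checkpoint records both the state and the next node, a resumable flow is just "load the last snapshot and keep walking", which is the property worth comparing against runtimes that keep state implicit in the conversation history.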
Image references:
- Model Context Protocol examples: modelcontextprotocol.io/examples
Reference implementations and diagrams that make the tool boundary more concrete.