The Agent Reality Check: What's Actually Working vs. Vaporware
Ground Model — Daily AI Newsletter for Builders
Lead Story: AWS Strands Agents — Solving the Plumbing Problem Nobody Talks About
AWS published a detailed guide on building custom model providers for Strands Agents with LLMs hosted on SageMaker endpoints. The core problem: if you host your own models using SGLang, vLLM, or TorchServe on SageMaker, they spit out OpenAI-compatible responses. Strands agents expect Bedrock Messages API format. Your agent crashes with `TypeError: 'NoneType' object is not subscriptable`. Elegant.
The fix is a custom parser layer — extend SageMakerAIModel, translate the response format, move on with your life. Three layers: Model Deployment (Llama 3.1 on SGLang), Parser (custom LlamaModelProvider), and Agent (Strands SDK consuming the translated output).
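To make the parser layer concrete, here is a minimal sketch of the translation step, assuming the publicly documented shapes of an OpenAI-style chat completion and a Bedrock Messages-style response. The function name and surrounding structure are illustrative, not the actual Strands `SageMakerAIModel` subclass from the AWS guide:

```python
# Illustrative parser layer: translate an OpenAI-compatible chat-completion
# response into a Bedrock Messages-style dict. Field names follow the public
# formats of both APIs; everything else is a sketch, not the AWS code.

def to_bedrock_messages(openai_response: dict) -> dict:
    """Map one OpenAI-style completion to a Bedrock Messages-style shape."""
    choice = openai_response["choices"][0]
    message = choice.get("message") or {}   # guard against the NoneType
    text = message.get("content") or ""     # crash the blog post describes
    usage = openai_response.get("usage", {})
    return {
        "output": {
            "message": {
                "role": message.get("role", "assistant"),
                "content": [{"text": text}],
            }
        },
        "stopReason": {"stop": "end_turn", "length": "max_tokens"}.get(
            choice.get("finish_reason"), "end_turn"
        ),
        "usage": {
            "inputTokens": usage.get("prompt_tokens", 0),
            "outputTokens": usage.get("completion_tokens", 0),
        },
    }
```

The point isn't the exact field mapping — it's that this layer is small, boring, and entirely yours to maintain when either side changes its format.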
Why This Actually Matters for Builders
This is a deeply unsexy blog post. And that's exactly why it matters.
We're in the phase of agent development where the hard problems aren't "can an LLM use tools?" — they're format incompatibilities, response parsing failures, and integration plumbing between systems that were never designed to talk to each other. The fact that AWS had to publish a tutorial on making their own agent framework work with their own hosting platform tells you everything about the current state of production agent infrastructure.
The real signal here: The agent framework wars are becoming cloud provider lock-in plays. AWS wants you running Strands on SageMaker behind Bedrock's API format. This is the same playbook we've tracked with OpenAI embedding into enterprise workflows — except AWS is doing it at the infrastructure layer. If your agents run on Strands with custom SageMaker parsers, switching to another framework or cloud means rewriting your entire integration layer.
The Builder's Takeaway
If you're deploying agents in production today, the framework choice is less about features and more about which cloud provider's ecosystem you're already married to. Strands makes sense if you're deep in AWS. LangGraph if you're cloud-agnostic and need state machines. OpenAI's new AgentKit if you want the tightest model integration but accept vendor lock-in.
The uncomfortable truth: no agent framework is mature enough to bet your company on. Build your agent logic as framework-agnostic as possible. Use thin adapter layers (like AWS is showing here, ironically). The frameworks will consolidate in 12 months. Your business logic shouldn't have to.
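What "framework-agnostic with thin adapters" looks like in practice: your business logic depends only on a small interface, and each framework (Strands, LangGraph, AgentKit) gets its own adapter behind it. Every name below is hypothetical — a sketch of the pattern, not any vendor's API:

```python
# Hypothetical thin-adapter pattern: agent business logic depends on a
# minimal Protocol, so swapping frameworks means rewriting one adapter,
# not the logic. All class and function names here are illustrative.
from typing import Protocol


class ChatBackend(Protocol):
    def complete(self, prompt: str) -> str: ...


def triage_ticket(backend: ChatBackend, ticket: str) -> str:
    """Business logic: knows nothing about which framework runs underneath."""
    return backend.complete(f"Classify this support ticket: {ticket}")


class EchoBackend:
    """Stand-in adapter; a real one would wrap Strands, LangGraph, etc."""

    def complete(self, prompt: str) -> str:
        return f"[classified] {prompt}"
```

When the frameworks consolidate, `triage_ticket` survives untouched; only the adapter gets rewritten.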
Quick Hits
OpenAI Drops AgentKit, New Evals, and RFT for Agents. OpenAI released AgentKit — a new framework for building agents — alongside agent-specific evaluation tools and reinforcement fine-tuning. Details were light, but the signal is clear: OpenAI wants to own the full agent stack from model to framework to eval. The most important announcement this week for anyone choosing a framework. → OpenAI
Notion Rebuilt for Agentic AI with GPT-5. Notion redesigned its platform to support autonomous AI workflows powered by GPT-5. One of the first major SaaS platforms to rebuild — not just bolt on — agentic capabilities. Watch what they learned about where agents fail inside existing product UX. → OpenAI
Amazon QuickSight Embeddable Chat Agents. AWS launched embeddable conversational AI agents for enterprise apps via Quick Suite. This is AWS making BI dashboards conversational. Useful for internal tools on AWS, not a paradigm shift. → AWS
Salesforce Pushes Agentforce to Nonprofits. Salesforce launched 4 proven agent use cases for nonprofits through their Accelerator program. Same academy-style lock-in playbook, now targeting the nonprofit sector. → Salesforce
OpenAI × Foxconn for U.S. AI Manufacturing. Partnership to strengthen domestic AI manufacturing supply chains. OpenAI embedding into physical infrastructure, not just software. → OpenAI
Company Watch: AWS Strands vs. OpenAI AgentKit
Two major agent frameworks got significant updates this week:
| | AWS Strands | OpenAI AgentKit |
|---|---|---|
| Model Lock-in | Bedrock-native, SageMaker custom models with parser work | OpenAI models primarily |
| Cloud Dependency | Deep AWS | Cloud-agnostic in theory, OpenAI-dependent in practice |
| Maturity | Open source, production-focused, MCP support | New release, eval tooling, RFT integration |
| Best For | AWS-native teams with custom models | Teams already on OpenAI wanting tight integration |
Our take: Neither is the safe bet. Both are lock-in plays dressed as developer tools. Keep your agent logic portable.
Tool of the Day: Strands Agents SDK
strandsagents.com — Open source SDK from AWS for building AI agents. Supports Bedrock and SageMaker providers, MCP integration, multi-agent systems. The custom parser pattern is real production flexibility — host any model, any serving framework, wire it into the agent loop.
Why builders should look: If you're on AWS and need agents connecting to predictive ML models (not just LLMs), the SageMaker + MCP + Strands combo is one of the few production-ready hybrid AI agent patterns. The lock-in is real. Eyes open.
Stat of the Day
AWS had to publish a tutorial on making their own agent framework (Strands) work with their own hosting platform (SageMaker) — highlighting that even first-party agent integrations require custom parser glue code in 2025. Source: AWS ML Blog
Ground Model is a daily newsletter for AI builders. Direct. Opinionated. No fluff.