The Agentic Digest

Mastra adds AI Gateway tools and sturdier agent memory

5 min read · ai-agents · agent-native · ai-copilot · assistant-chat-bots

For engineers, designers & product people. Stay up to date with our free daily digest.

TLDR: Mastra tightens the agent loop, AWS ships evals and AI A/B patterns, and the tooling around agentic coding keeps getting sharper.

Mastra 1.14.0 adds AI Gateway tools and stronger memory

The @mastra/core 1.14.0 release adds native support for AI Gateway tools, like gateway.tools.perplexitySearch(), directly in the Mastra agentic loop. The runtime now infers providerExecuted, merges streamed provider results back into the originating tool call, and avoids re-running tools locally when the gateway already produced an answer. The release also improves observational memory stability through dated message boundaries, which should reduce cache weirdness and retrieval drift.

For anyone wiring agents to hosted tool ecosystems or search APIs, this makes Mastra feel more like a first-class orchestration layer instead of glue code. The memory tweaks are subtle but important if you rely on long-lived agents. As of 2026-03-19 there are no public benchmarks, so you will want to watch real traces after upgrading.
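The provider-executed behavior described above can be sketched in isolation. This is a hedged illustration of the pattern, not the actual @mastra/core internals: the `ToolCall` and `ProviderToolResult` shapes here are hypothetical stand-ins for whatever the runtime uses.

```typescript
// Sketch: merge streamed provider results into their originating tool
// calls, infer providerExecuted, and skip local re-execution for calls
// the gateway already answered. Types are illustrative assumptions.

interface ToolCall {
  id: string;
  name: string;
  args: Record<string, unknown>;
  providerExecuted?: boolean; // inferred when the gateway ran the tool
  result?: unknown;           // streamed back from the provider
}

interface ProviderToolResult {
  toolCallId: string;
  output: unknown;
}

// Merge provider results into matching tool calls and mark them as
// provider-executed so the agent loop does not re-run them locally.
function mergeProviderResults(
  calls: ToolCall[],
  results: ProviderToolResult[],
): ToolCall[] {
  const byId = new Map(results.map((r) => [r.toolCallId, r.output]));
  return calls.map((call) =>
    byId.has(call.id)
      ? { ...call, providerExecuted: true, result: byId.get(call.id) }
      : call,
  );
}

// Only calls the provider did not execute still need a local run.
function pendingLocalCalls(calls: ToolCall[]): ToolCall[] {
  return calls.filter((call) => !call.providerExecuted);
}

const calls: ToolCall[] = [
  { id: "c1", name: "perplexitySearch", args: { query: "mastra 1.14" } },
  { id: "c2", name: "localLookup", args: { key: "memory" } },
];
const merged = mergeProviderResults(calls, [
  { toolCallId: "c1", output: { answer: "release notes" } },
]);
const pending = pendingLocalCalls(merged);
```

The point of the merge-then-filter split is that the gateway's answer flows back into the same tool-call record the model emitted, so the conversation history stays consistent while duplicate execution is avoided.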

Read more →


AWS details Strands Evals for production AI agents

Amazon Web Services published a new AWS Machine Learning Blog post walking through Strands Evals, a framework for systematically evaluating AI agents before and after production deployment. The guide covers built-in evaluators, multi-turn simulations, and patterns for integrating evals into CI or live monitoring. It focuses on concrete workflows instead of abstract metrics.

If you are responsible for shipping agents into regulated or high-volume surfaces, this is worth a close read. The examples show how to design task-specific evals, run scenario simulations, and feed results back into model or prompt updates. As of 2026-03-19 this is still an AWS-centric stack, so it fits best if you are already on Amazon Bedrock or broader AWS infrastructure.
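The task-specific eval pattern fits naturally into CI. Here is a minimal sketch of the idea; the `runEvals` and `Evaluator` names are illustrative, not the Strands Evals API, and the stub agent stands in for a real Bedrock-backed one.

```typescript
// Sketch: a task-specific evaluator that scores agent outputs against
// expected keywords, runnable as a CI gate. All names are hypothetical.

interface EvalCase {
  input: string;
  expectedKeywords: string[];
}

type Agent = (input: string) => string;

interface EvalResult {
  input: string;
  passed: boolean;
  score: number; // fraction of expected keywords found in the output
}

function runEvals(agent: Agent, cases: EvalCase[]): EvalResult[] {
  return cases.map(({ input, expectedKeywords }) => {
    const output = agent(input).toLowerCase();
    const hits = expectedKeywords.filter((k) =>
      output.includes(k.toLowerCase()),
    );
    const score = hits.length / expectedKeywords.length;
    return { input, passed: score >= 0.5, score };
  });
}

// Deterministic stub agent so the eval suite can run offline in CI.
const stubAgent: Agent = (q) =>
  q.toLowerCase().includes("refund")
    ? "Refunds are processed within 5 days."
    : "I do not have that information.";

const results = runEvals(stubAgent, [
  { input: "How do refunds work?", expectedKeywords: ["refund", "days"] },
  { input: "What is the SLA?", expectedKeywords: ["sla", "uptime"] },
]);
```

A real suite would swap the keyword check for richer evaluators (LLM-as-judge, trajectory checks), but the shape is the same: cases in, scored results out, a threshold gating the deploy.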

Read more →


AWS shows AI-powered A/B testing with Amazon Bedrock

A separate AWS Machine Learning Blog post walks through building an AI-powered A/B testing engine using Amazon Bedrock, Amazon Elastic Container Service, Amazon DynamoDB, and the Model Context Protocol. Instead of static bucketing, the system uses user context to assign variants dynamically during experiments, while still tracking metrics per treatment.

Product teams that already lean on large language models for personalization will recognize the pattern: treat the model as a policy that chooses which experience to show. The writeup is useful because it addresses practical pieces such as state storage, latency, and experiment integrity. As of 2026-03-19 this is a reference architecture rather than a turnkey service, so plan on real engineering work to adapt it to your stack.
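The model-as-policy idea can be sketched without any AWS services. In this hedged illustration the "policy" is a deterministic stand-in for the LLM call the article routes through Bedrock, and the in-memory `metrics` object stands in for the DynamoDB table; all names are assumptions.

```typescript
// Sketch: context-driven variant assignment with per-treatment metric
// tracking. A real system would call an LLM policy and persist metrics.

interface UserContext {
  userId: string;
  isPowerUser: boolean;
}

type Variant = "control" | "treatment";

// Stand-in policy; in the reference architecture an LLM chooses the
// variant from richer context fetched via the Model Context Protocol.
function assignVariant(ctx: UserContext): Variant {
  return ctx.isPowerUser ? "treatment" : "control";
}

// Per-treatment counters, as a DynamoDB table would store them.
const metrics: Record<Variant, { exposures: number; conversions: number }> = {
  control: { exposures: 0, conversions: 0 },
  treatment: { exposures: 0, conversions: 0 },
};

function recordExposure(ctx: UserContext): Variant {
  const v = assignVariant(ctx);
  metrics[v].exposures += 1;
  return v;
}

function recordConversion(v: Variant): void {
  metrics[v].conversions += 1;
}

const v1 = recordExposure({ userId: "u1", isPowerUser: true });
const v2 = recordExposure({ userId: "u2", isPowerUser: false });
recordConversion(v1);
```

Experiment integrity is the hard part the post flags: because assignment is dynamic, you must log the context that drove each decision, or you cannot later tell whether a lift came from the variant or from the policy's selection bias.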

Read more →




© 2026 The Agentic Digest