
Deepgram, a leader in advanced Voice AI technology, has announced a major integration that simplifies building and scaling voice-powered applications. The company is now offering its industry-leading, real-time speech-to-text (STT), text-to-speech (TTS), and Voice Agent API natively as Amazon SageMaker AI endpoints.


This integration eliminates the need for complex custom pipelines, allowing development teams to deploy and manage state-of-the-art Voice AI directly within their established Amazon Web Services (AWS) workflows.

Seamless Deployment within Trusted AWS Environments

The core benefit of this native integration is streamlined deployment. Developers can now leverage Deepgram’s high-fidelity, low-latency voice models as real-time SageMaker endpoints. This means:

  • No Custom Orchestration: Teams can bypass the complexity of building and maintaining custom integration pipelines.
  • Simplified Scaling: Applications can scale seamlessly using familiar AWS and SageMaker tools.
  • Enhanced Security & Compliance: All voice data processing remains within the customer’s own AWS environment, preserving existing security postures, data governance policies, and compliance frameworks.
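Once such an endpoint is deployed, calling it uses the standard SageMaker runtime API rather than a custom pipeline. The sketch below illustrates this with boto3's real `invoke_endpoint` call; the endpoint name, content type, and response shape are illustrative assumptions, not documented Deepgram values.

```python
# Sketch: invoking a hypothetical Deepgram STT endpoint deployed on SageMaker.
# The endpoint name, content type, and response fields are assumptions made
# for illustration; substitute the values from your own deployment.
import json

ENDPOINT_NAME = "deepgram-stt-endpoint"  # assumed name, chosen at deployment time


def build_invoke_args(audio_bytes: bytes) -> dict:
    """Assemble keyword arguments for sagemaker-runtime's invoke_endpoint."""
    return {
        "EndpointName": ENDPOINT_NAME,
        "ContentType": "audio/wav",  # assumed; depends on the model container
        "Body": audio_bytes,
    }


def transcribe(audio_bytes: bytes) -> str:
    """Call the endpoint (requires AWS credentials and a live endpoint)."""
    import boto3  # imported lazily so the sketch loads without AWS configured

    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(**build_invoke_args(audio_bytes))
    result = json.loads(response["Body"].read())
    # The response shape is an assumption; adjust to the actual model output.
    return result.get("transcript", "")
```

Because the call goes through the familiar `sagemaker-runtime` client, existing IAM policies, VPC configuration, and logging apply unchanged, which is what keeps voice data inside the customer's AWS environment.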

Unlocking New Voice-Powered Applications

By providing direct access to its APIs via SageMaker, Deepgram enables teams to build a new generation of applications faster. Use cases span:

  • Real-time call analytics and agent assist
  • Interactive voice assistants and avatars
  • Live captioning and audio intelligence
  • Accessible media and voice-driven interfaces

This move significantly lowers the barrier to entry for enterprises seeking to implement production-grade Voice AI, combining Deepgram’s cutting-edge speech recognition and synthesis with the robust, scalable infrastructure of Amazon SageMaker.


“Deepgram’s integration with Amazon SageMaker represents an important step forward for real-time voice AI. By bringing our streaming speech models directly into SageMaker, enterprises can deploy speech-to-text, text-to-speech, and voice agent capabilities with sub-second latency, all within their AWS environment. This collaboration extends SageMaker’s functionality and gives developers a powerful way to build and scale voice-driven applications securely and efficiently,” said Scott Stephenson, CEO and Co-Founder, Deepgram.

Native streaming via Amazon SageMaker endpoints means no workarounds or hoops to jump through: just clean, real-time inference through the SageMaker API. The integration enables sub-second latency and enterprise-grade reliability for high-scale use cases such as contact centers, trading floors, and live analytics.
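For real-time use cases, the SageMaker runtime also exposes `invoke_endpoint_with_response_stream`, which returns results incrementally as an event stream. The sketch below shows one way this could be consumed; the endpoint name and the assumption that payload chunks are UTF-8 transcript text are illustrative, not documented Deepgram behavior.

```python
# Sketch: consuming a streaming response from a SageMaker endpoint.
# invoke_endpoint_with_response_stream is the real sagemaker-runtime API;
# the endpoint name and chunk format here are assumptions for illustration.


def collect_transcript(event_stream) -> str:
    """Concatenate PayloadPart byte chunks from a SageMaker response stream."""
    parts = []
    for event in event_stream:
        chunk = event.get("PayloadPart", {}).get("Bytes", b"")
        parts.append(chunk.decode("utf-8"))
    return "".join(parts)


def stream_transcription(audio_bytes: bytes, endpoint_name: str) -> str:
    """Stream inference results as they arrive (needs AWS credentials)."""
    import boto3  # lazy import; requires a deployed endpoint to actually run

    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint_with_response_stream(
        EndpointName=endpoint_name,
        ContentType="audio/wav",  # assumed; depends on the model container
        Body=audio_bytes,
    )
    return collect_transcript(response["Body"])
```

Processing chunks as they arrive, rather than waiting for a complete response, is what makes sub-second end-to-end latency practical for live applications.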

“Enterprise developers need to build voice AI applications at scale without compromising on speed, accuracy, or security,” said Stephenson.

The integration is also backed by a strong relationship with AWS. Deepgram is an AWS Generative AI Competency Partner and has signed a multi-year Strategic Collaboration Agreement (SCA) with AWS to accelerate enterprise adoption.

“Deepgram’s new Amazon SageMaker AI integration makes it simple for customers to bring real-time voice capabilities into their AWS workflows,” said Ankur Mehrotra, general manager for Amazon SageMaker at AWS.

The integration is available to customers building on AWS, with live demonstrations planned at AWS re:Invent in Las Vegas, December 1–5, 2025, in Deepgram Booth #690.
