Fastly AI Accelerator Helps Developers Unleash the Power of Generative AI

Insider Brief

  • Fastly AI Accelerator is now generally available, offering semantic caching to optimize performance for Large Language Model (LLM) generative AI applications, reducing response times by an average of 9x and lowering costs for developers with minimal implementation effort.
  • By caching repeated queries on the Fastly Edge Cloud Platform instead of repeatedly querying AI providers, the solution enhances user experience and efficiency, initially supporting OpenAI ChatGPT and Microsoft Azure AI Foundry.
  • Fastly’s innovation positions it as a leader in the edge cloud space, empowering developers to build faster and more cost-effective AI applications while maintaining performance and scalability.

PRESS RELEASE — Fastly Inc. (NYSE: FSLY), a global leader in edge cloud platforms, has announced the general availability of Fastly AI Accelerator, a semantic caching solution designed to address the critical performance and cost challenges developers face with Large Language Model (LLM) generative AI applications. Fastly AI Accelerator delivers an average of 9x faster response times.1 Initially released in beta with support for OpenAI ChatGPT, it is now also available with Microsoft Azure AI Foundry.

“AI is helping developers create so many new experiences, but too often at the expense of performance for end-users. Too often, today’s AI platforms make users wait,” said Kip Compton, Chief Product Officer at Fastly. “With Fastly AI Accelerator we’re already averaging 9x faster response times and we’re just getting started.1 We want everyone to join us in the quest to make AI faster and more efficient.”

Fastly AI Accelerator can be a game-changer for developers looking to optimize their LLM generative AI applications. To access its intelligent semantic caching, developers simply point their application at a new API endpoint, which typically requires changing only a single line of code. With that in place, instead of going back to the AI provider for each individual call, Fastly AI Accelerator uses the Fastly Edge Cloud Platform to serve a cached response for repeated queries. This approach helps enhance performance, lower costs, and ultimately deliver a better experience for developers and their users.

“Fastly AI Accelerator is a significant step towards addressing the performance bottleneck accompanying the generative AI boom,” said Dave McCarthy, Research Vice President, Cloud and Edge Services at IDC. “This move solidifies Fastly’s position as a key player in the fast-evolving edge cloud landscape. The unique approach of using semantic caching to reduce API calls and costs unlocks the true potential of LLM generative AI apps without compromising on speed or efficiency, allowing Fastly to enhance the user experience and empower developers.”

Existing Fastly customers can add AI Accelerator directly from their Fastly accounts. To learn more and get started, visit fastly.com/ai.

About Fastly, Inc.

Fastly’s powerful and programmable edge cloud platform helps the world’s top brands deliver online experiences that are fast, safe, and engaging through edge compute, delivery, security, and observability offerings that improve site performance, enhance security, and empower innovation at global scale. Compared to other providers, Fastly’s powerful, high-performance, and modern platform architecture empowers developers to deliver secure websites and apps with rapid time-to-market and demonstrated, industry-leading cost savings. Organizations around the world trust Fastly to help them upgrade the internet experience, including Reddit, Neiman Marcus, Universal Music Group, and SeatGeek. Learn more about Fastly at https://www.fastly.com, and follow us @fastly.

Contacts

Media Contact
Spring Harris
[email protected]

Investor Contact
Vernon Essi, Jr.
[email protected]

SOURCE

AI Insider
