As enterprises continue to mull over potential use cases for Generative AI (GenAI), one fact is clear: Nothing is possible without data. AWS Vice President of Data and AI, Swami Sivasubramanian, underscored that point during his Day 2 keynote at AWS re:Invent 2023, delving into new data-focused AWS offerings, capabilities, and features aimed at helping companies build and deploy GenAI use cases with real-life business impact.

Following AWS CEO Adam Selipsky’s GenAI-focused keynote, Swami detailed for assembled attendees at The Venetian in Las Vegas how data and GenAI together drive enhanced business processes, innovations, and full-scale enterprise transformations with incredible speed. He observed that we have entered an era where “a powerful relationship between humanity and technology is emerging right before our eyes,” a symbiotic union where each party brings tremendous value.

The massive explosion in enterprise data – no matter the type, where it’s stored and secured, or how it’s being harnessed (if at all) – has organizations of all shapes and sizes racing to figure out how to put their data to work through GenAI foundation models. Swami pointed out that with its long history in machine learning (ML), AWS runs more ML workloads than any other provider and supplies the infrastructure, storage, and variety of models needed to power foundation models.

To continue customer momentum for GenAI use cases, AWS wants to ensure that enterprises can access whatever data models are required, whether they leverage one or a combination of different models. Or, as Swami stated more plainly: “No one model will rule them all.”

Swami announced a long slate of new AWS offerings in three primary areas related to using models to build GenAI apps: providing multiple foundation models to fit customers’ needs, creating a secure environment in which to run those models, and releasing tools to build and deploy GenAI use cases. Announcements included new features for two of AWS’s core GenAI products: Amazon SageMaker, used to build and deploy foundation models such as large language models, and Amazon Bedrock, used to develop and scale GenAI apps for text and image generation and other tasks. Here are a few details:

  • Data models: In Amazon Bedrock, AWS already offers models from Anthropic, AI21 Labs, Cohere, Meta, Stability AI, and its own Amazon Titan family. Swami announced a host of new model versions from those vendors, most notably Anthropic Claude 2.1, the latest version of its language model, which Anthropic stated generates 15% fewer hallucinations and a 2x reduction in false statements compared to previous models.

    Overall, these additional models give customers greater choice for running GenAI apps on the AWS cloud and, more importantly, the confidence that they won’t be locked into any single model; they can even switch between and combine models as needed. Swami also noted that approximately 10,000 customers use Bedrock, including blockbuster brands such as SAP, Georgia Pacific, and United Airlines.

  • SageMaker features: Swami unveiled new SageMaker capabilities to accelerate the building, training, and deployment of models that power GenAI: HyperPod, which can cut model training time by up to 40%; Inference, which lets multiple models share the same virtual server to save on costs and latency; and Clarify, which helps customers evaluate, compare, and select the best models for specific GenAI use cases. With these enhancements, SageMaker can turbocharge model creation so AWS customers can deploy trained, tested, and secure GenAI apps and realize value more quickly.
  • Enhancements for Amazon Titan: Multimodal embeddings are now generally available in the Amazon Titan model family on Amazon Bedrock. Vector embeddings are numerical representations of content such as natural language text and imagery, and they are critical for machine learning and model training. Swami noted that Titan already provides text embeddings and will now support multimodal embeddings, so companies can build apps that deliver enhanced search, better recommendations, and deeper personalization in their GenAI use cases.
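The search and recommendation use cases above rest on a simple mechanism: content whose embedding vectors lie close together is semantically similar, so a query’s embedding can be compared against a catalog of embeddings to rank results. Here is a minimal sketch of that ranking step in Python; the short toy vectors and catalog entries are made up for illustration (a real Titan embedding is a much higher-dimensional vector returned by the Bedrock service):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional vectors standing in for real embeddings.
catalog = {
    "red running shoes": [0.90, 0.10, 0.20, 0.00],
    "crimson sneakers":  [0.85, 0.15, 0.25, 0.05],
    "garden hose":       [0.00, 0.90, 0.10, 0.40],
}

# Embedding of the shopper's query (also a stand-in value).
query = [0.88, 0.12, 0.22, 0.02]

# Rank catalog items by how close their embeddings are to the query's.
ranked = sorted(catalog, key=lambda k: cosine_similarity(query, catalog[k]),
                reverse=True)
print(ranked[0])  # the closest semantic match
```

With multimodal embeddings, the same comparison works when the query is text and the catalog entries were embedded from images, which is what enables cross-modal search.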

AWS’ legacy in data and its enthusiasm to create positive outcomes mirrors Persistent’s. With 30-plus years of experience, we are the ultimate partner for companies seeking to build, test, and deploy data models on AWS cloud for GenAI apps while maintaining all necessary security, privacy, and governance requirements.

Contact us today to learn how you can capitalize on your data for GenAI.

Author’s Profile

John Madden

Global Thought Leadership Marketing Lead

john_madden@persistent.com

John has more than 20 years of experience in the IT industry as a writer, editor, content creator, social media manager, market analyst, and thought leadership director. His expertise includes enterprise IT services, cloud, and AI technologies.