Throughout the 2000s, Machine Learning dominated the industry, followed by Deep Learning and Neural Networks in the 2010s. In the 2020s, the emphasis has shifted to Generative AI (Gen-AI). This technology is transforming the landscape at a rapid pace, with an emerging Gen-AI ecosystem of startups, IT service companies, hyperscalers, and silicon vendors, each with its own agenda and focus areas. With the infrastructure, resources, and expertise to provide multi-cloud services, hyperscalers are among the best positioned to capitalize on Gen-AI: they can provide the massively scalable compute and storage resources required to run Gen-AI tools, and they are making critical investments to bring Gen-AI, and AI more broadly, to a wider set of users and enterprises.

  • Scaling up compute and parallelization: Hyperscalers are investing heavily in hardware such as GPUs and specialized chips to increase the amount of training data that can be processed while accelerating training time. This enables larger models that can capture more complex decisions and handle higher-dimensional data. Microsoft's in-house AI chip project (codenamed Athena) and AWS's Inferentia and Trainium chips, which power the Inf2 and Trn1 instance families, are examples of hyperscalers investing in dedicated silicon to increase scalability and compute power.
  • Democratization of AI: Hyperscalers are making their AI models and Gen-AI tools readily accessible to a broad range of users, including individuals and small businesses, making advanced AI technology more accessible and affordable in line with the principle of AI democratization. Each hyperscaler has developed a range of models and tools that can be reused readily. Amazon Bedrock, Azure OpenAI Service, and Microsoft Fabric are examples where hyperscalers have consolidated multiple models and tools into one easy-to-use platform. Thanks to these tools, building simple chatbots has become much easier and more intuitive for a larger audience.
  • Optimization and developer empowerment: Hyperscalers are investing in AI-powered enhancements to existing platforms and developer-focused offerings, including tools and resources that help developers build and deploy AI models more efficiently. For example, Amazon SageMaker gives developers an easy way to train, test, and deploy their models without worrying much about backend resources, and Microsoft's Security Copilot is an AI-powered security tool that helps teams quickly investigate and respond to security incidents.
  • Improving developer productivity: ISVs are looking to leverage hyperscalers' solutions and platforms to improve developer productivity and help developers write clean, quality code. GitHub Copilot from Microsoft, CodeWhisperer from AWS, and IBM watsonx Code Assistant are set to be important tools in this space, as they can generate test cases and test documentation in addition to writing code. Multiple user surveys are underway to determine the level of productivity gains and other benefits these tools can deliver.
  • Large Language Models: Hyperscalers are investing in large language models (LLMs), which require robust and highly scalable computing capabilities to process data in real time. Cloud computing is a natural fit for running LLMs, and hyperscalers have announced additional investments and offerings to address this need. Every hyperscaler now offers large language models and foundation models, such as OpenAI's GPT models through Microsoft's Azure OpenAI Service, Google's PaLM 2 (which powers Bard), and AWS' Titan.
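The "scaling up compute and parallelization" point above can be made concrete with a toy sketch of data parallelism, the core pattern that GPU clusters scale out to thousands of accelerators. Everything here is illustrative, not a vendor API; real systems use distributed-training frameworks on dedicated hardware:

```python
# Toy sketch of data-parallel training (illustrative, not a hyperscaler API).
# A batch is split across "workers"; each computes a gradient on its shard,
# and the per-shard gradients are combined into one update.

def gradient(w, shard):
    # Gradient of mean squared error for a 1-D linear model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_workers, lr=0.01):
    shards = [batch[i::num_workers] for i in range(num_workers)]
    grads = [gradient(w, s) for s in shards]  # computed in parallel in practice
    # Size-weighted averaging reproduces the full-batch gradient.
    avg_grad = sum(g * len(s) for g, s in zip(grads, shards)) / len(batch)
    return w - lr * avg_grad

batch = [(x, 3 * x) for x in range(1, 9)]  # data drawn from y = 3x
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, num_workers=4)
print(round(w, 2))  # converges toward 3.0
```

Because each worker only ever sees its own shard, adding workers lets more data flow through each step, which is exactly why more accelerators translate into larger trainable models.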
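To see why managed platforms make simple chatbots easy to assemble, as noted in the democratization point above, consider a minimal sketch of the underlying pattern: prompt templating plus a single call to a hosted model. The `stub_model` function below is a placeholder of our own, not a real hyperscaler API:

```python
# Minimal chatbot skeleton (hedged illustration). stub_model stands in for a
# hosted foundation-model endpoint on a managed platform.

def stub_model(prompt: str) -> str:
    # Placeholder: a real deployment would call a hosted LLM here.
    return "Let me check that for you."

def make_chatbot(system_prompt, model=stub_model):
    history = []

    def chat(user_message: str) -> str:
        history.append(f"User: {user_message}")
        # With the model hosted elsewhere, all that remains for the builder
        # is conversation state and prompt assembly.
        prompt = "\n".join([system_prompt, *history, "Assistant:"])
        reply = model(prompt)
        history.append(f"Assistant: {reply}")
        return reply

    return chat

bot = make_chatbot("You are a helpful support agent.")
print(bot("Where is my order?"))
```

The heavy lifting (training, serving, scaling the model) sits behind the `model` callable, which is the part hyperscaler platforms provide as a service.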
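A rough back-of-envelope calculation shows why LLMs demand the scalable computing described in the last bullet. The constants here (16-bit weights, an 80 GB accelerator) are illustrative round-number assumptions, not vendor specifications:

```python
# Back-of-envelope LLM memory sizing (assumed round numbers, illustrative only).
# Weights alone for an N-parameter model in 16-bit precision need about
# N * 2 bytes of accelerator memory, before KV cache and activations.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

for params in (7e9, 70e9, 175e9):
    gb = weight_memory_gb(params)
    accelerators = -(-gb // 80)  # ceiling division against an 80 GB device
    print(f"{params / 1e9:.0f}B params -> ~{gb:.0f} GB, "
          f">= {accelerators:.0f} x 80 GB accelerators")
```

Even holding the weights of a mid-sized model exceeds a single high-end accelerator, which is why LLM serving gravitates to hyperscaler-scale GPU fleets.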

Hyperscalers play a pivotal role in the Gen-AI ecosystem, and we foresee that more and more cloud-native startups and businesses will adopt hyperscalers' tools to improve their core business. Standard chatbots, data analytics, and developer productivity gains will be just a few clicks away once users start leveraging hyperscalers' Gen-AI services. These services will let businesses spend more time planning and identifying use cases and less time on implementation, leading to faster time-to-market, improved customer satisfaction, and increased revenue.

Through a dedicated hyperscaler task force, Persistent collaborates with all the major hyperscalers to advance and enhance Gen-AI solutions. Learn more about how we help clients navigate the complexity of Large Language Models and Generative AI.