How the public clouds are innovating on AI

Summary: Companies without the resources to develop in-house machine learning models are turning to the big cloud providers.

[Image: machine-learning-cloud-ai.jpg (Source: InfoWorld)]

The three big cloud providers, specifically Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), want developers and data scientists to develop, test, and deploy machine learning models on their clouds. It’s a lucrative endeavor for them because testing models often requires a burst of infrastructure, and models in production often require high availability.

These services also benefit their customers, but the providers don’t want to compete for your business only on infrastructure, service levels, and pricing. Instead, they focus on versatile on-ramps that make it easier for customers to use their machine learning capabilities. Each public cloud offers multiple data storage options, including serverless databases, data warehouses, data lakes, and NoSQL datastores, making it likely that you will develop models in proximity to where your data resides. They offer popular machine learning frameworks, including TensorFlow and PyTorch, so that their clouds are one-stop shops for data science teams that want flexibility. All three offer ModelOps, MLOps, and a growing number of capabilities to support the full machine learning life cycle.
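To make the workload concrete: at its core, model training is an iterative optimization loop, and it is this loop, expressed in frameworks like TensorFlow or PyTorch and run across cloud accelerators, that the providers are competing to host. The sketch below shows the idea in plain Python for a one-parameter linear model; the data, learning rate, and step count are invented for illustration.

```python
# Minimal sketch of the training loop the clouds scale up:
# fit y = w * x by gradient descent on mean squared error.
# Data, learning rate, and step count are illustrative assumptions.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs; true w is 2

w = 0.0    # model parameter
lr = 0.05  # learning rate

for step in range(200):
    # gradient of mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```

A production model repeats this same update over billions of parameters and vast datasets, which is why training bursts demand so much cloud infrastructure.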

A recent study shows that 78% of enterprise artificial intelligence (AI) and machine learning (ML) projects are deployed using hybrid cloud infrastructure, so the public clouds have plenty of room to grow. This implies that they will need to continue innovating with new and differentiating capabilities.

That innovation comes in several categories to help enterprises run machine learning at scale, with more services and easier-to-use platforms. Here are some specifics.

Battle of the AI chips

Machine learning experimentation continues to scale with larger and more complex models that require training on vast amounts of data. Microsoft and Nvidia recently announced a massive 530 billion-parameter language model, while Google claims it trained a 1.6 trillion-parameter model earlier this year.

Training models of this size and complexity can take a long time and become expensive, so the public clouds are innovating with AI chips and infrastructure options. AWS already has Inferentia and Trainium; it recently announced new EC2 instances powered by Habana’s Gaudi accelerators that offer 40% better price-performance compared to the latest GPU-powered EC2 instances.

Meanwhile, Google announced TPU v4 earlier in 2021. Its fourth-generation tensor processing unit demonstrates an average 2.7x performance improvement over TPU v3. Expect more hardware innovations with AI chips and accelerators from Cerebras, Graphcore, Nvidia, and SambaNova.

Reposted from: InfoWorld
