
New AI Chips, Managed Services Among Flood from AWS at re:Invent 2020 


At its re:Invent 2020 event, Amazon AWS announced new AI chips and a wide range of new AI cloud services spanning development, operations and data management. (Credit: Amazon AWS) 

By AI Trends Staff 

Amazon Web Services CEO Andy Jassy delivered a three-hour keynote on Dec. 1 at the virtual AWS re:Invent 2020 event. Jassy, who has been with Amazon for over 23 years and is now seen as the most likely successor to Amazon founder Jeff Bezos, made a long list of announcements.

Andrew Jassy, CEO, Amazon Web Services

“There is no way to unpack Andy’s entire keynote as there were so many announcements across compute, storage, networking, AI/ML, developer tools, software and more,” according to an account from Futurum Research written by Daniel Newman, book author and principal analyst with the firm.

Highlights for enterprise applications included new R5 instances for Amazon Elastic Compute Cloud (EC2), aimed at memory-intensive applications such as high-performance databases, distributed web-scale in-memory caches, in-memory databases, and real-time big data analytics.

“AWS continues to develop a comprehensive portfolio of Elastic Compute Cloud (EC2) instances to address the varying needs of customers,” Newman stated. “The diverse platforms give the company a wide breadth, and with the continued development of their Arm variants (Graviton2), the company continues to be more of a juggernaut in silicon.” The Graviton2 processor is a 64-core Arm-based server chip, first announced at re:Invent 2019.

Amazon also announced multiple options for deploying containers on-premises with AWS. Amazon ECS Anywhere enables customers to run Amazon Elastic Container Service (ECS) in their own data centers, and Amazon EKS Anywhere does the same for Amazon Elastic Kubernetes Service (EKS).

Daniel Newman, author and principal analyst at Futurum Research

“Amazon ECS has gained popularity due to the fact that customers see it as simple to use and deploy,” Newman stated. “However, a setback for AWS has been that often user requirements may require deployments beyond AWS-owned infrastructure. Up to this point, AWS hasn’t had an answer for this. Fans of ECS have sought access to a single experience that allows them to achieve the flexibility that they need.”

Amazon also announced an easier path to migrate from SQL Server databases to Amazon Aurora, a relational database service developed by AWS and first offered in 2014. The announcements included Babelfish for Aurora PostgreSQL, designed to simplify migrations.  

“The reason this announcement is so powerful is in its simplicity and its implications,” Newman stated. “Babelfish enables PostgreSQL to understand both the command and protocol database requests from applications designed for Microsoft SQL Server without material impact to libraries, database schema, or SQL statements.” The developers focused on “correctness,” so that applications designed to use SQL Server behave the same way on PostgreSQL, increasing the competitiveness of AWS against other SQL databases. Amazon announced that an open source version of Babelfish is expected to be available in 2021.
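As a rough illustration of what that compatibility means in practice, the sketch below connects to a hypothetical Babelfish-enabled Aurora PostgreSQL endpoint with an ordinary SQL Server driver and runs unmodified T-SQL; the server address, database, credentials and table are placeholders, not details from the announcement.

```python
import pyodbc  # standard SQL Server ODBC driver, no PostgreSQL client needed

# Hypothetical Babelfish endpoint: the Aurora PostgreSQL cluster accepts
# SQL Server's TDS wire protocol on port 1433, so existing connection
# strings and T-SQL statements are reused unchanged.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=orders;UID=app_user;PWD=example-password"
)

cursor = conn.cursor()
# TOP and GETDATE() are T-SQL constructs that Babelfish translates for PostgreSQL.
cursor.execute("SELECT TOP 5 order_id, GETDATE() AS queried_at FROM dbo.orders")
for row in cursor.fetchall():
    print(row)
```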

In other chip news, Amazon announced that new Habana Gaudi-based Amazon EC2 instances for machine learning will be offered in the first half of 2021, through a partnership between AWS and Intel. The Gaudi AI accelerators promise 40% better price-performance than the best performing GPU instances today, according to AWS. 

“It will work with all the main machine learning frameworks, PyTorch as well as TensorFlow,” and will help the company keep pushing the price-performance envelope for machine learning training, stated Jassy, according to an account in EnterpriseAI. The Gaudi accelerators are designed for training deep learning models for workloads that include natural language processing, object detection, classification, recommendation, and personalization.
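Software details for the Gaudi instances were not spelled out at the event; as a minimal sketch, assuming the SynapseAI PyTorch bridge that Habana later documented (the habana_frameworks package and the “hpu” device name are assumptions beyond the article), porting an existing training step looks roughly like this:

```python
import torch
import torch.nn as nn
# Assumption: Habana's SynapseAI PyTorch bridge is installed on the instance.
import habana_frameworks.torch.core as htcore

device = torch.device("hpu")  # Gaudi accelerator instead of "cuda"

# Tiny stand-in model and batch, just to show where the device changes land.
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 128).to(device)
labels = torch.randint(0, 10, (32,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
htcore.mark_step()   # flush the lazily built graph to the accelerator
optimizer.step()
htcore.mark_step()
```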

Intel acquired Habana Labs in 2019. Gaudi-based EC2 instances are designed to deliver increased performance and greater cost efficiencies for customers, while allowing developers to build new or port existing training models from graphics processing units to Gaudi accelerators, according to Intel. 

AWS Trainium Chip Announced for Machine Learning 

Amazon announced a new chip, the AWS Trainium chip, for machine learning. The chip is custom-designed by AWS to deliver the most cost-effective training in the cloud, according to Jassy.  

Trainium chips are optimized for deep learning training workloads for applications including image classification, semantic search, translation, voice recognition, natural language processing, and recommendation engines. Trainium should be more cost-effective than the Habana chip, Jassy stated, and will support all the major frameworks including TensorFlow, PyTorch, and Apache MXNet.
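Trainium is programmed through the AWS Neuron SDK; as a hedged sketch, assuming the PyTorch/XLA device path that Neuron’s PyTorch support later built on (the torch_xla usage below is an assumption about the toolchain, not a detail from the keynote), an existing training step moves over in much the same way as in the Gaudi sketch above:

```python
import torch
import torch.nn as nn
# Assumption: torch_xla, the XLA bridge used by the AWS Neuron SDK, is installed.
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # resolves to the attached accelerator

model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 128).to(device)
labels = torch.randint(0, 10, (32,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
xm.optimizer_step(optimizer, barrier=True)  # steps the optimizer and syncs the XLA graph
```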

Arun Chandrasekaran, analyst, Gartner

“AWS is expanding its custom chip capabilities for the end-to-end ML lifecycle,” Arun Chandrasekaran, a Gartner analyst covering cloud-native platforms, big data, and AI, told EnterpriseAI. “Data and analytics is one of the fastest growing use cases in cloud,” and it is a compute-intensive workload.

Amazon also announced SageMaker Clarify to help reduce bias in machine learning models, according to an account in TechCrunch. “It allows you to have insight into your data and models throughout your machine learning lifecycle,” stated Bratin Saha, Amazon VP and general manager of machine learning.  

The tool aims to analyze the data for bias before data preparation begins, so problems such as an imbalance in the number of examples across classes can be identified before the model-building stage. “We have a set of several metrics that you can use for the statistical analysis, so you get real insight into your data set balance,” Saha stated.

After the model is built, the developer can run SageMaker Clarify again to check for bias that might have crept into the model during training. “So you start off by doing statistical bias analysis on your data, and then post training you can again do analysis on the model,” he stated.
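As a hedged sketch of how that two-stage check can be wired up with the SageMaker Python SDK, the snippet below runs the pre-training analysis; the IAM role, S3 paths, column names and metric list are placeholders for illustration.

```python
from sagemaker import Session
from sagemaker import clarify

session = Session()

# Processor that runs the Clarify analysis job; role and instance type are placeholders.
clarify_processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/ClarifyRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the training data lives and which column is the label (hypothetical values).
data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/loans/train.csv",
    s3_output_path="s3://example-bucket/loans/clarify-report/",
    label="approved",
    headers=["approved", "age_group", "income", "tenure"],
    dataset_type="text/csv",
)

# The sensitive attribute ("facet") and the favorable label value to test against.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="age_group",
)

# Stage 1: statistical bias analysis on the raw data set, before any model exists.
clarify_processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],  # class imbalance, difference in proportions of labels
)

# Stage 2, after training, would call run_post_training_bias with a ModelConfig
# pointing at the trained model, mirroring the pre/post workflow described above.
```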

Amazon also announced DevOps Guru, a managed operations service that aims to improve application availability by detecting operational issues and recommending fixes in an automated manner, according to an account in AnalyticsIndiaMag. The service applies machine learning to collect and analyze application metrics.  

Cited benefits of the new service included quick alerts that help developers and operators understand the scope of a problem, automated recommendations for how to fix it, and no need for specialized hardware.
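As a small, hedged illustration of consuming those findings programmatically, assuming the boto3 “devops-guru” client (response fields abbreviated):

```python
import boto3

# DevOps Guru exposes its findings ("insights") through a regular AWS API.
guru = boto3.client("devops-guru", region_name="us-east-1")

# High-level account health: how many reactive and proactive insights are open.
health = guru.describe_account_health()
print("Open reactive insights:", health["OpenReactiveInsights"])
print("Open proactive insights:", health["OpenProactiveInsights"])

# List ongoing reactive insights so an operator can triage the most severe ones.
insights = guru.list_insights(StatusFilter={"Ongoing": {"Type": "REACTIVE"}})
for insight in insights.get("ReactiveInsights", []):
    print(insight["Severity"], insight["Name"])
```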

Amazon also announced Lookout for Equipment, an API-based machine learning service that aims to detect abnormal equipment behavior. The service is said to automatically test possible model configurations and build an optimal machine learning model to learn the normal behavior of the equipment.

Customers can bring in historical time-series data and past maintenance event records generated by their industrial equipment; each model can draw on up to 300 data tags from components such as sensors and actuators.
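The service is driven through its API; as a rough, hypothetical sketch of preparing the kind of input described above (a uniform time series with one column per sensor tag), with file, tag and interval choices invented for illustration:

```python
import pandas as pd

# Hypothetical historian export: one row per sensor reading.
#   timestamp, tag, value   e.g. "2020-06-01 00:00:00", "bearing_temp_c", 71.3
raw = pd.read_csv("pump_sensors.csv", parse_dates=["timestamp"])

# Pivot to one column per tag (a Lookout for Equipment model accepts up to
# 300 such tags) and resample to a uniform interval, tolerating short gaps.
series = (
    raw.pivot_table(index="timestamp", columns="tag", values="value")
       .resample("5min")
       .mean()
       .interpolate(limit=3)
)

series.to_csv("pump_sensors_wide.csv")  # ready to upload to S3 for ingestion
```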

Read the source articles from Futurum Research, EnterpriseAI, TechCrunch and AnalyticsIndiaMag.

Source: https://www.aitrends.com/ai-in-business/new-ai-chips-managed-services-among-flood-from-aws-at-reinvent-2020/
