Top 12 AI and machine learning announcements at AWS re:Invent 2021


This week during its re:Invent 2021 conference in Las Vegas, Amazon announced a slew of new AI and machine learning products and updates across its Amazon Web Services (AWS) portfolio. Spanning DevOps, big data, and analytics, the highlights included a call summarization feature for Contact Lens in Amazon Connect and a CodeGuru capability that helps detect secrets in source code.

Amazon’s continued embrace of AI comes as enterprises express a willingness to pilot automation technologies in transitioning their businesses online. Fifty-two percent of companies accelerated their AI adoption plans because of the COVID pandemic, according to a PricewaterhouseCoopers study. Meanwhile, Harris Poll found that 55% of companies accelerated their AI strategy in 2020 and 67% expect to further accelerate their strategy in 2021.

“The initiatives we are announcing … are designed to open up educational opportunities in machine learning to make it more widely accessible to anyone who is interested in the technology,” AWS VP of machine learning Swami Sivasubramanian said in a statement. “Machine learning will be one of the most transformational technologies of this generation. If we are going to unlock the full potential of this technology to tackle some of the world’s most challenging problems, we need the best minds entering the field from all backgrounds and walks of life.”

DevOps

Roughly a year after launching CodeGuru, an AI-powered developer tool that provides recommendations for improving code quality, Amazon this week unveiled the new CodeGuru Reviewer Secrets Detector. The automated tool helps developers detect secrets such as passwords, API keys, SSH keys, and access tokens in source code and configuration files, using AI to identify hard-coded secrets as part of the code review process.

The goal is to help ensure that all new code is free of secrets before being merged and deployed, according to Amazon. In addition to detecting secrets, Secrets Detector can suggest remediation steps to secure them with AWS Secrets Manager, Amazon’s managed service that lets customers store and retrieve secrets.
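
To illustrate the remediation Secrets Detector points toward, here is a minimal sketch of fetching a credential from AWS Secrets Manager at runtime with boto3 rather than hard-coding it; the secret name and region are placeholders.

```python
import boto3

# Placeholder region and secret name; substitute your own.
client = boto3.client("secretsmanager", region_name="us-east-1")

# Hard-coding a value like db_password = "hunter2" is exactly what Secrets Detector flags.
# Instead, fetch the value from Secrets Manager when the application runs:
response = client.get_secret_value(SecretId="prod/db-password")
db_password = response["SecretString"]
```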

Secrets Detector is included as part of CodeGuru Reviewer, a component of CodeGuru, at no additional cost, and it covers secrets for most APIs from providers including AWS, Atlassian, Datadog, Databricks, GitHub, HubSpot, Mailchimp, Salesforce, Shopify, Slack, Stripe, Tableau, Telegram, and Twilio.

Enterprise

Contact Lens, a machine learning-powered analytics capability for Amazon Connect that transcribes calls while simultaneously assessing them, now features call summarization. Enabled by default, Contact Lens provides a transcript of every call made via Connect, Amazon’s cloud contact center service.

In a related development, Amazon has launched an automated chatbot designer in Lex, the company’s service for building conversational voice and text interfaces. The designer uses machine learning to provide an initial chatbot design that developers can then refine to create conversational experiences for customers.

And Textract, Amazon’s machine learning service that automatically extracts text, handwriting, and data from scanned documents, now supports identity documents such as driver’s licenses and passports. Without the need for templates or configuration, users can automatically extract explicit as well as implied information from IDs, such as expiration date, date of birth, name, and address.
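
As a rough illustration of how that extraction looks in practice, the sketch below calls Textract’s new AnalyzeID operation through boto3; the bucket and object names are placeholders, and the response field names should be checked against the Textract documentation.

```python
import boto3

textract = boto3.client("textract", region_name="us-east-1")

# Analyze a scanned ID stored in S3 (placeholder bucket and object names).
response = textract.analyze_id(
    DocumentPages=[{"S3Object": {"Bucket": "my-id-scans", "Name": "passport.jpg"}}]
)

# Each detected field comes back as a typed key/value pair, e.g. DATE_OF_BIRTH.
for document in response["IdentityDocuments"]:
    for field in document["IdentityDocumentFields"]:
        print(field["Type"]["Text"], "->", field["ValueDetection"]["Text"])
```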

SageMaker

SageMaker, Amazon’s cloud machine learning development platform, gained several enhancements this week including a visual, no-code tool called SageMaker Canvas. Canvas allows business analysts to build machine learning models and generate predictions by browsing disparate data sources in the cloud or on-premises, combining datasets, and training models once updated data is available.

Also new is SageMaker Ground Truth Plus, a turnkey service that employs an “expert” workforce to deliver high-quality training datasets while eliminating the need for companies to manage their own labeling applications. Ground Truth Plus complements improvements to SageMaker Studio, including a novel way to configure and provision compute clusters for workload needs with support from DevOps practitioners.

Within SageMaker Studio, SageMaker Inference Recommender — another new feature — automates load testing and optimizes model performance across machine learning instances. The idea is to allow MLOps engineers to run a load test against their model in a simulated environment, reducing the time it takes to get machine learning models from development into production.
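
For a sense of how such a job might be kicked off programmatically, here is a heavily hedged sketch using boto3; the job name, role ARN, and model package ARN are placeholders, and the exact request shape should be confirmed against the SageMaker API reference.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Start a default recommendation job against a model package registered in the
# SageMaker Model Registry (all ARNs and names below are placeholders).
sm.create_inference_recommendations_job(
    JobName="my-recommender-job",
    JobType="Default",  # runs a standard set of load tests across instance types
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
    InputConfig={
        "ModelPackageVersionArn": (
            "arn:aws:sagemaker:us-east-1:123456789012:model-package/my-model/1"
        ),
    },
)

# Later, inspect the job for recommended instance types and configurations.
result = sm.describe_inference_recommendations_job(JobName="my-recommender-job")
print(result["Status"])
```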

Developers can gain free access to SageMaker Studio through the new SageMaker Studio Lab, which doesn’t require an AWS account or billing details. Users can simply sign up with their email address through a web browser and start building and training machine learning models with no financial obligation or long-term commitment.

SageMaker Training Compiler, another new SageMaker capability, aims to accelerate the training of deep learning models by automatically compiling developers’ Python training code and generating GPU kernels specific to their model. The training code uses less memory and compute and therefore trains faster, Amazon says, cutting costs and saving time.
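
In practice, the compiler is switched on through the SageMaker Python SDK. The sketch below assumes a Hugging Face training script and uses placeholder role, script, and framework versions; treat it as an outline rather than a definitive recipe.

```python
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

estimator = HuggingFace(
    entry_point="train.py",  # placeholder training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    transformers_version="4.11",
    pytorch_version="1.9",
    py_version="py38",
    compiler_config=TrainingCompilerConfig(),  # enable SageMaker Training Compiler
)

estimator.fit({"train": "s3://my-bucket/train-data"})  # placeholder S3 path
```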

Last on the SageMaker front is Serverless Inference, a new inference option that enables users to deploy machine learning models for inference without having to configure or manage the underlying infrastructure. With Serverless Inference, SageMaker automatically provisions, scales, and turns off compute capacity based on the volume of inference requests. Customers only pay for the duration of running the inference code and the amount of data processed, not for idle time.
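
Here is a minimal sketch of what that looks like with boto3, assuming a model has already been created in SageMaker; the names, memory size, and concurrency limit below are placeholders to be tuned per workload.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Create an endpoint configuration that uses serverless compute instead of
# provisioned instances (all names are placeholders).
sm.create_endpoint_config(
    EndpointConfigName="my-serverless-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-registered-model",   # an existing SageMaker model
        "ServerlessConfig": {
            "MemorySizeInMB": 2048,   # memory allotted per invocation
            "MaxConcurrency": 5,      # concurrent invocations before throttling
        },
    }],
)

sm.create_endpoint(
    EndpointName="my-serverless-endpoint",
    EndpointConfigName="my-serverless-config",
)
```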

Compute

Amazon also announced Graviton3, the next generation of its custom ARM-based server processor. Soon to be available in AWS C7g instances, the chips are optimized for workloads including high-performance computing, batch processing, media encoding, scientific modeling, ad serving, distributed analytics, and machine learning inference, the company says.

Alongside Graviton3, Amazon debuted Trn1, a new instance type for training deep learning models in the cloud, including models for applications like image recognition, natural language processing, fraud detection, and forecasting. It’s powered by Trainium, an Amazon-designed chip that the company last year claimed would offer the most teraflops of any machine learning instance in the cloud. (One teraflop equals one trillion floating-point operations per second.)
