Amazon is leveraging machine learning to fight fraud, audit code, transcribe calls, and index enterprise data. Today during a keynote at its Amazon Web Services (AWS) re:Invent 2019 conference in Las Vegas, the tech giant debuted Amazon Fraud Detector, a fully managed service that detects anomalies in transactions, and CodeGuru, which automates code review while identifying the most “expensive” lines of code. And those are just the tip of the iceberg.

With Fraud Detector (in preview), AWS customers provide email addresses, IP addresses, and other historical transaction and account registration data, along with labels indicating which transactions were fraudulent and which were legitimate. Amazon takes that information and uses algorithms, along with detectors honed on Amazon's own consumer business, to build bespoke models that recognize signals like potentially malicious email domains and unusual IP address patterns. Once the model is created, customers can create, view, and update rules that trigger actions based on model predictions, without relying on other teams.
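To make the workflow concrete, here is a minimal sketch of the shape such labeled historical data might take; every field name and value is invented for illustration and is not the actual Fraud Detector schema.

```python
# Hypothetical labeled registration events a customer might upload:
# transaction features plus a fraud/legit label for model training.
events = [
    {"email": "a@example.com", "ip": "198.51.100.4", "label": "fraud"},
    {"email": "b@example.com", "ip": "203.0.113.9", "label": "legit"},
]

# Supervised training needs examples of both classes.
labels = {e["label"] for e in events}
assert labels == {"fraud", "legit"}
```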

Fraud Detector lets admins selectively introduce additional steps or checks based on risk. For example, they can set up a customer account registration workflow to require additional email and phone verification steps only for registrations that exhibit high-risk characteristics. Furthermore, Fraud Detector can identify accounts that are more likely to abuse ‘try before you buy’ programs and flag suspicious online payment transactions before orders are processed and fulfilled.
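The step-up logic described above can be sketched in a few lines. This is a hypothetical illustration of risk-based verification, not the actual Fraud Detector rules engine; the threshold and step names are assumptions.

```python
# Hypothetical sketch of risk-based step-up verification: extra checks
# are required only when the model's risk score crosses a threshold.

def verification_steps(risk_score: float, threshold: float = 0.7) -> list:
    """Return the extra checks required for a registration."""
    steps = []
    if risk_score >= threshold:  # high-risk: add friction
        steps += ["email_verification", "phone_verification"]
    return steps

print(verification_steps(0.9))  # high-risk: both checks required
print(verification_steps(0.2))  # low-risk: no extra steps
```

Low-risk registrations sail through unchanged, which is the point: friction is applied selectively rather than to every user.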

It’s all exposed through a private API endpoint that can be incorporated into client-side services and apps. Amazon claims that Fraud Detector’s machine learning models identify up to 80% more potential bad actors than traditional methods, on average.
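A client-side call might look roughly like the following. The payload shape mirrors Fraud Detector's GetEventPrediction request, but the detector name and variable keys are illustrative assumptions, and the actual boto3 call is left commented so the snippet runs without AWS credentials.

```python
# Sketch of building a client-side fraud-prediction request
# (field names are illustrative, not a verified schema).

def build_prediction_request(detector_id, event_id, email, ip):
    return {
        "detectorId": detector_id,
        "eventId": event_id,
        "eventVariables": {"email_address": email, "ip_address": ip},
    }

req = build_prediction_request("registrations", "evt-001",
                               "user@example.com", "203.0.113.7")

# With credentials configured, the request would be sent along the
# lines of (additional required fields omitted here):
# import boto3
# boto3.client("frauddetector").get_event_prediction(**req)
```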

As for CodeGuru, which comes in the form of a component that integrates with existing integrated development environments (IDEs), it taps AI models trained on over 10,000 of the most popular open source projects to evaluate code as it’s being written. Where there’s an issue, it proffers a human-readable comment that explains what the issue is and suggests potential remediations. Additionally, CodeGuru finds the most inefficient and unproductive lines of code by creating a profile every five minutes that takes into account things like latency and processor utilization.
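The profiling side can be pictured as a sampling exercise: tally where execution time is being spent and surface the hottest spots. This toy sketch uses invented sample data and is not CodeGuru's actual mechanism.

```python
# Toy sketch of how a sampling profiler ranks "expensive" code:
# count which function each sample landed in, then sort by frequency.
from collections import Counter

# Pretend stack samples collected over one profiling window.
samples = ["serialize", "serialize", "db_query", "serialize", "render"]

hotspots = Counter(samples).most_common()
print(hotspots)  # 'serialize' dominates, so it's the first optimization target
```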

It’s a two-part system. CodeGuru Reviewer, which uses a combination of rule mining and supervised machine learning models, detects deviations from best practices in the use of AWS APIs and SDKs, flagging common defects that can cause production issues, such as missing pagination, mishandled errors in batch operations, and use of classes that aren’t thread-safe. CodeGuru Profiler, meanwhile, provides specific recommendations on issues like excessive recreation of expensive objects, costly deserialization, use of inefficient libraries, and excessive logging.
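The missing-pagination defect mentioned above is easy to demonstrate. The paginated API below is a fake stand-in (not an AWS SDK), but the bug pattern is the real one: code that reads only the first page of results silently drops data.

```python
# Fake paginated API returning 5 items in pages of 2 (hypothetical,
# for illustration; real list APIs return a continuation token the
# same way).

def list_items(page_token=None, page_size=2):
    items = ["a", "b", "c", "d", "e"]
    start = page_token or 0
    page = items[start:start + page_size]
    nxt = start + page_size if start + page_size < len(items) else None
    return {"items": page, "next_token": nxt}

# Buggy pattern a reviewer would flag: only the first page is consumed.
first_page_only = list_items()["items"]

# Correct pattern: loop until no continuation token is returned.
all_items, token = [], None
while True:
    resp = list_items(token)
    all_items += resp["items"]
    token = resp["next_token"]
    if token is None:
        break

print(first_page_only)  # ['a', 'b'] -- three items silently dropped
print(all_items)        # ['a', 'b', 'c', 'd', 'e']
```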

Amazon says that CodeGuru — which encodes AWS’ best practices — has been used internally to optimize 80,000 applications, and that it’s led to tens of millions of dollars in savings. In fact, Amazon claims that some teams were able to improve processor utilization by 325% and cut costs by 39% in just a year.

Amazon also took the wraps off of Contact Lens (in preview) today, a virtual call center product for Amazon Connect that transcribes calls while simultaneously assessing them. It offers a full text transcription and captures things like the sentiment of calls and long periods of silence or agent cross-talk. Plus, it lets managers search the aforementioned transcriptions by keyword for specific phrases and other dimensions, and view dashboards and reports that measure trends over time.
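The kinds of analysis described — keyword search over transcripts and detection of long silences between speaker turns — can be sketched on toy data. All timestamps, speakers, and text below are invented; this is an illustration of the concept, not Contact Lens itself.

```python
# Hypothetical call transcript: speaker turns with start/end times
# in seconds (invented data for illustration).
turns = [
    {"speaker": "agent", "start": 0.0, "end": 4.0,
     "text": "How can I help?"},
    {"speaker": "caller", "start": 12.0, "end": 15.0,
     "text": "I want a refund."},
]

def long_silences(turns, threshold=5.0):
    """Return (end, start) gaps between turns longer than threshold."""
    gaps = []
    for prev, cur in zip(turns, turns[1:]):
        if cur["start"] - prev["end"] > threshold:
            gaps.append((prev["end"], cur["start"]))
    return gaps

def search(turns, keyword):
    """Return turns whose text contains the keyword."""
    return [t for t in turns if keyword.lower() in t["text"].lower()]

print(long_silences(turns))      # an 8-second silence between turns
print(search(turns, "refund"))   # keyword hit on the caller's turn
```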

And Amazon launched Kendra (in preview), a new AI-powered service for enterprise search. Once configured through the AWS Console, Kendra leverages connectors to unify and index previously siloed sources of information (from file systems, websites, Box, Dropbox, Salesforce, SharePoint, relational databases, and elsewhere). Customers answer a few questions about their data, optionally provide frequently asked questions (think knowledge bases and support documentation), and let Kendra build an index using natural language processing to identify concepts and their relationships.

Amazon says its models are optimized to understand language from domains like IT, financial services, insurance, pharmaceuticals, industrial manufacturing, oil and gas, legal, media and entertainment, travel and hospitality, health, HR, news, telecommunications, mining, food and beverage, and automotive. In practice, this means an employee can ask a question like “Can I add children as dependents on HMO?” and Kendra would provide answers related to that person’s health care options.
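A natural-language query like the one above maps to a fairly small request. The shape below follows Kendra's Query API, but the index id is a placeholder and the boto3 call is left commented so the snippet runs without AWS credentials.

```python
# Illustrative Kendra query request (index id is a placeholder).
request = {
    "IndexId": "example-index-id",
    "QueryText": "Can I add children as dependents on HMO?",
}

# With an index provisioned and credentials configured:
# import boto3
# response = boto3.client("kendra").query(**request)
# response["ResultItems"] would then hold the ranked answers.
```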

Queries in Kendra can be tested and refined before they’re deployed, and they self-improve over time as the underlying AI algorithms ingest new data. Companies can manually tune relevance, boosting certain fields in an index such as document freshness, view counts, or specific data sources. And the end-user prebuilt web app is designed to be integrated with existing internal apps, with signal-tracking mechanisms that keep tabs on which links users click and which searches they perform to improve the underpinning models.
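Manual relevance tuning of the sort described boils down to reweighting a base relevance score by document attributes. The field names and weights in this sketch are illustrative assumptions, not Kendra's actual scoring model.

```python
# Toy sketch of field boosting: a fresh document gets a multiplicative
# boost, and view counts add a small additive bonus (all weights are
# hypothetical tuning knobs).

def boosted_score(base_score, doc, boosts):
    score = base_score
    if doc.get("fresh") and "freshness" in boosts:
        score *= boosts["freshness"]
    score += boosts.get("view_count", 0) * doc.get("views", 0)
    return score

doc = {"fresh": True, "views": 10}
result = boosted_score(1.0, doc, {"freshness": 2.0, "view_count": 0.01})
print(result)  # 2.1: doubled for freshness, +0.1 for views
```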

Kendra’s preview doesn’t include incremental learning, query auto-completion, custom synonyms, or analytics, Amazon notes. It currently only offers connectors for SharePoint Online, JDBC, and Amazon’s Simple Storage Service (S3), and it’s limited to a maximum of 40,000 queries per day, 100,000 documents indexed, and one index per account.

“There’s no machine-learning expertise required for … these services. They’re just plug and play. You don’t have to get into all the weeds and get the training data and label the data and all those sorts of things,” said AWS vice president for AI services Matt Wood onstage today.

The host of unveilings this afternoon followed on the heels of many others, including that of Amazon SageMaker Studio, a model training and management workflow tool that collects all the code, notebooks, and project folders for machine learning into one place. Amazon also launched S3 Access Points, which let S3 customers assign access policies for apps, and Inf1, a new instance type for AI inference.