For audio versions: Podcast RSS Feed

Podcast version also available on iTunes

Remember to subscribe to the AWS Podcast too!

Keynotes

Keynote: Andy Jassy Andy Jassy, CEO of Amazon Web Services, delivers his AWS re:Invent 2017 keynote, featuring the latest news and announcements, including the launches of Amazon Elastic Container Service for Kubernetes (Amazon EKS), AWS Fargate, Aurora Multi-Master, Aurora Serverless, DynamoDB Global Tables, Amazon Neptune, S3 Select, Amazon SageMaker, AWS DeepLens, Amazon Rekognition Video, Amazon Kinesis Video Streams, Amazon Transcribe, Amazon Translate, Amazon Comprehend, AWS IoT 1-Click, AWS IoT Device Management, AWS IoT Device Defender, AWS IoT Analytics, Amazon FreeRTOS, and AWS Greengrass ML Inference. Guest speakers include Dr. Matt Wood, of AWS; Roy Joseph, of Goldman Sachs; Mark Okerstrom, of Expedia; and Michelle McKenna-Doyle, of the NFL. 02:39:43

Keynote: Werner Vogels Watch Werner Vogels deliver his AWS re:Invent 2017 keynote, featuring the launches of Alexa for Business, AWS Cloud9, new AWS Lambda features, and the AWS Serverless Application Repository. 02:51:16

Keynote: Tuesday Night Live with Peter DeSantis Watch Peter DeSantis, VP, AWS Global Infrastructure, in the Tuesday Night Live keynote, featuring Brian Mathews, of Autodesk, and Greg Peters, of Netflix. Sessions recommended at the end of this keynote are:

CMP215: Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Machine Learning

CMP218: AWS Compute: What's New in Amazon EC2, Containers and Serverless

CMP330: NEW LAUNCH! Amazon EC2 Bare Metal Instances

CMP332: C5 Instances and the Evolution of Amazon EC2 Virtualization

NET204: NEW LAUNCH! AWS PrivateLink: Bringing SaaS Solutions into Your VPCs and Your On-Premises Networks

NET205: Networking State of the Union

NET304: Deep Dive into the New Network Load Balancer

NET310: NEW LAUNCH! AWS PrivateLink Deep Dive

Also available as a YouTube playlist. 01:49:13

Analytics & Big Data

ABD201: Big Data Architectural Patterns and Best Practices on AWS In this session, we simplify big data processing as a data bus comprising various stages: collect, store, process, analyze, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost. 00:59:55
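The five-stage data bus this abstract describes can be sketched as a toy pipeline. This is purely illustrative: the stage names follow the session, while the sample data and the AWS service mappings in the comments are assumptions, not part of the talk.

```python
# Minimal illustration of the ABD201 "data bus" stages:
# collect -> store -> process -> analyze -> visualize.
# All stage implementations are toy stand-ins, not AWS services.

records = [{"user": "a", "ms": 120}, {"user": "b", "ms": 340}, {"user": "a", "ms": 90}]

def collect(source):          # e.g. Amazon Kinesis in a real architecture
    yield from source

def store(events):            # e.g. Amazon S3
    return list(events)

def process(stored):          # e.g. Amazon EMR / AWS Glue
    return [e for e in stored if e["ms"] < 300]      # drop outliers

def analyze(clean):           # e.g. Amazon Athena / Amazon Redshift
    per_user = {}
    for e in clean:
        per_user.setdefault(e["user"], []).append(e["ms"])
    return {u: sum(v) / len(v) for u, v in per_user.items()}

def visualize(metrics):       # e.g. Amazon QuickSight
    return [f"{u}: {avg:.0f} ms" for u, avg in sorted(metrics.items())]

result = visualize(analyze(process(store(collect(records)))))
print(result)                 # ['a: 105 ms']
```

Each stage's technology choice is then driven by the criteria the session lists (data structure, latency, cost, volume, and so on).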

ABD202: Best Practices for Building Serverless Big Data Applications Serverless technologies let you build and scale applications and services rapidly without the need to provision or manage servers. In this session, we show you how to incorporate serverless concepts into your big data architectures. We explore the concepts behind and benefits of serverless architectures for big data, looking at design patterns to ingest, store, process, and visualize your data. Along the way, we explain when and how you can use serverless technologies to streamline data processing, minimize infrastructure management, and improve agility and robustness, and we share a reference architecture using a combination of cloud and open source technologies to solve your big data problems. Topics include: use cases and best practices for serverless big data applications; leveraging AWS technologies such as Amazon DynamoDB, Amazon S3, Amazon Kinesis, AWS Lambda, Amazon Athena, and Amazon EMR; and serverless ETL, event processing, ad hoc analysis, and real-time analytics. 00:41:29

ABD203: Real-Time Streaming Applications on AWS: Use Cases and Patterns To win in the marketplace and provide differentiated customer experiences, businesses need to be able to use live data in real time to facilitate fast decision making. In this session, you learn common streaming data processing use cases and architectures. First, we give an overview of streaming data and AWS streaming data capabilities. Next, we look at a few customer examples and their real-time streaming applications. Finally, we walk through common architectures and design patterns of top streaming data use cases. 00:49:39

ABD205: Taking a Page Out of Ivy Tech's Book: Using Data for Student Success Data speaks. Discover how Ivy Tech, the nation's largest singly accredited community college, uses AWS to gather, analyze, and take action on student behavioral data for the betterment of over 3,100 students. This session outlines the process from inception to implementation across the state of Indiana and highlights how Ivy Tech's model can be applied to your own complex business problems. 00:59:35

ABD206: Building Visualizations and Dashboards with Amazon QuickSight Just as a picture is worth a thousand words, a visual is worth a thousand data points. A key aspect of our ability to gain insights from our data is to look for patterns, and these patterns are often not evident when we simply look at data in tables. The right visualization will help you gain a deeper understanding in a much quicker timeframe. In this session, we will show you how to quickly and easily visualize your data using Amazon QuickSight. We will show you how you can connect to data sources, generate custom metrics and calculations, create comprehensive business dashboards with various chart types, and set up filters and drill-downs to slice and dice the data. 00:54:37

ABD207: Leveraging AWS to Fight Financial Crime and Protect National Security Banks aren't known to share data and collaborate with one another. But that is exactly what the Mid-Sized Bank Coalition of America (MBCA) is doing to fight digital financial crime—and protect national security. Using the AWS Cloud, the MBCA developed a shared data analytics utility that processes terabytes of non-competitive customer account, transaction, and government risk data. The intelligence produced from the data helps banks increase the efficiency of their operations, cut labor and operating costs, and reduce false positive volumes. The collective intelligence also allows greater enforcement of Anti-Money Laundering (AML) regulations by helping members detect internal risks—and identify the challenges to detecting these risks in the first place. This session demonstrates how the AWS Cloud supports the MBCA to deliver advanced data analytics, provide consistent operating models across financial institutions, reduce costs, and strengthen national security. Session sponsored by Accenture

ABD208: Cox Automotive Empowered to Scale with Splunk Cloud & AWS and Explores New Innovation with Amazon Kinesis Firehose In this session, learn how Cox Automotive is using Splunk Cloud for real-time visibility into its AWS and hybrid environments to achieve near-instantaneous MTTI, reduce auction incidents by 90%, and proactively predict outages. We also introduce a highly anticipated capability that allows you to ingest, transform, and analyze data in real time using Splunk and Amazon Kinesis Firehose to gain valuable insights from your cloud resources. It's now quicker and easier than ever to gain access to analytics-driven infrastructure monitoring using Splunk Enterprise & Splunk Cloud. Session sponsored by Splunk 00:58:53

ABD209: Accelerating the Speed of Innovation with a Data Sciences Data & Analytics Hub at Takeda Historically, silos of data, analytics, and processes across functions, stages of development, and geography created a barrier to R&D efficiency. Gathering the right data necessary for decision-making was challenging due to issues of accessibility, trust, and timeliness. In this session, learn how Takeda is undergoing a transformation in R&D to increase the speed-to-market of high-impact therapies to improve patient lives. The Data and Analytics Hub was built, with Deloitte, to address these issues and support the efficient generation of data insights for functions such as clinical operations, clinical development, medical affairs, portfolio management, and R&D finance. In the AWS hosted data lake, this data is processed, integrated, and made available to business end users through data visualization interfaces, and to data scientists through direct connectivity. Learn how Takeda has achieved significant time reductions—from weeks to minutes—to gather and provision data that has the potential to reduce cycle times in drug development. The hub also enables more efficient operations and alignment to achieve product goals through cross-functional team accountability and collaboration due to the ability to access the same cross-domain data. Session sponsored by Deloitte 00:48:42

ABD210: Modernizing Amtrak: Serverless Solution for Real-Time Data Capabilities As the nation's only high-speed intercity passenger rail provider, Amtrak needs to know critical information to run their business such as: Who's onboard any train at any time? How are booking and revenue trending? Amtrak was faced with unpredictable and often slow response times from existing databases, ranging from seconds to hours; existing booking and revenue dashboards were spreadsheet-based and manual; multiple copies of data were stored in different repositories, lacking integration and consistency; and operations and maintenance (O&M) costs were relatively high. Join us as we demonstrate how Deloitte and Amtrak successfully went live with a cloud-native operational database and analytical datamart for near-real-time reporting in under six months. We highlight the specific challenges and the modernization of architecture on an AWS native Platform as a Service (PaaS) solution. The solution includes cloud-native components such as AWS Lambda for microservices, Amazon Kinesis and AWS Data Pipeline for moving data, Amazon S3 for storage, Amazon DynamoDB for a managed NoSQL database service, and Amazon Redshift for near-real-time reports and dashboards. Deloitte's solution enabled “at scale” processing of 1 million transactions/day and up to 2K transactions/minute. It provided flexibility and scalability, largely eliminated the need for system management, and dramatically reduced operating costs. Moreover, it laid the groundwork for decommissioning legacy systems, anticipated to save at least $1M over 3 years. Session sponsored by Deloitte 00:59:07

ABD211: Sysco Foods: A Journey from Too Much Data to Curated Insights In this session, we detail Sysco's journey from a company focused on hindsight-based reporting to one focused on insights and foresight. For this shift, Sysco moved from multiple data warehouses to an AWS ecosystem, including Amazon Redshift, Amazon EMR, AWS Data Pipeline, and more. As the team at Sysco worked with Tableau, they gained agile insight across their business. Learn how Sysco decided to use AWS, how they scaled, and how they became more strategic with the AWS ecosystem and Tableau. Session sponsored by Tableau 00:58:33

ABD212: SAP HANA: The Foundation of SAP's Digital Core Learn how customers are leveraging AWS to better position their enterprises for the digital transformation journey. In this session, you hear about: operations and process; the SAP transformation journey, including architecting, migrating, and running SAP on AWS; complete automation and management of the AWS layer using AWS native services; and a customer example. We also discuss the challenges of migration to the cloud and a managed services environment; the benefits to the customer of the new operating model; and lessons learned. By the end of the session, you understand why you should consider AWS for your next SAP platform, how to get there when you are ready, and some best practices to manage your SAP systems on AWS. Session sponsored by DXC Technology 00:52:41

ABD213: How to Build a Data Lake with AWS Glue Data Catalog As data volumes grow and customers store more data on AWS, they often have valuable data that is not easily discoverable and available for analytics. The AWS Glue Data Catalog provides a central view of your data lake, making data readily available for analytics. We introduce key features of the AWS Glue Data Catalog and its use cases. Learn how crawlers can automatically discover your data, extract relevant metadata, and add it as table definitions to the AWS Glue Data Catalog. We will also explore the integration between AWS Glue Data Catalog and Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum.

ABD214: Real-time User Insights for Mobile and Web Applications with Amazon Pinpoint With customers demanding relevant and real-time experiences across a range of devices, digital businesses are looking to gather user data at scale, understand this data, and respond to customer needs instantly. This requires tools that can record large volumes of user data in a structured fashion, and then instantly make this data available to generate insights. In this session, we demonstrate how you can use Amazon Pinpoint to capture user data in a structured yet flexible manner. Further, we demonstrate how this data can be set up for instant consumption using services like Amazon Kinesis Firehose and Amazon Redshift. We walk through example data based on real-world scenarios, to illustrate how Amazon Pinpoint lets you easily organize millions of events, record them in real time, and store them for further analysis. 00:57:45

ABD216: NEW LAUNCH! Introducing Amazon Kinesis Video Streams Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), and other processing. In this session, we introduce Kinesis Video Streams and its key features, and review common use cases including smart home, smart city, industrial automation, and computer vision. We also discuss how you can use the Kinesis Video Streams parser library to work with the output of video streams to power popular deep learning frameworks. Lastly, Abeja, a leading Japanese artificial intelligence (AI) solutions provider, talks about how they built a deep-learning system for the retail industry using Kinesis Video Streams to deliver a better shopping experience. 00:50:28

ABD217: From Batch to Streaming: How Amazon Flex Uses Real-time Analytics to Deliver Packages on Time Reducing the time to get actionable insights from data is important to all businesses, and customers who employ batch data analytics tools are exploring the benefits of streaming analytics. Learn best practices to extend your architecture from data warehouses and databases to real-time solutions. Learn how to use Amazon Kinesis to get real-time data insights and integrate them with Amazon Aurora, Amazon RDS, Amazon Redshift, and Amazon S3. The Amazon Flex team describes how they used streaming analytics in their Amazon Flex mobile app, used by Amazon delivery drivers to deliver millions of packages each month on time. They discuss the architecture that enabled the move from a batch processing system to a real-time system, overcoming the challenges of migrating existing batch data to streaming data, and how to benefit from real-time analytics. 01:00:49

ABD218: How EuroLeague Basketball Uses IoT Analytics to Engage Fans IoT and big data have made their way out of industrial applications, general automation, and consumer goods, and are now a valuable tool for improving consumer engagement across a number of industries, including media, entertainment, and sports. The low cost and ease of implementation of AWS analytics services and AWS IoT have allowed AGT, a leader in IoT, to develop their IoTA analytics platform. Using IoTA, AGT brought a tailored solution to EuroLeague Basketball for real-time content production and fan engagement during the 2017-18 season. In this session, we take a deep dive into how this solution is architected for secure, scalable, and highly performant data collection from athletes, coaches, and fans. We also talk about how the data is transformed into insights and integrated into a content generation pipeline. Lastly, we demonstrate how this solution can be easily adapted for other industries and applications. 00:57:19

ABD222: How to Confidently Unleash Data to Meet the Needs of Your Entire Organization Where are you on the spectrum of IT leaders? Are you confident that you're providing the technology and solutions that consistently meet or exceed the needs of your internal customers? Do your peers at the executive table see you as an innovative technology leader? Innovative IT leaders understand the value of getting data and analytics directly into the hands of decision makers, and into their own. In this session, Daren Thayne, Domo's Chief Technology Officer, shares how innovative IT leaders are helping drive a culture change at their organizations. See how transformative it can be to have real-time access to all of the data that is relevant to YOUR job (including a complete view of your entire AWS environment), as well as understand how it can help you lead the way in applying that same pattern throughout your entire company. Session sponsored by Domo 00:46:50

ABD223: IT Innovators: New Technology for Leveraging Data to Enable Agility, Innovation, and Business Optimization Companies of all sizes are looking for technology to efficiently leverage data and their existing IT investments to stay competitive and understand where to find new growth. Regardless of where companies are in their data-driven journey, they face greater demands for information by customers, prospects, partners, vendors, and employees. All stakeholders inside and outside the organization want information on-demand or in “real time”, available anywhere on any device. They want to use it to optimize business outcomes without having to rely on complex software tools or human gatekeepers to relevant information. Learn how IT innovators at companies such as MasterCard, Jefferson Health, and TELUS are using Domo's Business Cloud to help their organizations more effectively leverage data at scale. Session sponsored by Domo 00:44:04

ABD301: Analyzing Streaming Data in Real Time with Amazon Kinesis Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. In this session, we present an end-to-end streaming data solution using Kinesis Streams for data ingestion, Kinesis Analytics for real-time processing, and Kinesis Firehose for persistence. We review in detail how to write SQL queries using streaming data and discuss best practices to optimize and monitor your Kinesis Analytics applications. Lastly, we discuss how to estimate the cost of the entire system. 00:49:08
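The streaming SQL this session walks through typically aggregates over tumbling windows. As a self-contained illustration of that idea (plain Python with invented timestamps and keys, not a real Kinesis stream or the Kinesis Analytics engine), a tumbling-window count can be sketched as:

```python
# Local sketch of a tumbling-window aggregation, the pattern Kinesis
# Analytics expresses in streaming SQL. Window size and events are
# invented for illustration.
from collections import defaultdict

def tumbling_counts(events, window_seconds=10):
    """Count events per (window_start, key) — analogous to grouping a
    stream by key and a floored timestamp in streaming SQL."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

stream = [(1, "GET"), (3, "GET"), (9, "PUT"), (12, "GET"), (19, "GET")]
print(tumbling_counts(stream))
# {(0, 'GET'): 2, (0, 'PUT'): 1, (10, 'GET'): 2}
```

In the real pipeline, Kinesis Streams would supply the events, Kinesis Analytics would run the window continuously, and Kinesis Firehose would persist each window's results.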

ABD302: Real-Time Data Exploration and Analytics with Amazon Elasticsearch Service and Kibana In this session, we use Apache web logs as an example and show you how to build an end-to-end analytics solution. First, we cover how to configure an Amazon ES cluster and ingest data using Amazon Kinesis Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data. Then we demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we review approaches for generating custom, ad hoc reports. 00:46:30

ABD303: Developing an Insights Platform – Sysco's Journey from Disparate Systems to Data Lake and Beyond Sysco has nearly 200 operating companies across its multiple lines of business throughout the United States, Canada, Central/South America, and Europe. As the global leader in food services, Sysco identified the need to streamline the collection, transformation, and presentation of data produced by the distributed units and systems, into a central data ecosystem. Sysco's Business Intelligence and Analytics team addressed these requirements by creating a data lake with scalable analytics and query engines leveraging AWS services. In this session, Sysco will outline their journey from a hindsight-reporting-focused company to an insights-driven organization. They will cover solution architecture, challenges, and lessons learned from deploying a self-service insights platform. They will also walk through the design patterns they used and how they designed the solution to provide predictive analytics using Amazon Redshift Spectrum, Amazon S3, Amazon EMR, AWS Glue, Amazon Elasticsearch Service, and other AWS services. 01:00:27

ABD304: Best Practices for Data Warehousing with Amazon Redshift & Redshift Spectrum Most companies are overrun with data, yet they lack critical insights to make timely and accurate business decisions. They are missing the opportunity to combine large amounts of new, unstructured big data that resides outside their data warehouse with trusted, structured data inside their data warehouse. In this session, we take an in-depth look at how modern data warehousing blends and analyzes all your data, inside and outside your data warehouse without moving the data, to give you deeper insights to run your business. We will cover best practices on how to design optimal schemas, load data efficiently, and optimize your queries to deliver high throughput and performance. 00:49:44

ABD305: Design Patterns and Best Practices for Data Analytics with Amazon EMR Amazon EMR is one of the largest Hadoop operators in the world, enabling customers to run ETL, machine learning, real-time processing, data science, and low-latency SQL at petabyte scale. In this session, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters, and other Amazon EMR architectural best practices. We talk about lowering cost with Auto Scaling and Spot Instances, and security best practices for encryption and fine-grained access control. Finally, we dive into some of our recent launches to keep you current on our latest features. 00:48:33

ABD307: Deep Analytics for Global AWS Marketing Organization To meet the needs of the global marketing organization, the AWS marketing analytics team built a scalable platform that allows the data science team to deliver custom econometric and machine learning models for end user self-service. To meet data security standards, we use end-to-end data encryption and different AWS services such as Amazon Redshift, Amazon RDS, Amazon S3, Amazon EMR with Apache Spark and Auto Scaling. In this session, you see real examples of how we have scaled and automated critical analysis, such as calculating the impact of marketing programs like re:Invent and prioritizing leads for our sales teams. 00:33:47

ABD309: How Twilio Scaled Its Data-Driven Culture As a leading cloud communications platform, Twilio has always been strongly data-driven. But as headcount and data volumes grew—and grew quickly—they faced many new challenges. One-off, static reports work when you're a small startup, but how do you support a growth stage company to a successful IPO and beyond? Today, Twilio's data team relies on AWS and Looker to provide data access to 700 colleagues. Departments have the data they need to make decisions, and cloud-based scale means they get answers fast. Data delivers real-business value at Twilio, providing a 360-degree view of their customer, product, and business. In this session, you hear firsthand stories directly from the Twilio data team and learn real-world tips for fostering a truly data-driven culture at scale. Session sponsored by Looker 01:00:59

ABD310: How FINRA Secures Its Big Data and Data Science Platform on AWS FINRA uses big data and data science technologies to detect fraud, market manipulation, and insider trading across US capital markets. As a financial regulator, FINRA analyzes highly sensitive data, so information security is critical. Learn how FINRA secures its Amazon S3 Data Lake and its data science platform on Amazon EMR and Amazon Redshift, while empowering data scientists with tools they need to be effective. In addition, FINRA shares AWS security best practices, covering topics such as AMI updates, micro segmentation, encryption, key management, logging, identity and access management, and compliance. 01:01:44

ABD311: Deploying Business Analytics at Enterprise Scale with Amazon QuickSight One of the biggest tradeoffs customers usually make when deploying BI solutions at scale is agility versus governance. Large-scale BI implementations with the right governance structure can take months to design and deploy. In this session, learn how you can avoid making this tradeoff using Amazon QuickSight. Learn how to easily deploy Amazon QuickSight to thousands of users using Active Directory and Federated SSO, while securely accessing your data sources in Amazon VPCs or on-premises. We also cover how to control access to your datasets, implement row-level security, create scheduled email reports, and audit access to your data. 00:44:31

ABD312: Deep Dive: Migrating Big Data Workloads to AWS Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premises deployments to AWS in order to save costs, increase availability, and improve performance. AWS offers a broad set of analytics services, including solutions for batch processing, stream processing, machine learning, data workflow orchestration, and data warehousing. This session will focus on identifying the components and workflows in your current environment, and on providing the best practices to migrate these workloads to the right AWS data analytics product. We will cover services such as Amazon EMR, Amazon Athena, Amazon Redshift, Amazon Kinesis, and more. We will also feature Vanguard, an American investment management company based in Malvern, Pennsylvania, with over $4.4 trillion in assets under management. Ritesh Shah, Sr. Program Manager for Cloud Analytics Program at Vanguard, will describe how they orchestrated their migration to AWS analytics services, including Hadoop and Spark workloads to Amazon EMR. Ritesh will highlight the technical challenges they faced and overcame along the way, as well as share common recommendations and tuning tips to accelerate the time to production. 00:35:21

ABD315: Building Serverless ETL Pipelines with AWS Glue Organizations need to gain insight and knowledge from a growing number of data sources, including Internet of Things (IoT) devices, APIs, clickstreams, and unstructured and log data. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce key ETL features of AWS Glue and cover common use cases ranging from scheduled nightly data warehouse loads to near-real-time, event-driven ETL flows for your data lake. We discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Additionally, Merck will share how they built an end-to-end ETL pipeline for their application release management system, and launched it in production in less than a week using AWS Glue. 00:50:52

ABD316: American Heart Association: Finding Cures to Heart Disease Through the Power of Technology Combining disparate datasets and making them accessible to data scientists and researchers is a prevalent challenge for many organizations, not just in healthcare research. The American Heart Association (AHA) has built a data science platform using Amazon EMR, Amazon Elasticsearch Service, and other AWS services that corrals multiple datasets and enables advanced research on phenotype and genotype datasets, aimed at curing heart disease. In this session, we present how AHA built this platform and the key challenges they addressed with the solution. We also provide a demo of the platform, and leave you with suggestions and next steps so you can build similar solutions for your use cases. 00:52:45

ABD318: Architecting a data lake with Amazon S3, Amazon Kinesis, and Amazon Athena Learn how to architect a data lake where different teams within your organization can publish and consume data in a self-service manner. As organizations aim to become more data-driven, data engineering teams have to build architectures that can cater to the needs of diverse users - from developers, to business analysts, to data scientists. Each of these user groups employs different tools, has different data needs, and accesses data in different ways. In this talk, we will dive deep into assembling a data lake using Amazon S3, Amazon Kinesis, Amazon Athena, Amazon EMR, and AWS Glue. The session will feature Mohit Rao, Architect and Integration lead at Atlassian, the maker of products such as JIRA, Confluence, and Stride. First, we will look at a couple of common architectures for building a data lake. Then we will show how Atlassian built a self-service data lake, where any team within the company can publish a dataset to be consumed by a broad set of users. 01:03:29

ABD319: Tooling Up for Efficiency: DIY Solutions @ Netflix At Netflix, we have traditionally approached cloud efficiency from a human standpoint, whether it be in-person meetings with the largest service teams or manually flipping reservations. Over time, we realized that these manual processes are not scalable as the business continues to grow. Therefore, in the past year, we have focused on building out tools that allow us to make more insightful, data-driven decisions around capacity and efficiency. In this session, we discuss the DIY applications, dashboards, and processes we built to help with capacity and efficiency. We start at the ten-thousand-foot view to understand the unique business and cloud problems that drove us to create these products, and discuss implementation details, including the challenges encountered along the way. Tools discussed include Picsou, the successor to our AWS billing file cost analyzer; Libra, an easy-to-use reservation conversion application; and cost and efficiency dashboards that relay useful financial context to 50+ engineering teams and managers. 00:59:03

ABD320: Netflix Keystone SPaaS: Real-time Stream Processing as a Service Over 100 million subscribers from over 190 countries enjoy the Netflix service. This leads to over a trillion events, amounting to 3 PB, flowing through the Keystone infrastructure to help improve customer experience and glean business insights. The self-serve Keystone stream processing service processes these messages in near real-time with at-least-once semantics in the cloud. This enables users to focus on extracting insights, and not worry about building out scalable infrastructure. In this session, I share the benefits and our experience building the platform. 00:59:22

ABD327: Migrating Your Traditional Data Warehouse to a Modern Data Lake In this session, we discuss the latest features of Amazon Redshift and Redshift Spectrum, and take a deep dive into its architecture and inner workings. We share many of the recent availability, performance, and management enhancements and how they improve your end user experience. You also hear from 21st Century Fox, who presents a case study of their fast migration from an on-premises data warehouse to Amazon Redshift. Learn how they are expanding their data warehouse to a data lake that encompasses multiple data sources and data formats. This architecture helps them tie together siloed business units and get actionable 360-degree insights across their consumer base. 00:43:35

ABD329: A Look Under the Hood – How Amazon.com Uses AWS Services for Analytics at Massive Scale Amazon's consumer business continues to grow, and so does the volume of data and the number and complexity of the analytics done in support of the business. In this session, we talk about how Amazon.com uses AWS technologies to build a scalable environment for data and analytics. We look at how Amazon is evolving the world of data warehousing with a combination of a data lake and parallel, scalable compute engines such as Amazon EMR and Amazon Redshift. 01:01:53

ABD330: Combining Batch and Stream Processing to Get the Best of Both Worlds Today, many architects and developers are looking to build solutions that integrate batch and real-time data processing, and deliver the best of both approaches. Lambda architecture (not to be confused with the AWS Lambda service) is a design pattern that leverages both batch and real-time processing within a single solution to meet the latency, accuracy, and throughput requirements of big data use cases. Come join us for a discussion on how to implement Lambda architecture (batch, speed, and serving layers) and best practices for data processing, loading, and performance tuning. 00:22:02
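The three layers named above fit together as follows: the batch layer periodically recomputes a complete but stale view, the speed layer maintains a fresh but partial view, and the serving layer merges the two at query time. A minimal sketch of that merge (field names are illustrative):

```python
# Lambda-architecture serving layer sketch: combine a (stale, complete)
# batch view with a (fresh, partial) speed-layer view.
def merged_view(batch_counts, speed_counts):
    # The batch view covers everything up to the last batch run;
    # the speed layer covers events that arrived since then.
    merged = dict(batch_counts)
    for key, delta in speed_counts.items():
        merged[key] = merged.get(key, 0) + delta
    return merged

batch = {"page_a": 100, "page_b": 40}   # recomputed hourly or daily
speed = {"page_a": 3, "page_c": 1}      # streamed since the last batch run
view = merged_view(batch, speed)
```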

ABD331: Log Analytics at Expedia Using Amazon Elasticsearch Service Expedia uses Amazon Elasticsearch Service (Amazon ES) for a variety of mission-critical use cases, ranging from log aggregation to application monitoring and pricing optimization. In this session, the Expedia team reviews how they use Amazon ES and Kibana to analyze and visualize Docker startup logs, AWS CloudTrail data, and application metrics. They share best practices for architecting a scalable, secure log analytics solution using Amazon ES, so you can add new data sources almost effortlessly and get insights quickly. 00:40:04

ABD335: Real-Time Anomaly Detection Using Amazon Kinesis Amazon Kinesis Analytics offers a built-in machine learning algorithm that you can use to easily detect anomalies in your VPC network traffic and improve security monitoring. Join us for an interactive discussion on how to stream your VPC Flow Logs to Amazon Kinesis Streams and identify anomalies using Kinesis Analytics. 00:35:48
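The built-in algorithm Kinesis Analytics exposes for this is the RANDOM_CUT_FOREST SQL function. As a much simpler stand-in that illustrates the same idea of scoring each record against recent history, here is a rolling z-score detector (window size and threshold are arbitrary choices, not a recommendation):

```python
from collections import deque
from statistics import mean, stdev

def anomaly_scores(values, window=5, threshold=3.0):
    """Flag points that deviate strongly from a trailing window.
    A toy stand-in for streaming anomaly detection, not RANDOM_CUT_FOREST."""
    history = deque(maxlen=window)
    flags = []
    for v in values:
        if len(history) >= 2 and stdev(history) > 0:
            z = abs(v - mean(history)) / stdev(history)
            flags.append(z > threshold)
        else:
            flags.append(False)   # not enough history to score yet
        history.append(v)
    return flags

# Steady traffic of ~10 bytes per record, then a 500-byte spike.
flags = anomaly_scores([10, 11, 9, 10, 12, 500])
```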

ABD337: Making the Shift from DevOps to Practical DevSecOps Agility is the cornerstone of the DevOps movement. Developers are working to continuously integrate and deploy (CI/CD) code to the cloud, to ensure applications are seamlessly updated and current. But what about security? Security best practices and compliance are now the responsibility of everyone in the development lifecycle, and continuous security is a critical component of the ongoing deployment process. Discover how to incorporate security best practices into your current DevOps operations, gain visibility into compliance posture, and identify potential risks and threats in your AWS environment. We demonstrate how to leverage the CIS AWS Foundations Benchmark within Sumo Logic to trigger alerts from your AWS CloudTrail and Amazon CloudWatch Logs when risks or violations occur, such as unauthorized API calls, IAM policy changes, AWS Config configuration changes, and many more. Session sponsored by Sumo Logic 00:48:37

ABD338: MirrorWeb: Powering Large-scale, Full-text Search for the UK Government Web Archives Using Amazon Elasticsearch Service MirrorWeb offers automated website and social media archiving services with full-text search capability for all content. The UK government hired MirrorWeb to provide search services across 20 years of archived data from over 4,800 websites. In this session, MirrorWeb discusses the technology stack they built using Amazon Elasticsearch Service (Amazon ES) to search across the 333 million unique documents (over 120 TB) that they indexed within a 10-hour period. They discuss how they moved data from on-premises to Amazon S3 using AWS Snowball and then processed that data using Amazon EC2 Spot Instances, reducing costs by over 90%. They also talk about how they used AWS Lambda to ingest data into Amazon ES. Finally, they share best practices for building a large-scale document search architecture. 00:36:32

ABD339: Deep Dive and Best Practices for Amazon Athena Amazon Athena is an interactive query service that enables you to process data directly from Amazon S3 without the need for infrastructure. Since its launch at re:Invent 2016, several organizations have adopted Athena as the central tool to process all their data. In this talk, we dive deep into the most common use cases, including working with other AWS services. We review the best practices for creating tables and partitions and performance optimizations. We also dive into how Athena handles security, authorization, and authentication. Lastly, we hear from a customer who has reduced costs and improved time to market by deploying Athena across their organization.
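One of the partitioning best practices referenced here: because Athena charges by data scanned, partitioning tables (typically by date) lets queries prune whole S3 prefixes. A sketch of the Hive-style key layout that Athena's partition discovery understands (prefix and file names are hypothetical):

```python
from datetime import date

def partitioned_key(prefix, dt, filename):
    """Hive-style partition layout (year=/month=/day=) that Athena's
    ALTER TABLE ... ADD PARTITION and MSCK REPAIR TABLE can pick up."""
    return (f"{prefix}/year={dt.year}/month={dt.month:02d}/"
            f"day={dt.day:02d}/{filename}")

# A query filtered on year/month/day then scans only this prefix.
key = partitioned_key("logs", date(2017, 11, 27), "events.json.gz")
```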

ABD401: How Netflix Monitors Applications in Near Real-Time with Amazon Kinesis Thousands of services work in concert to deliver millions of hours of video streams to Netflix customers every day. These applications vary in size, function, and technology, but they all make use of the Netflix network to communicate. Understanding the interactions between these services is a daunting challenge both because of the sheer volume of traffic and the dynamic nature of deployments. In this session, we first discuss why Netflix chose Kinesis Streams to address these challenges at scale. We then dive deep into how Netflix uses Kinesis Streams to enrich network traffic logs and identify usage patterns in real time. Lastly, we cover how Netflix uses this system to build comprehensive dependency maps, increase network efficiency, and improve failure resiliency. From this session, you'll learn how to build a real-time application monitoring system using network traffic logs and get real-time, actionable insights. 00:48:59

ABD402: How Esri Optimizes Massive Image Archives for Analytics in the Cloud Petabyte-scale archives of satellite, plane, and drone imagery continue to grow exponentially. They mostly exist as semi-structured data, but they are only valuable when accessed and processed by a wide range of products for both visualization and analysis. This session provides an overview of how ArcGIS indexes and structures data so that any part of it can be quickly accessed, processed, and analyzed by reading only the minimum amount of data needed for the task. In this session, we share best practices for structuring and compressing massive datasets in Amazon S3, so they can be analyzed efficiently. We also review a number of different image formats, including GeoTIFF (used for the Public Datasets on AWS program, Landsat on AWS), cloud-optimized GeoTIFF, MRF, and CRF, as well as different compression approaches to show the effect on processing performance. Finally, we provide examples of how this technology has been used to help image processing and analysis for the response to Hurricane Harvey. 00:58:27

ABD403: Best Practices for Distributed Machine Learning and Predictive Analytics Using Amazon EMR and Open-Source Tools In this session, we focus on common use cases and design patterns for predictive analytics using Amazon EMR. We address accessing data from a data lake, extraction and preprocessing with Apache Spark, analytics and machine learning code development with notebooks (Jupyter, Zeppelin), and data visualization using Amazon QuickSight. We cover other operational topics, such as deployment patterns for ad hoc exploration and batch workloads using Spot and multi-user notebooks. The intended audience for this session includes technical users who are building statistical and data analytics models for the business using tools such as Python, R, Spark, Presto, Amazon EMR, and notebooks. 01:16:15

Alexa

ALX201: Building Alexa-Connected Products and Experiences with Alexa Gadgets Developer Tools In this session, we will teach you the technology behind Alexa Gadgets – a new category of connected products and developer tools that enable you to create your own Alexa-connected product or game skill that works with Echo Buttons. You will hear from the GM of Alexa Gadgets, as well as early Alexa Gadget developers, Musicplode Media (the makers of Beat the Intro) and Gemmy Industries (the makers of Big Mouth Billy Bass). 00:36:19

ALX202: Integrate Alexa voice technology into your product with the Alexa Voice Service (AVS) In this session, we'll teach you how to use the Alexa Voice Service (AVS) and its suite of development tools to bring your first Alexa-enabled product to market. You'll learn how commercial device manufacturers are getting to market faster using the new AVS Device SDK. To ensure your customers have the best voice experience, we'll teach you how to choose an Audio Front End and client-side hardware from a range of commercial-grade Development Kits. You'll walk out of this session with the knowledge required to design products with optimized Alexa-enabled voice experiences around your unique design requirements. 00:35:44

ALX203: How Voice Technology Is Moving Higher Education to a New Era In this presentation, hear from John Rome, Arizona State University's Deputy CIO, and Jared Stein, Instructure's VP of Higher Ed Strategy, on how voice technology is bringing higher education to a new era. Come learn how institutions are adopting Alexa on campus and in their curriculum to serve students in new, innovative ways and how Instructure is rethinking the delivery of education for millions of customers through their Canvas skill for Alexa. 00:53:08

ALX204: NEW LAUNCH! Building Alexa Skills for Businesses Alexa for Business makes it possible for businesses to create Alexa skills designed specifically for employees or customers. With Alexa for Business, devices can be managed and provisioned to be used by employees in conference rooms, at employees' desks, or around the workplace. You can also create skills that can be used by customers, in places like hotel rooms, restaurants, hospitality suites, or even stores. In this session, we'll provide an overview of Alexa for Business, and show you how Alexa for Business creates business value for both customers and employees.

ALX303: The Art and Science of Conversation Applied to Alexa Skills It used to be the case that we only spoke to computers in their language. But more and more often, we're interacting with them in ours. We are moving quickly into a world of computer conversation, and one in which, for many applications, the most natural interactions will be through spoken language. But how do you create engaging narrative and compelling, organic conversational interactions using the imprecise tools of speech recognition and intent resolution? In this session, we look at the experience as a whole and take you through key learnings that you can use when building your skills. We cover issues like knowing your audience, creating compelling storylines, using a cast of characters, integrating voiceover, designing a soundscape, and finding those “magic moments”. For each of these, we share the design pattern, the backing AI or physiological science, and how to implement the experience with Alexa. 00:53:17

ALX317: How Capital One Rethought Multimodal Voice Experiences and Brought Banking to the Kitchen with Echo Show Last year, Capital One joined Alexa on stage to talk about their experience building their successful Alexa skill. Since that time, many lessons have been learned through customer feedback and new enhancements to the Alexa Skills Kit (ASK) such as the skills beta testing tool and the Alexa skill builder. How can you evolve your Alexa skill with more meaningful data sets outside of the existing intents? As the Alexa Skills Kit has grown its built-in library, what does it mean for your skill to support both ordinal (list) and numerical values? How can you handle new specifications without requiring wholesale code changes? Capital One has tackled all of these issues, as well as embracing additional programming languages like TypeScript to ensure that response structures are validated against all schemas. With the arrival of multimodal devices such as the Echo Show, the opportunity for seamless customer interaction models across voice and visual has also arrived (big fonts, touch, video). Your customers can now transition back and forth between using their voice and their hands while engaging with your skill. Come learn directly from Capital One about the best ways to provide extra contextual information using the new Alexa Skills Kit display directives, and about more convenient ways to get things done. 01:03:12

ALX318: Voice Plus Screen: How to Design Multi-Modal Devices with the Alexa Voice Service In this advanced session, learn how to build Alexa-enabled devices that combine voice and visual responses in a meaningful way for consumers. The session covers the design methods and the hardware and software development resources for interactive multi-modal design. We also present some examples of products that are leading with such implementations. 00:21:18

ALX319: It's All in the Data: The Machine Learning Behind Alexa's AI Systems Garbage in, garbage out. The quality of all machine learning solutions depends on the data used in training. Alexa developers are able to use advanced natural language understanding capabilities like built-in slot and intent training, entity resolution, and dialog management. This utterance data behind your skills is the most important contributor to the voice input experience. This session discusses how utterance data is processed by our systems, and what you can do as a developer to improve accuracy. 00:48:24

ALX320: The Science behind the Alexa Prize: Meeting the AI Challenges In this session, scientists from the Alexa team explore and discuss some of the AI challenges behind the Alexa Prize. Learn about the challenges of Automatic Speech Recognition (ASR), Natural Language Understanding (NLU), and conversational interaction through stories from the founding members of the team that also built Amazon Echo and Alexa. We'll address the early difficulties of designing the algorithms for noise reduction for close-talk, near-field, and far-field Alexa devices, and the methods and frameworks used for ASR, NLU, and conversational interaction. 00:48:17

ALX321: Alexa State of the Science Join us for the Golden Age of AI. The way that humans interact with machines is at an inflection point and conversational artificial intelligence (AI) is at the center of the transformation. Learn how Amazon is using machine learning and cloud computing to help fuel innovation in AI, making Alexa smarter every day. Alexa VP and Head Scientist Rohit Prasad presents the state of the science behind Amazon Alexa. He addresses advances in spoken language understanding and machine learning in Alexa, and shares how Amazon thinks about building the next generation of user experiences. He will announce the inaugural winner of the Alexa Prize and award the winning student team a check for $500,000. 00:49:39

ALX322: Natural Language Processing Plus Natural Language Generation: The Cutting Edge of Voice Design Your Alexa skill could become the voice of your company to customers. How do you make sure that it conveys rich information, delivered with your brand's personality? In this session, Adam Long, VP of Product Management at Automated Insights, discusses natural language generation (NLG) techniques and how to make your Alexa response more insightful and engaging. Rob McCauley, Solutions Architect with Amazon Alexa, shows you how to put those techniques into action. 00:39:09

ALX324: Alexa State of the Union: Amazon's Vision for Alexa and Voice Join Alexa SVP Tom Taylor as we cover the state of the Alexa business, describe some early challenges, and share how we are approaching emerging trends. Voice experiences have transformed the way that customers interact with the world around them. We will introduce new capabilities to help developers better address opportunities in devices, the smart home, and voice. You will leave with an understanding of the vision behind Alexa that ties together the deep dives going on throughout re:Invent. 00:52:49

ALX325: Now Hear This: How Earplay Architects an Alexa Radio Drama This session covers the technical and design challenges that the Earplay team overcame when they built their highly engaging Alexa experience. Leave this session with an understanding of how to use the Alexa Service, AWS Lambda, Amazon DynamoDB, SSML, and testing tools to deliver similar experiences to your customers.

ALX326: Applying Alexa's Natural Language To Your Challenges In this session, we will give you a complete picture of all the tools and techniques required to build complex, production-quality Alexa skills. You will leave this session knowing how to use Alexa's dialog management, entity resolution, and slot elicitation capabilities as well as how to process the results through a microservice with AWS Lambda. 01:03:22

ALX328: Smart Devices Everywhere In this session, we cover Alexa's reach into smart devices integration, both inside and outside the home. Learn how your product can become part of the Alexa smart devices family and how you can easily bring Alexa to your business or home.

Automotive & Manufacturing

AMF301: Big Data & Analytics for Manufacturing Operations Manufacturing companies collect vast troves of process data for tracking purposes. Using this data with advanced analytics can optimize operations, saving time and money. In this session, we explore the latest analytics capabilities to support your goals for optimizing the manufacturing plant floor. Learn how to build dashboards that connect to prediction models driven by sensors across manufacturing processes. Learn how to build a data lake on AWS, using services and techniques such as AWS CloudFormation, Amazon EC2, Amazon S3, AWS Identity and Access Management, and AWS Lambda. We also review a reference architecture that supports data ingestion, event rules, analytics, and the use of machine learning for manufacturing analytics. 00:47:04

AMF302: Alexa, Where's My Car? A Test Drive of the AWS Connected Car Reference Today's trends in auto technology are all about connecting cars and their occupants to the outside world in a seamless and safe manner. In this session, we discuss how automotive companies are leveraging AWS for a variety of connected vehicle use cases. You'll leave this session with source code, architecture diagrams, and an understanding of how to apply the AWS Connected Vehicle Reference Architecture to build your own prototypes. You'll also learn how car companies can leverage Amazon services such as Alexa and AWS services such as AWS IoT, AWS Greengrass, AWS Lambda, and Amazon API Gateway to rapidly develop and deploy innovative connected vehicle services. 00:53:36

AMF304: Optimizing Design and Engineering Performance in the Cloud for Manufacturing Manufacturing companies in all sectors—including automotive, aerospace, semiconductor, and industrial manufacturing—rely on design and engineering software in their product development processes. These computationally intensive applications—such as computer-aided design and engineering (CAD and CAE), electronic design automation (EDA), and other performance-critical applications—require immense scale and orchestration to meet the demands of today's manufacturing requirements. In this session, you learn how to achieve the maximum possible performance and throughput from design and engineering workloads running on Amazon EC2, elastic GPUs, and managed services such as AWS Batch and Amazon AppStream 2.0. We demonstrate specific optimization techniques and share samples on how to accelerate batch and interactive workloads on AWS. We also demonstrate how to extend and migrate on-premises, high performance compute workloads with AWS, and use a combination of On-Demand Instances, Reserved Instances, and Spot Instances to minimize costs. 00:55:28

AMF305: Autonomous Driving Algorithm Development on Amazon AI Over the next decade, autonomous driving technologies—including advances in artificial intelligence, sensors, cameras, radar, and data analytics—are set to transform how we commute. In this session, you learn how to use Amazon AI for a highly productive, on-demand, and scalable autonomous driving development environment. We compare the most popular AI frameworks, including TensorFlow and MXNet, for use in autonomous driving workloads. You learn about the AWS optimizations on MXNet that yield near-linear scalability for training deep neural networks and convolutional neural networks. We demonstrate the ease of getting started on AWS AI by using a sample training dataset to build an object detection model on AWS. This session is intended for audiences who have some exposure to the underlying concepts of AI-based autonomous driving development. After attending the session, you can get started with AI development on AWS by using a sample dataset for building an object detection model. 00:40:52

Architecture

ARC201: Scaling Up to Your First 10 Million Users Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from one to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud. 00:46:40

ARC205: Born in the Cloud, Built like a Startup This presentation compares three modern architecture patterns that startups are building their businesses around. It includes a realistic analysis of cost, team management, and security implications of each approach. It covers AWS Elastic Beanstalk, Amazon ECS, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, and Amazon CloudFront. Attendees will also hear from venture capital investor Third Rock Ventures (TRV), which has launched 40+ biotech startups over the last 10 years. TRV will outline how it launches cloud-native startups that turn bleeding-edge science into new treatments across the spectrum of disease, with highlights drawn from Relay Therapeutics and Tango Therapeutics. 00:50:31

ARC206: Disney's Magic: The Story of Cloud Transformation Creating a comprehensive, accelerated cloud strategy for a complex or federated organization requires a disciplined approach—one that balances the need for centralized governance with the opportunity to innovate across all engineering segments within the enterprise. In this session, we follow the Walt Disney Company's journey to create an initial cloud value hypothesis and cloud business case, and then develop a structured approach towards cloud migrations and a "cloud-first" operating model. Attendees learn more about the key implications, risks, and considerations of the company's cloud transformation program; see examples of reference architectures and implementation guides; and understand the required activities that contributed to the success of the program. The patterns presented are broadly applicable to complex organizations with global aspirations to make the journey to the Cloud. Session sponsored by Accenture

ARC207: Monitoring Performance of Enterprise Applications on AWS: Understanding the Dynamic Nature of Cloud Computing Applications running in a typical data center are static entities. But applications aren't static in the cloud. Dynamic scaling and resource allocation is the norm on AWS. Technologies such as Amazon EC2, AWS Lambda, and Auto Scaling provide flexibility in building dynamic applications and with this flexibility comes an opportunity to learn how an enterprise application functions optimally. New Relic helps manage these applications without sacrificing simplicity. In this session, we discuss changes in monitoring dynamic cloud resources. We'll share best practices we've learned working with New Relic customers on managing applications running in this environment to understand and optimize how they are performing. Session sponsored by New Relic 00:56:44

ARC208: Walking the Tightrope: Balancing Innovation, Reliability, Security, and Efficiency on the Cloud At Netflix, we make explicit tradeoffs to balance our four key focus domains of innovation, reliability, security, and efficiency to ensure our customers, shareholders, and internal engineering stakeholders are happy. In this talk, learn the strategies behind each of our focus domains to optimize for one without detracting from another. 00:46:21

ARC209: A Day in the Life of a Netflix Engineer III Netflix is a large, ever-changing ecosystem serving millions of customers across the globe through cloud-based systems and a globally distributed CDN. This entertaining romp through the tech stack serves as an introduction to how we think about and design systems, the Netflix approach to operational challenges, and how other organizations can apply our thought processes and technologies. In this session, we discuss the technologies used to run a global streaming company, scaling at scale, billions of metrics, benefits of chaos in production, and how culture affects your velocity and uptime. 00:46:36

ARC210: Building Scalable Multitenant Email Sending Programs with Amazon Simple Email Service Many companies use Amazon Simple Email Service (Amazon SES) to build applications that enable their users to send millions of emails every day. In this session, you learn how to build applications using the scalable, reliable Amazon SES infrastructure. You also learn how to monitor email sending and enforce compliance rules on individual accounts without impacting other accounts. Zendesk discusses the architecture of its multitenant email sending platform, the historical challenges it faced, its phased approach to platform migration, and the ways Amazon SES helped them meet their goals. 00:47:35

ARC213: Open Source at AWS—Contributions, Support, and Engagement Startups and enterprises are increasingly using open source projects for architectures. AWS customers and partners also run their own open source programs and contribute key technologies to the industry (see DCS201). At AWS, we engage with open source projects in several ways: through bug fixes and enhancements to popular projects, including work with the Hadoop ecosystem (see BDM401), Chromium (see BAP305), and Boto, and through standalone projects like the security library s2n (see NET405) and the machine learning project MXNet (see MAC401). We have services like Amazon ECS for Docker (see CON316) and Amazon RDS for MySQL and PostgreSQL (see DAT305) that make open source easier to use. In this session, learn more about existing AWS open source work and our next steps. 00:45:14

ARC217: Self-Service Analytics with AWS Big Data and Tableau As one of the thought leaders in Expedia's cloud migration, the Expedia Global Payments Business Intelligence group architected, designed, and built a complete cloud data mart solution from the ground up using AWS and Tableau Online. In this session, we will discuss our business challenge, the journey to the solution, the high-level technical architecture (using S3, EMR, data pipelines, Redshift, and Tableau Online), and lessons learned along the way, including best practices and optimization methods. Session sponsored by Tableau 01:01:04

ARC219: Digital Transformation Many industries are going through a digital transformation as their existing business models are being disrupted and new competitors emerge. The key driver is a need for faster time-to-value, as a direct relationship with customers provides analytics that drive personalization and rapid product development. There's a cultural aspect to the change, as well as new organizational patterns that go along with a migration to cloud native services. Application architectures are evolving from monoliths to microservices and serverless deployments, and they are becoming more distributed, highly available, and resilient. The highly automated practices that have built up around DevOps are moving to the mainstream, and some new techniques are emerging around security red teams and chaos engineering. 01:02:15

ARC303: Running Lean Architectures: How to Optimize for Cost Efficiency Whether you're a cash-strapped startup or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers. We cover how to effectively combine Amazon EC2 On-Demand, Reserved, and Spot Instances to handle different use cases; leveraging Auto Scaling to match capacity to workload; and choosing the optimal instance type through load testing. We discuss taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely by going serverless. Even if you already enjoy the benefits of serverless architectures, we show you how to select the optimal AWS Lambda memory class and how to maximize networking throughput in order to minimize Lambda run-time and therefore execution cost. We also showcase simple tools to help track and manage costs, including Cost Explorer, billing alerts, and AWS Trusted Advisor. This session is your pocket guide for running cost-effectively in the AWS Cloud. 00:57:02
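The memory-class point can be made concrete: Lambda bills compute in GB-seconds, so extra memory costs nothing in dollar terms whenever it speeds the function up proportionally. A rough cost model, using an illustrative per-GB-second price rather than current published pricing, and ignoring per-request charges and billing-increment rounding:

```python
def lambda_cost(memory_mb, duration_s, invocations,
                price_per_gb_s=0.00001667):  # illustrative price, not current
    """Approximate Lambda compute cost: GB-seconds times price."""
    gb_seconds = (memory_mb / 1024) * duration_s * invocations
    return gb_seconds * price_per_gb_s

# A CPU-bound function that halves its runtime when given double the
# memory costs the same in GB-seconds -- but finishes twice as fast.
slow = lambda_cost(memory_mb=512, duration_s=2.0, invocations=1_000_000)
fast = lambda_cost(memory_mb=1024, duration_s=1.0, invocations=1_000_000)
```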

ARC304: From One to Many: Evolving VPC Design As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against evolving design requirements. This session follows this evolution of a single regional VPC into a multi-VPC, multi-region design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, securing private access to Amazon S3, managing multi-tenant VPCs, integrating existing customer networks through AWS Direct Connect, and building a full VPC mesh network across global regions. 01:13:00

ARC306: High Resiliency & Availability of Online Entertainment Communities Using Multiple AWS Regions As online engagement becomes an increasingly popular means of entertainment, a wide range of online communities has flourished. These communities need to be highly available and resilient at scale; a loss of availability can be fatal to the products customers rely on. We share the process you should use to develop architectural principles that allow you to reap the benefits of reduced complexity.

ARC310: Avoiding Groundhog Day - Enabling Transformation on Day 1, 100, or 1000 of your Journey to the Cloud Migrating workloads to the cloud requires detailed planning and execution. When you're an established business with users that rely on your cloud workloads, this can seem like an insurmountable task. A complete migration is a victory often celebrated as the end of the journey, when in reality it is just the first step in a process of continual optimization and evolution. To truly optimize the power of AWS and reap the financial and performance benefits of cloud computing, it is critical that you evaluate your workloads for opportunities to continue to evolve to drive business value and embrace the innovative nature of the AWS Cloud. In this session, join Rackspace to learn about key components to consider in order to execute a successful migration to AWS, the importance of optimizing your AWS environment over time, and how customers are leveraging Rackspace's Fanatical Support for AWS to help them migrate and transform their workloads on AWS. Session sponsored by Rackspace 00:56:52

ARC311: Serverless Encoding at Scale with Content Moderation via Deep Learning-Based Video Analysis With more companies entering the OTT market, AWS sees customer demand for ways to decrease the time it takes to get content into their users' hands, while increasing operational efficiency and lowering IT infrastructure costs. Using deep learning-based image analysis can provide users actionable feedback about the content they view. When combining a new serverless architecture approach using Amazon Elastic Transcoder with AWS' deep learning technology Amazon Rekognition, companies can provide near real-time, on-demand encoding of assets and content moderation. This session covers serverless versus virtualized infrastructure, handling encoding jobs with AWS Lambda, encoding dynamic media assets with Elastic Transcoder (or Elemental), moderating content with Amazon Rekognition, and storing metadata with Amazon DynamoDB. We also provide a demo to test a production-ready serverless encoding architecture. 00:42:06

ARC312: Why Regional Reserved Instances Are a Game Changer for Netflix Learn how Netflix efficiently manages the costs associated with 150K instances spread across multiple regions and heterogeneous workloads. By leveraging internal Netflix tools, the Netflix capacity team is able to provide deep insights into how to optimize our end users' workload placements based on financial and business requirements. In this session, we discuss the efficiency strategies and practices we picked up operating at scale on AWS since 2011, along with best practices used at Netflix. Because many of our strategies revolve around Reserved Instances, we focus on the evolution of our Reserved Instance strategy and the recent changes after the launch of regional reservations. Regional Reserved Instances provide tremendous financial flexibility by being agnostic to instance size and Availability Zone. However, it's anything but simple to adopt regional Reserved Instances in an environment with over 1,000 services that have varying degrees of criticality combined with a global failover strategy. 00:49:40
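The size flexibility the session describes works through normalization factors: a regional Reserved Instance is a pool of "normalized units" that any size within the family can draw from. A minimal sketch, using the normalization factors AWS publishes (the coverage helper and running-fleet shape are illustrative, not from the talk):

```python
# Normalization factors for size-flexible regional RIs (from AWS docs):
# one unit of "small", scaling by powers of two through the sizes.
NORMALIZATION = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32,
}

def normalized_units(instance_type, count=1):
    """Total normalized units for `count` instances of e.g. 'm5.large'."""
    size = instance_type.split(".")[1]
    return NORMALIZATION[size] * count

def ri_coverage(ri_type, ri_count, running):
    """Fraction of running normalized units covered by the reservation."""
    reserved = normalized_units(ri_type, ri_count)
    used = sum(normalized_units(t, n) for t, n in running.items())
    return min(1.0, reserved / used) if used else 1.0

# One m5.4xlarge RI (32 units) fully covers four m5.xlarge (4 * 8 units).
coverage = ri_coverage("m5.4xlarge", 1, {"m5.xlarge": 4})
```

This is what makes regional RIs agnostic to instance size: the reservation applies against whatever mix of sizes in the family happens to be running.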

ARC313: Exploring Blockchain Technology, Risks, and Emerging Trends Blockchain has become a hot topic for enterprises, start-ups, entrepreneurs, and regulatory bodies. Born from bitcoin in 2008, blockchain's promise of a distributed ledger has far greater implications than cryptocurrency. Companies are now beginning to understand its disruptive potential and are experimenting with its most promising applications. But, few companies have asked the more fundamental question: Are we ready to adopt a shared public database for financial transactions? In this session, we cover the concepts of blockchain and use cases in the enterprise. We also demonstrate blockchain in use and show how to implement it using AWS services.

ARC314: Bringing the Superpower of Bots to Your Company with a Serverless Bot Solution Built on AWS Bots are leading the next disruptive wave of how people and companies communicate. Companies can use bots for internal communications, such as facilities management or support, or for external communications, such as selling products, helping customers with searches, and acting as a trusted advisor in other ways. In this session, we show how easy it is to deploy a bot and how it improves customer interactions. Further, most bot solutions operate with a single language. We show how to build a language-agnostic bot solution using AWS Lambda and other AWS services. 00:46:01

ARC315: The Enterprise Fast Lane - What Your Competition Doesn't Want You to Know about Enterprise Cloud Transformation Fed up with stop and go in your data center? Shift into overdrive and pull into the fast lane! Learn how AutoScout24, the largest online car marketplace Europe-wide, is building its Autobahn in the Cloud. The secret ingredient? Culture! Because “cloud” is only half of the digital transformation story. The other half is how your organization deals with cultural change as you transition from the old world of IT into building microservices on AWS, with agile DevOps teams in a true “you build it, you run it” fashion. Listen to stories from the trenches, powered by Amazon Kinesis, Amazon DynamoDB, AWS Lambda, Amazon ECS, Amazon API Gateway and much more, backed by AWS Partners, AWS Professional Services, and AWS Enterprise Support. Learn how to become cloud native, evolve your architecture, drive cultural change across teams, and manage your company's transformation for the future. 00:56:18

ARC316: Getting from Here to There: A Journey from On-premises to Serverless Architecture In this session, go on a journey from traditional, on-premises applications and architecture to pure cloud-native environments. This transformative approach highlights the steps required to incrementally move to AWS technologies while increasing resiliency and efficiency and reducing operational overhead. We challenge traditional understanding and show you how different types of workloads can be migrated using real-world examples. Additionally, we demonstrate how you can assemble and use the AWS building blocks available today to bolster your success and position yourself to inherit the power of our managed services, such as Amazon API Gateway, AWS Lambda, Amazon Cognito, Amazon S3, Amazon Simple Queue Service (SQS), Amazon SNS and our AWS CodeStar suite. You leave this session armed with the knowledge you need to begin your own voyage towards serverless architecture. 00:54:50

ARC317: Application Performance Management on AWS Cloud is the new normal, and organizations are deploying different types of workloads on AWS. Understanding the performance efficiency and overall application performance is critical to ensuring that you can scale your workload to meet the demands of your customers. Understanding how well your application performs over time helps you to continuously improve and innovate your software to get the most out of the AWS platform. If you aren't measuring custom application metrics, you are operating your software blindly and cannot pinpoint areas of improvement. Learn how to use Amazon CloudWatch custom metrics, alerts, dashboards, and AWS X-Ray to architect an application monitoring service that provides insight into your workload's performance. 00:57:27
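Publishing a custom application metric comes down to shaping a PutMetricData request for CloudWatch. A minimal sketch (the namespace, metric name, and dimension are illustrative; a real service would hand the payload to `boto3.client("cloudwatch").put_metric_data(**payload)`):

```python
import datetime

def checkout_latency_metric(milliseconds, endpoint):
    """Build a PutMetricData payload for one custom latency sample."""
    return {
        # Custom namespaces must not begin with the reserved "AWS/" prefix.
        "Namespace": "MyApp/Checkout",
        "MetricData": [{
            "MetricName": "Latency",
            "Dimensions": [{"Name": "Endpoint", "Value": endpoint}],
            "Timestamp": datetime.datetime.now(datetime.timezone.utc),
            "Value": milliseconds,
            "Unit": "Milliseconds",
        }],
    }

payload = checkout_latency_metric(182.0, "/cart/confirm")
```

Once metrics like this flow in, the dashboards and alerts the session mentions can be built directly on the `MyApp/Checkout` namespace.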

ARC318: Building .NET-based Serverless Architectures and Running .NET Core Microservices in Docker Containers on AWS In this session, we first look at common approaches to refactoring common legacy .NET applications to microservices and AWS serverless architectures. We also look at modern approaches to .NET-based architectures on AWS. We then elaborate on running .NET Core microservices in Docker containers natively on Linux in AWS while examining the use of the AWS SDK and the .NET Core platform. We also look at the use of various AWS services, such as Amazon SNS, Amazon SQS, Amazon Kinesis, and Amazon DynamoDB, which provide the backbone of the platform. For example, Experian Consumer Services runs a large ecommerce platform that is now cloud-based on AWS. We look at how they went from a monolithic platform to microservices, primarily in .NET Core. With a heavy push to move to Java and open source, we look at the development process, which started in the beta days of .NET Core, and how the direction Microsoft was taking allowed them to use existing C# skills while pushing themselves to innovate on AWS. The large, single team of Windows-based developers was broken down into several small teams to allow for rapid development in an all-Linux environment. 01:01:22

ARC319: How to Design a Multi-Region Active-Active Architecture Many customers want a disaster recovery environment, and they want to use this environment daily and know that it's in sync with and can support a production workload. This leads them to an active-active architecture. In other cases, users like Netflix and Lyft are distributed over large geographies. In these cases, multi-region active-active deployments are not optional. Designing these architectures is more complicated than it appears, as data being generated at one end needs to be synced with data at the other end. There are also consistency issues to consider. One needs to make trade-off decisions on cost, performance, and consistency. Further complicating matters, the variety of data stores used in the architecture results in a variety of replication methods. In this session, we explore how to design an active-active multi-region architecture using AWS services, including Amazon Route 53, Amazon RDS multi-region replication, AWS DMS, and Amazon DynamoDB Streams. We discuss the challenges, trade-offs, and solutions. 01:19:39
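The consistency trade-off the session describes has to be resolved concretely when replicas in two regions accept writes to the same key. A minimal sketch of one common resolution strategy, last-writer-wins on a per-write timestamp (the data shapes here are illustrative; its known weakness, that concurrent writes within clock skew can be silently lost, is exactly the kind of trade-off discussed):

```python
import time

def put(store, key, value, ts=None):
    """Write a (value, timestamp) version; newer timestamps win."""
    ts = time.time() if ts is None else ts
    current = store.get(key)
    if current is None or ts >= current[1]:
        store[key] = (value, ts)

def merge(region_a, region_b):
    """Reconcile two regional replicas into one converged view."""
    merged = {}
    for store in (region_a, region_b):
        for key, (value, ts) in store.items():
            put(merged, key, value, ts)
    return merged

us_east, eu_west = {}, {}
put(us_east, "user:42", {"plan": "basic"}, ts=100.0)
put(eu_west, "user:42", {"plan": "premium"}, ts=105.0)  # later write
converged = merge(us_east, eu_west)  # the premium write wins everywhere
```

DynamoDB Streams based replication in this era used essentially this policy; choosing it over stronger coordination is a deliberate cost/performance/consistency trade-off.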

ARC320: Reinforcement Learning – The Ultimate AI Reinforcement learning (RL) can be used to solve real-world problems in robotics and conversational engines without supervision. AI algorithms that observe their surroundings and learn are considered to be the ultimate forms of AI. RL shines in multi-agent scenarios where each agent reacts in real time to the changing situation. In this session, we explain RL, the theory, and the algorithms used. We show an MXNet-based demo that automatically learns to play a game: an agent powered by MXNet takes actions to win. Initially, the agent makes very little progress, but after a few dozen iterations, it can play the game better than any human being. You can generalize this to real-world problems. RL is used today in robotics, gaming, autonomous vehicle control, spoken language systems, and more. In this talk, I will be using Amazon EC2 P2 instances, the AWS Deep Learning AMI, the MXNet deep learning framework, Amazon EBS, and Amazon S3. 01:00:00
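The learn-by-trial loop the demo illustrates can be shown at toy scale without MXNet. A minimal sketch: tabular Q-learning on an invented corridor "game" (states 0 to 4, reward for reaching the end); the environment, learning rates, and episode count are illustrative choices, not from the talk:

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = (1, -1)                  # move right, move left
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge toward reward plus discounted best future
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy marches straight toward the goal.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
```

The session's point holds even here: early episodes wander, and after a few dozen iterations the learned policy is optimal for the game.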

ARC321: Models of Availability When engineering teams take on a new project, they often optimize for performance, availability, or fault tolerance. More experienced teams can optimize for these variables simultaneously. Netflix adds an additional variable: feature velocity. Most companies try to optimize for feature velocity through process improvements and engineering hierarchy, but Netflix optimizes for feature velocity through explicit architectural decisions. Mental models of approaching availability help us understand the tension between these engineering variables. For example, understanding the distinction between accidental complexity and essential complexity can help you decide whether to invest engineering effort into simplifying your stack or expanding the surface area of functional output. The Chaos team and the Traffic team interact with other teams at Netflix under an assumption of Essential Complexity. Incident remediation, approaches to automation, and diversity of engineering can all be understood through the perspective of these mental models. With insight and diligence, these models can be applied to improve availability over time and drift into success. 00:56:16

ARC329: Optimizing Performance and Efficiency for Amazon EC2 and More with Turbonomic Every day, systems architects and cloud architects have to size cloud workloads for performance and efficiency. Do you choose T2, C3, C4, M3, or something else for your Amazon Elastic Compute Cloud (Amazon EC2) instance type? Do you need more CPUs, memory, or both? What about distributed applications across regions and Availability Zones? How do IT teams determine the right instance family and size for AWS workloads? Turbonomic solves these challenges with you. Their real-time hybrid cloud management platform can ensure that your workloads get the right resources in real time to assure performance across the compute, storage, network, application, and database layers of AWS, and across your hybrid cloud infrastructure. Get a crash course in understanding workload performance characteristics, and how Turbonomic matches workloads to AWS resources to assure real-time, efficient performance for your AWS environment, with the ability to fully automate these processes. Whether you're new to the platform or a regular user of Amazon EC2, learn to take the guesswork out of what makes each Amazon EC2 instance family unique and appropriate for your business and technical requirements. Session sponsored by Turbonomic, Inc. 00:58:42

ARC330: How the BBC Built a Massive Media Pipeline Using Microservices The BBC iPlayer is the biggest audio and video-on-demand service in the UK. Over one-third of the country submits 10 million video playback requests every day, and the service publishes over 10,000 hours of media every week. Moving iPlayer to the cloud has enabled the BBC to shorten the time-to-market of content from 10 hours to 15 minutes. In this session, the BBC's lead architect describes the approach behind creating iPlayer architecture, which uses Amazon SQS and Amazon SNS in several ways to improve elasticity, reliability, and maintainability. You see how BBC uses AWS messaging to choreograph the 200 microservices in the iPlayer pipeline, maintain data consistency as media traverses the pipeline, and refresh caches to ensure timely delivery of media to users. This is a rare opportunity to see the internal workings and best practices of one of the largest on-demand content delivery systems operating today. 00:52:46

ARC331: How I Made My Motorbike Talk, or How to Mix Amazon Lex, AWS Lambda, and IoT to Give Life to Everyday Objects This talk includes a story and a recipe. The story is about a nerd who bought his first motorbike, got a license for it, and started hacking to make it interact and talk, all in two months. The recipe is a technical one that explains how to use Amazon Lex and AWS Lambda to quickly prototype and deploy a serverless chatbot connected with an embedded device in order to realize an Internet of Things (IoT) application. We discuss how you can integrate your IoT application with Amazon Lex using AWS Lambda and Amazon API Gateway, how to exchange session data to have a contextual conversation, and how to provide a successful bot experience. Expect to leave this session knowing how to build, deploy, and publish a bot, and how to attach it to an IoT device—with the potential to bring to life any object that surrounds you. 00:44:19

ARC401: Serverless Architectural Patterns and Best Practices As serverless architectures become more popular, customers need a framework of patterns to help them identify how they can leverage AWS to deploy their workloads without managing servers or operating systems. This session describes reusable serverless patterns while considering costs. For each pattern, we provide operational and security best practices and discuss potential pitfalls and nuances. We also discuss the considerations for moving an existing server-based workload to a serverless architecture. The patterns use services like AWS Lambda, Amazon API Gateway, Amazon Kinesis Streams, Amazon Kinesis Analytics, Amazon DynamoDB, Amazon S3, AWS Step Functions, AWS Config, AWS X-Ray, and Amazon Athena. This session can help you recognize candidates for serverless architectures in your own organizations and understand areas of potential savings and increased agility. What's new in 2017: using X-Ray in Lambda for tracing and operational insight; a pattern on high performance computing (HPC) using Lambda at scale; how a query can be achieved using Athena; Step Functions as a way to handle orchestration for both the Automation and Batch patterns; a pattern for Security Automation using AWS Config rules to detect and automatically remediate violations of security standards; how to validate API parameters in API Gateway to protect your API back-ends; and a solid focus on CI/CD development pipelines for serverless, which includes testing, deploying, and versioning (SAM tools). 00:57:47
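One of the 2017 additions above is validating API parameters to protect your backends. The session covers doing this in API Gateway itself (request validators and models); the sketch below shows the same check performed defensively inside a Lambda proxy-integration handler. The `ratio` parameter and its bounds are invented for the example:

```python
import json

def handler(event, context=None):
    """Lambda proxy-integration handler that rejects bad input early."""
    params = event.get("queryStringParameters") or {}
    try:
        ratio = float(params.get("ratio", ""))
    except ValueError:
        return {"statusCode": 400,
                "body": json.dumps({"error": "ratio must be a number"})}
    if not 0.0 <= ratio <= 1.0:
        return {"statusCode": 400,
                "body": json.dumps({"error": "ratio must be in [0, 1]"})}
    # Only validated input reaches the (here trivial) backend logic.
    return {"statusCode": 200, "body": json.dumps({"scaled": ratio * 100})}

ok = handler({"queryStringParameters": {"ratio": "0.25"}})
bad = handler({"queryStringParameters": {"ratio": "lots"}})
```

Pushing the validation up into API Gateway has the extra benefit that invalid requests never invoke (or bill) the function at all.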

ARC402: Architectural Patterns and Best Practices with VMware Cloud on AWS The recent launch of VMware Cloud on AWS gives customers new options for addressing several use cases, including cloud migration, hybrid deployments, and disaster recovery. We introduce and describe design patterns for incorporating VMware Cloud on AWS into existing architecture and detail how the service's capabilities can influence future architectural plans. We explore design considerations and nuances for integrating VMware Cloud on AWS Software Defined Data Centers (SDDCs) with native AWS services, enabling you to use each platform's benefits. Architects, system operators, and anyone looking to understand VMware Cloud on AWS will walk away with examples and options for solving challenging use cases with this new, exciting service. 00:56:34

ARC403: Encoding Artifacts to the Oscars: Taking on Terabyte-Scale, 1-Gbps, 4K Video Processing in the Cloud 4K video has resulted in a huge uptick in resource requirements, which is difficult to scale in a traditional environment. The cloud is perfect to handle problems of this scale. However, many unanswered questions remain around best practices and suitable architectures for dealing with massive, high-quality assets. We define problem cases and discuss practical architectural patterns to handle these challenges by using AWS services such as Amazon EC2 (graphical instances), Amazon EMR, Amazon S3, Amazon S3 Transfer Acceleration, Amazon Glacier, AWS Snowball, and magnetic Amazon EBS volumes. The best practices we discuss can also help architects and engineers dealing with non-video data. Also, Amazon Studios presents how, powered by AWS, they solved many of these problems and can create, manage, and distribute Emmy and Oscar Award-winning content. 00:54:58

ARC404: Metering the Hybrid Cloud AWS Metering provides customers with detailed usage information (down to a specific Amazon EC2 instance or Amazon S3 bucket used in a single hour), enabling them to gain deep insights into their utilization of cloud resources. However, this level of transparency is not available across most customers' traditional IT infrastructure, making it difficult to understand what resources are being used, when, and by whom. Join us in this session to learn how to meter, measure, and understand your usage from AWS, on-premises data centers, containers, serverless compute, even other clouds across your IT infrastructure. We show you how to meter your non-AWS resources to make smarter decisions about your business and investment in the cloud. 00:45:52

ARC405: Building a Photorealistic Real-Time 3D Configurator with Server-Side Renderings on AWS WebGL has made great improvements over the past years. However, it still can't provide photorealistic experiences alone. In order to provide products with the best look and feel, we decided to use server-side 3D rendering. In this session, we show you how we built our real-time 3D configurator stack using Amazon EC2 Elastic GPUs, RESTful microservices, Lambda@Edge, Amazon CloudFront and other services. 00:53:20

ARC406: Amazon.com - Replacing 100s of Oracle DBs with Just One: DynamoDB When customers across the globe place orders on Amazon.com, those orders are processed through many different backend systems, including Herd, a workflow-orchestration engine developed by the Amazon eCommerce Foundation team. A mission-critical system used by more than 300 Amazon engineering teams, Herd executes over four billion workflows every day. Beginning in 2013, Herd's workflow traffic was doubling year over year, and scaling its then dozens of horizontally partitioned Oracle databases was becoming a nightmare as their number kept increasing. To support Herd's increasing scaling needs, and to provide a better customer experience, the Herd team had to re-architect its storage system and move its primary data storage from Oracle to Amazon DynamoDB. In this session, we discuss how we moved from Oracle to Amazon DynamoDB, walk through the biggest challenges we faced and how we overcame them, and share the lessons we learned along the way. 00:56:40

ARC407: Deconstructing SaaS: A Deep Dive into Building Multi-tenant Solutions on AWS SaaS presents developers with a unique blend of architectural challenges. While the concepts of multi-tenancy are straightforward, the reality of making all the moving parts work together can be daunting. In this session, we move beyond the conceptual bits of SaaS and look under the hood of a SaaS application. Our goal is to examine the fundamentals of identity, data partitioning, and tenant isolation through the lens of a working solution and to highlight the challenges and strategies associated with building a next-generation SaaS application on AWS. We look at the full lifecycle of registering new tenants, applying security policies to prevent cross-tenant access, and leveraging tenant profiles to effectively distribute and partition tenant data. We intend to connect many of the conceptual dots of a SaaS implementation, highlighting the tradeoffs and considerations that can shape your approach to SaaS architecture. 00:56:31

AdTech

ATC301: 1 Million bids in 100ms – using AWS to power your Real Time Bidder Real-time bidding applications are designed for very high scale and performance. A typical RTB deployment needs to be designed to handle at least a million queries per second with TP99 query processing latency of 25 ms. In this session, we feature Bidder-as-a-Service™ by Beeswax and discover how AWS enables their core technology. We will begin by examining the end-to-end architecture of a real-time bidder application on AWS. Next, we will talk about the challenges and best practices for implementing a durable and high-performing system. Finally, we will conclude the talk with some recommendations on minimizing infrastructure cost while operating an RTB platform at a very large scale. 00:49:38

ATC302: How to Leverage AWS Machine Learning Services to Analyze and Optimize your Google DoubleClick Campaign Manager Data at Scale In this session, you'll learn how AdTech companies use AWS services like AWS Glue, Amazon Athena, Amazon QuickSight, and Amazon EMR to analyze Google DoubleClick Campaign Manager data at scale without the burden of infrastructure or worries about server maintenance. We'll live-process a clickstream so you can see how machine learning can help maximize your revenue by finding the optimal path of a campaign, and we'll look at a real-world demo from A9's Advertising Science Team of how they use the data to build look-alike models in their projects. 00:45:48

ATC303: Cache Me If You Can: Minimizing Latency While Optimizing Cost Through Advanced Caching Strategies From CloudFront to ElastiCache to DynamoDB Accelerator (DAX), this is your one-stop shop for learning how to apply caching methods to your AdTech workload: What data to cache and why? What are common side effects and pitfalls when caching? What is negative caching and how can it help you maximize your cache hit rate? How to use DynamoDB Accelerator in practice? How can you ensure that data always stays current in your cache? These and many more topics will be discussed in depth during this talk and we'll share lessons learned from Team Internet, the leading provider in domain monetization. 00:58:46
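Of the questions the session poses, negative caching is the least obvious: caching the fact that a lookup found nothing (usually with a shorter TTL) so repeated misses don't hammer the origin. A minimal in-memory sketch of the idea; the TTL values, sentinel, and ad-lookup loader are illustrative choices, not from the talk:

```python
import time

_MISS = object()  # sentinel: distinguishes a cached "not found" from "not cached"

class TTLCache:
    def __init__(self, ttl=60.0, negative_ttl=5.0):
        self.ttl, self.negative_ttl = ttl, negative_ttl
        self._data = {}  # key -> (value_or_MISS, expires_at)

    def get(self, key, loader):
        entry = self._data.get(key)
        if entry and entry[1] > time.monotonic():
            value = entry[0]
            return None if value is _MISS else value  # hit (maybe negative)
        value = loader(key)                           # miss: ask the origin
        ttl = self.negative_ttl if value is None else self.ttl
        self._data[key] = (_MISS if value is None else value,
                           time.monotonic() + ttl)
        return value

calls = []
def origin(key):
    calls.append(key)
    return {"ad": "banner-1"} if key == "campaign:1" else None

cache = TTLCache()
cache.get("campaign:404", origin)  # origin lookup, miss cached negatively
cache.get("campaign:404", origin)  # negative hit, no second origin call
```

The shorter negative TTL is the key design choice: it protects the origin from repeated misses while letting newly created keys become visible quickly.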

ATC304: RFID (Really Freaking Indispensable and Decisive) Advertising Interested in learning how to integrate the Internet of Things into your advertising platform and combine it with AWS Greengrass, AWS Lambda, Amazon DynamoDB, and Amazon API Gateway to send context-aware advertisements to users at the point of buying? In this session, Mobiquity, the leader in digital engagements servicing the world's top brands, and their Innovation Partner Flomio discuss how they've been able to use AWS to create compelling digital experiences for their clients. We deep-dive on the technology behind Mobiquity's innovative shopping system that uses RFID, Bluetooth, captive Wi-Fi, and a mobile app to provide real-time context for understanding how and where your customers interact with your products and services, allowing you to better tailor your ads to their particular preferences. 00:38:42

Business Apps

BAP201: KAR Auction Services' Journey To The Cloud You want to build something innovative. You want to deliver applications in a flexible and agile environment. Most of all, you want to embrace the performance, efficiency, and cost benefits of cloud services. Sounds amazing, but many still struggle with the challenges of getting there. KAR Auction Services, together with its subsidiaries, has embraced a cloud-native approach to providing services in a quick, innovative, and simplified way. Their latest greenfield project? Build an end-to-end vehicle auction website on the AWS Cloud. Join Capgemini, AWS and Gary Watkins, chief information officer for KAR Auction Services' IT Shared Services department, to hear real-life examples on how to get started, how to overcome the struggles, and how to take advantage of the cloud for added benefits. Session sponsored by Capgemini

BAP202: Amazon Connect Delivers Personalized Customer Experiences for Your Cloud-Based Contact Center Join us for an overview and demonstration of Amazon Connect, a self-service, cloud-based contact center based on the same technology used by Amazon customer service associates worldwide to power millions of conversations. The self-service graphical interface in Amazon Connect makes it easy to design contact flows for self and assisted call-handling experiences, manage agents, and track performance metrics – no specialized skills required. In this session, you will hear from Capital One and T-Mobile on how they are using Amazon Connect to provide their customers with dynamic, natural, and personalized experiences. See how quickly you can get started with Amazon Connect and build your contact center. 00:55:29

BAP203: Secure File Collaboration and Management, Simplified with Amazon WorkDocs The rate at which employees collaborate and create content continues to grow. With this, organizations are challenged to make collaboration easy, keep file management simple, and maintain a secure and compliant environment. Amazon WorkDocs is a fully managed, secure collaboration and file management service with rich feedback capabilities, strong administrative controls, and an extensible API. In this session, we demonstrate how you can use Amazon WorkDocs as a full-fledged collaboration tool for users and easily secure and manage files across your organization. 01:08:36

BAP204: How Amazon Is Moving to Amazon Chime Amazon is a global company with over 300,000 employees worldwide. Easy and efficient communication is critical, so earlier this year, we made Amazon Chime available company-wide. Amazon Chime is a modern communications service that runs securely on AWS. It simplifies online meetings, video conferencing, and chats in one straightforward application. In this session, we provide an overview of Amazon Chime and follow with a discussion on how Amazon is rolling out this service. 00:55:55

BAP206: NEW LAUNCH! Bring Alexa to Work! Voice-enable Your Organization with Alexa for Business In this session, we'll introduce you to the voice-enabled workplace, and show you how Alexa can help employees work smarter by acting as their personal digital assistant. We'll also show you how Alexa transforms your conference rooms, and provides a better telephony experience. And we'll talk through how custom voice skills can be used by employees and customers alike. Finally, we'll explain how Alexa for Business allows you to do all this in a scalable and secure way.

BAP301: Bring the Power of AI to Contact Centers Amazon Connect is a cloud-based contact center service that allows you to create dynamic contact flows and personalized caller experiences by using their history and responses to anticipate their needs. Learn how with Amazon Lex, an AI service that allows you to create intelligent conversational “chatbots,” turning your contact flows into natural conversations using the same technology behind Amazon Alexa. Routine tasks such as password resets, order status, and balance inquiries can be automated without an agent. In this session, you will hear from Asurion about their Amazon Connect contact center environment and how they enhanced the customer and agent experience with Amazon Lex. 00:45:34

BAP302: User Self-Service and Admin Portals for Amazon WorkSpaces You've successfully moved your desktops to AWS using Amazon WorkSpaces. Now, you'd like to start automating your operations. In this session, we show you how to use the Amazon WorkSpaces APIs to automate common tasks, such as provisioning and deprovisioning WorkSpaces, building self-service portals to allow your users to perform basic support tasks themselves, and integrating WorkSpace operations into your existing workflow and helpdesk tools. 01:15:49
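The provisioning automation described above ultimately comes down to shaping CreateWorkspaces requests. A minimal sketch of a helper a self-service portal might use; the directory and bundle IDs are placeholders, and a real portal would hand the result to `boto3.client("workspaces").create_workspaces(**req)`:

```python
def build_create_request(directory_id, bundle_id, usernames,
                         running_mode="AUTO_STOP"):
    """Build one CreateWorkspaces request for a batch of users."""
    # The API accepts at most 25 workspace requests per call.
    if not 1 <= len(usernames) <= 25:
        raise ValueError("CreateWorkspaces accepts 1-25 workspaces per call")
    return {
        "Workspaces": [
            {
                "DirectoryId": directory_id,
                "UserName": user,
                "BundleId": bundle_id,
                "WorkspaceProperties": {"RunningMode": running_mode},
            }
            for user in usernames
        ]
    }

# Placeholder IDs; a real portal would look these up per user/region.
req = build_create_request("d-906733251e", "wsb-bh8rsxt14", ["alice", "bob"])
```

Deprovisioning is the mirror image (TerminateWorkspaces on the workspace IDs), and wrapping both behind a portal gives users the basic self-service tasks the session describes.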

BAP303: Migrate Your Desktops to Amazon WorkSpaces Are you tired of maintaining and upgrading the PC infrastructure for your organization? Do you want to provide your users with a fast, fluid desktop that is accessible from anywhere, on any device? With Amazon WorkSpaces, you can do both simultaneously by running your desktops on AWS. In this session, we demonstrate the flexibility of Amazon WorkSpaces and show you how easy it is to get started. We also cover more advanced topics, including using Microsoft Active Directory for end-user management and authentication, and using Amazon WorkSpaces to implement a bring-your-own-device policy. 00:42:04

BAP304: How To Use AWS IoT and Amazon Connect to Drive Proactive Customer Service Learn how to use Amazon Connect with AWS IoT and AWS Lambda to proactively resolve customer issues before they occur. In this session, we show you how to configure an AWS IoT device to proactively place a phone call to a customer using the Amazon Connect API when an impending problem is detected. From there, we demonstrate how Amazon Connect contact flows make the customer interaction personal, more satisfying, and less costly. 00:35:16
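The proactive call described above is one StartOutboundVoiceContact request from the detection path (for example, a Lambda function triggered by an AWS IoT rule). A minimal sketch of shaping that request; the IDs, device fields, and fault code are placeholders, and a real function would pass the result to `boto3.client("connect").start_outbound_voice_contact(**params)`:

```python
def outbound_call_params(phone_number, device_id, fault_code):
    """Build the StartOutboundVoiceContact parameters for one alert call."""
    return {
        "InstanceId": "REPLACE-with-connect-instance-id",
        "ContactFlowId": "REPLACE-with-contact-flow-id",
        "DestinationPhoneNumber": phone_number,
        # Attributes are readable inside the contact flow, which is what
        # lets the call say *which* device has *what* impending problem.
        "Attributes": {
            "deviceId": device_id,
            "faultCode": fault_code,
        },
    }

params = outbound_call_params("+15555550100", "thermostat-7", "E42")
```

The contact flow then branches on those attributes to personalize the interaction, which is the mechanism behind the "personal, more satisfying" calls the session demonstrates.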

BAP308: NEW LAUNCH! Deploying and Managing Voice Skills in your Organization with Alexa for Business With Alexa for Business, your employees and customers can access a variety of different voice skills that relate to your business. Alexa for Business allows you to easily manage where and how these voice skills can be accessed, and by whom. In this session, we'll walk through how you can use Alexa for Business to deploy and manage access to the custom skills you build for your organization. We'll walk through how employees "enroll" to use Alexa at work, and how the permissions model for your voice skills works. This session will include a demo showing the deployment of a pre-built custom skill, and the enrollment process for employees. 00:53:25

BAP309: NEW LAUNCH! Building Smart Conference Rooms with Alexa for Business Alexa for Business allows you to use Alexa-enabled devices to transform your conference rooms. Using simple voice skills, you can control the conference room environment, start online meetings, turn on video projectors, and more. In this session, we'll walk through the Alexa-enabled conference room, and show you how you can use Alexa for Business to specify device locations, connect to conference room calendars, and provide access to meeting-specific skills. 00:48:23

BAP310: Move Your Virtualized Desktop Apps to the Cloud with Amazon AppStream 2.0 In this session, you'll learn how to migrate your virtualized desktop apps to the cloud using Amazon AppStream 2.0, and stream them to a desktop browser. We discuss how to assess your existing virtualized application environment, map to concepts in Amazon AppStream 2.0, and start the planning and architecture process. We demo the building blocks you use to create your AppStream 2.0 environment, and provide tips for achieving the best performance and user experience. 01:01:55

BAP311: Rethink Your Graphics Workstation Strategy with Amazon AppStream 2.0 In this session, we explore how enterprises are rethinking their graphics workstation strategy, and moving their 3D apps to the cloud using Amazon AppStream 2.0. We discuss common use cases for delivering 3D apps to users and how to implement them. You'll learn about the benefits of integration with other AWS resources for driving simulations and storing data, while lowering your costs by avoiding upfront investments, and only paying for what you use. Our guest speaker from Cornell University will share his experience delivering industry-standard simulation tools, such as ANSYS FLUENT, in his courses. We will also demonstrate popular 3D graphics apps running on AppStream 2.0 using the newer graphics design and pro instances. 01:00:53

Compute

CMP201: Auto Scaling: The Fleet Management Solution for Planet Earth Auto Scaling allows cloud resources to scale automatically in reaction to the dynamic needs of customers. This session shows how Auto Scaling offers an advantage to everyone—whether it's basic fleet management to keep instances healthy as an Amazon EC2 best practice, or dynamic scaling to manage extremes. We share examples of how Auto Scaling helps customers of all sizes and industries unlock use cases and value. We also discuss how Auto Scaling is evolving to scale different types of elastic AWS resources beyond EC2 instances. Data Scientist & Principal Investigator Hook Hua, from NASA Jet Propulsion Laboratory (JPL) / California Institute of Technology, will share how Auto Scaling is used to scale science data processing of remote sensing data from earth-observing satellite missions, and to reduce response times during hazard response events such as earthquakes, hurricanes, floods, and volcanoes. JPL will also discuss how they are integrating their science data systems with the AWS ecosystem to expand into NASA's next two large-scale missions with remote-sensing radar-based observations. Learn how Auto Scaling is being used at a global scale – and beyond! 00:50:02
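The dynamic scaling this session covers is driven by target tracking: the group resizes so a per-instance metric (such as average CPU) converges on a target. The core proportional math can be sketched as follows; this is a simplified illustration, not the exact algorithm the Auto Scaling service runs, and the metric values are hypothetical.

```python
import math

def desired_capacity(current_capacity: int, metric_value: float,
                     target_value: float, min_size: int, max_size: int) -> int:
    """Proportional target-tracking sketch: resize the fleet so the
    per-instance metric (e.g. average CPU %) approaches the target,
    clamped to the group's configured min/max bounds."""
    if current_capacity <= 0:
        return min_size
    # Load scales roughly linearly with capacity, so scale capacity by
    # the ratio of observed metric to target, rounding up (scale out
    # eagerly, scale in conservatively).
    proposed = math.ceil(current_capacity * metric_value / target_value)
    return max(min_size, min(max_size, proposed))

# A fleet of 4 instances at 80% average CPU, targeting 50%,
# would scale out to 7 instances.
print(desired_capacity(4, metric_value=80.0, target_value=50.0,
                       min_size=1, max_size=10))  # → 7
```

In practice you would not implement this yourself: you attach a `TargetTrackingConfiguration` scaling policy to the group and the service performs the equivalent calculation against CloudWatch metrics.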

CMP202: Optimizing EC2 for Fun and Profit #bigsavings #newfeatures What if I told you that you could improve your EC2 performance and availability and save money… Interested? Want to learn how to use all the latest functionality including [NEW] EC2 features launched at re:Invent to optimize your spend… How about now? In this session, you'll learn how to seamlessly combine On-Demand, Spot and Reserved Instances, and how to use the best practices deployed by customers all over the world for the most common applications and workloads. After just one hour you'll leave armed with multiple ways to grow your compute capacity and to enable new types of cloud computing applications - without it costing you an arm and a leg. 00:50:39

CMP203: Amazon EC2 Foundations Amazon EC2 provides resizable compute capacity in the cloud and makes web scale computing easier for customers. It offers a wide variety of compute instances well suited to every imaginable use case, from static websites to on-demand high performance supercomputing, all available via highly flexible pricing options. This session covers the latest EC2 features and capabilities, including new instance families available in Amazon EC2, the differences among their hardware types and capabilities, and their optimal use cases. We will also cover best practices for optimizing your EC2 spend to make the most of your instances, saving time and money. 00:53:58

CMP207: High Performance Computing on AWS High-performance computing (HPC) in the cloud enables high scale compute- and graphics-intensive workloads across a range of industries—from aerospace, automotive, and manufacturing to life sciences, financial services, and energy. AWS provides application developers and end users with unprecedented computational power for massively parallel applications in areas such as large-scale fluid and materi