Along with the launch of the all-new Echo Show, the Alexa-powered device with a screen, Amazon also introduced a new design language for developers who want to build voice skills that include multimedia experiences.

Called Alexa Presentation Language, or APL, the new language lets developers build voice-based apps that also include things like images, graphics, slideshows and video, and easily customize them for different device types – including not only the Echo Show, but other Alexa-enabled devices like Fire TV, Fire Tablet, and the small screen of the Alexa alarm clock, the Echo Spot.

In addition, third-party devices with screens will be able to take advantage of APL through the Alexa Smart Screen and TV Device SDK, arriving in the months ahead. Sony and Lenovo will be putting this to use first.

Voice-based skill experiences can sometimes feel limited because of their lack of a visual component. For example, a cooking skill would work better if it just showed the steps as Alexa guided users through them. Other skills could simply benefit from visual cues or other complementary information, like lists of items.

Amazon says it found that Alexa skills that use visual elements are used twice as much as voice-only skills, which is why it wanted to improve the development of these visual experiences.

The new language was built from the ground up specifically for adapting Alexa skills for different screen-based, voice-first experiences.

At launch, APL supports experiences that include text, graphics, and slideshows, with video support coming soon. Developers can do things like sync on-screen text and images with Alexa's speech. Plus, skills built with the new language can accept both voice commands and input through touch or remote controls, where available.

The language is also designed to be flexible in terms of the placement of graphics and other visual elements, so companies can adhere to their brand guidelines, Amazon says. And it's adaptable to many different types of screen-based devices, including those with different screen sizes or varying memory and processing capabilities.

When introducing the new language at an event in Seattle this morning, Amazon said that APL will feel familiar to anyone who's used to working with front-end development, as it adheres to universally understood styling practices and uses similar syntax.

Amazon is also providing sample APL documents to help developers get started, which can be used as-is or can be modified. Developers can choose to build their own from scratch, as well.

These APL documents are JSON files sent from a skill to a device. The device then evaluates the document, imports the images and other data, and renders the experience. Developers can use elements like images, text, scrollviews, pages, sequences, layouts, conditional expressions, speech synchronization, and other commands. Support for video, audio and HTML5 is coming soon.
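To give a sense of what that looks like, here is a minimal sketch of such a JSON document, based on the APL 1.0 format Amazon documented at launch; the image URL and text content are hypothetical, and a real skill would send a document like this to the device inside an `Alexa.Presentation.APL.RenderDocument` directive in its response.

```json
{
  "type": "APL",
  "version": "1.0",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Container",
        "direction": "column",
        "items": [
          {
            "type": "Image",
            "source": "https://example.com/recipe-step-1.png",
            "width": "100vw",
            "height": "70vh"
          },
          {
            "type": "Text",
            "text": "Step 1: Preheat the oven to 350 degrees.",
            "fontSize": "40dp"
          }
        ]
      }
    ]
  }
}
```

The nested `Container` of `Image` and `Text` components hints at the front-end familiarity Amazon describes: layout is composed from typed components much like a component tree in web development.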

“This year alone, customers have interacted with visual skills hundreds of millions of times. You told us you want more design flexibility, in both content and layout, and the ability to optimize experiences for the growing family of Alexa devices with screens,” said Nedim Fresko, VP of Alexa Devices and Developer Technologies, in a statement. “With the Alexa Presentation Language, you can unleash your creativity and build interactive skills that adapt to the unique characteristics of Alexa Smart Screen devices,” he said.

A handful of skills have already put APL to use, including a CNBC skill that shows a graph of stock performance; Big Sky, which shows images to accompany its weather forecasts; NextThere, which lets you view public transit schedules; Kayak, which shows slideshows of travel destinations; Food Network, which shows recipes; and several others.

Alexa device owners will be able to use these APL-powered skills starting next month. The Developer Preview for APL starts today.

Check out our full coverage from the event here.