Apple has just released an augmented reality (AR) framework. This will allow millions of app developers to make AR experiences on iOS for hundreds of millions of devices – making ARKit + iOS the biggest AR platform. Even Pokémon GO will use it.

Visual Inertial Odometry

The magic that powers ARKit is called Visual Inertial Odometry – VIO from now on. ARKit uses VIO to accurately track the world around the device and create an engaging experience. VIO combines information from the iOS device’s motion sensing hardware with computer vision analysis of the scene visible to the device’s camera. These two inputs allow the device to sense how it moves within a room with a high degree of accuracy, and without needing to calibrate anything. It just works!

This technology fulfills the basic requirement for any AR experience – and the defining feature of ARKit: the ability to create and track a correspondence between the real world the user inhabits and a virtual space where you model visual content. When your app displays that content together with a live camera image, the user experiences augmented reality: the illusion that your virtual content is part of the real world. Read more about this here: Understanding Augmented Reality – Discover concepts, features, and best practices for building great AR experiences in Apple’s Developer Reference.

Scene Understanding and Lighting Estimation

With ARKit, iPhone and iPad can analyze the scene presented by the camera view and find horizontal planes in the room. ARKit can detect horizontal planes like tables and floors, and can track and place objects on smaller feature points as well. ARKit also makes use of the camera sensor to estimate the total amount of light available in a scene and applies the correct amount of lighting to virtual objects.
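Plane detection and light estimation are both opt-in features of the session configuration. As a minimal sketch (using the shipping class name `ARWorldTrackingConfiguration`; early betas named it slightly differently), enabling them and reading the light estimate might look like this:

```swift
import ARKit

// Configure a session to detect horizontal planes and estimate lighting.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal
configuration.isLightEstimationEnabled = true

let session = ARSession()
session.run(configuration)

// Later, read the estimated scene lighting from the current frame
// and use it to shade your virtual objects.
if let lightEstimate = session.currentFrame?.lightEstimate {
    // ambientIntensity is in lumens; ~1000 corresponds to a well-lit scene.
    print("Ambient intensity: \(lightEstimate.ambientIntensity)")
}
```

If you render with SceneKit via ARSCNView, the view applies the light estimate to your scene automatically.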

Every AR experience built with ARKit requires a single ARSession object. If you use an ARSCNView or ARSKView object to easily build the visual part of your AR experience, the view object includes an ARSession instance. If you build your own renderer for AR content, you’ll need to instantiate and maintain an ARSession object yourself.
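For the common case, a view controller that hosts an ARSCNView only needs to run and pause the view’s built-in session. A minimal sketch (the class and property names here are from the public ARKit and UIKit APIs):

```swift
import UIKit
import ARKit

// A minimal view controller using ARSCNView, which owns its own ARSession.
class ARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // The view's session is created for you; just run a configuration.
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```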

To receive captured video frame images and tracking state from an AR session, you need to adopt the ARSessionDelegate protocol. In essence, this helps you keep track of the real world by giving you ARFrame objects that encapsulate an image of the world plus tracking information, including ARAnchor objects, which represent positions in the real world. One ready-made anchor is ARPlaneAnchor, which you receive when the session detects a plane such as a table or the floor.
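The delegate methods above can be sketched like this (a minimal adoption of ARSessionDelegate; the comments describe what each callback delivers):

```swift
import ARKit

// Observe frames and newly detected anchors from an ARSession.
class SessionObserver: NSObject, ARSessionDelegate {

    // Called every frame with the captured image and tracking data.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // frame.capturedImage is the camera pixel buffer;
        // frame.camera.transform is the device pose in world space.
    }

    // Called when new anchors, such as detected planes, are added.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            if let plane = anchor as? ARPlaneAnchor {
                print("Detected plane with extent \(plane.extent)")
            }
        }
    }
}
```

Assign an instance of this class to `session.delegate` before running the session.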

Read more by exploring the documentation.

High Performance Hardware

Because ARKit runs on the Apple A9 and A10 processors, you get breakthrough performance for fast scene understanding and for building detailed, compelling virtual content on top of real-world scenes.

How to get it?

If you have a paid membership, go here and download Xcode 9 – or request access to Swift Playgrounds 2, which supports Swift 4 and the iOS 11 SDK, including ARKit and access to the camera.