Today the US Patent & Trademark Office published an Apple patent application that relates to video coding and, more specifically, to systems, methods, and devices that produce a 3-dimensional model of a scene from an RGB-D sensor for Virtual Reality, Augmented Reality and Mixed Reality applications on a possible future headset and other Apple devices.

Apple's invention covers devices, systems, and methods that implement simultaneous localization and mapping for RGB-D sensors, such as RGB-D cameras.

Various implementations presented include devices, systems, and methods that estimate the trajectory of an RGB-D sensor or render a 3D reconstruction of the scene (e.g., ongoing video content) captured by that sensor. In some implementations, such virtual reconstructions can be accessed by, interacted with, or used in combination with virtual reality (VR), mixed reality (MR), and/or augmented reality (AR) applications.

One example implementation involves the device receiving multiple frames of a real-world scene within a camera's field of view at multiple times, each frame including color values and depth values for the pixels in that field of view.

The device selects keyframes from the multiple frames of the real world scene within the field of view of the camera. The keyframes are associated with camera poses defined in a three dimensional (3D) coordinate system.

The device receives a current frame of the real world scene currently within the field of view of the camera. The current frame includes current color values and current depth values for the pixels in the camera's field of view at the current time. The device determines a current camera pose of the camera in the 3D coordinate system based on the current frame.

The device provides a virtual representation of the current frame based on the current camera pose of the camera and two or more of the keyframes. The virtual representation is based on the color values and the depth values of those keyframes.
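The pipeline summarized above — selecting keyframes with known poses, finding the keyframes nearest the current camera pose, and building a virtual representation from their color and depth values — can be sketched roughly as follows. All names, the distance-based keyframe heuristic, and the simple two-keyframe blend are illustrative assumptions, not Apple's actual implementation:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    color: np.ndarray   # HxWx3 color values per pixel
    depth: np.ndarray   # HxW depth values per pixel
    pose: np.ndarray    # 4x4 camera pose in the shared 3D coordinate system

def select_keyframes(frames, min_translation=0.5):
    """Keep a frame as a keyframe once the camera has moved far enough
    from the previous keyframe (a common, simple heuristic)."""
    keyframes = [frames[0]]
    for f in frames[1:]:
        moved = np.linalg.norm(f.pose[:3, 3] - keyframes[-1].pose[:3, 3])
        if moved >= min_translation:
            keyframes.append(f)
    return keyframes

def nearest_keyframes(keyframes, current_pose, k=2):
    """Pick the k keyframes whose camera positions are closest to the
    current camera position."""
    dists = [np.linalg.norm(kf.pose[:3, 3] - current_pose[:3, 3])
             for kf in keyframes]
    order = np.argsort(dists)[:k]
    return [keyframes[i] for i in order]

def virtual_representation(keyframes, current_pose):
    """Average color and depth from the two nearest keyframes as a
    stand-in for reprojecting them into the current view."""
    a, b = nearest_keyframes(keyframes, current_pose, k=2)
    color = (a.color.astype(float) + b.color.astype(float)) / 2.0
    depth = (a.depth + b.depth) / 2.0
    return color, depth
```

A production system would estimate the current pose by aligning the incoming frame against the map (e.g., with ICP-style registration) and warp keyframe pixels through their poses rather than averaging them directly; the sketch only shows the data flow the patent summary describes.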

Apple's patent FIG. 1 below is a block diagram of a simplified electronic device; FIG. 2 is a block diagram of a simplified method of fusing depth information from a current frame with an existing keyframe. The devices could include an iPhone, iPad, Mac or head-mounted display (HMD) that has a screen for displaying 2D/3D images or for viewing stereoscopic images, including operation as a VR display, an MR display, or an AR display.
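Fusing depth information from a new frame into an existing keyframe, as FIG. 2 outlines, is commonly done with a per-pixel weighted running average. The patent summary does not disclose Apple's exact update rule, so the function below is only an illustrative sketch of that general scheme (all names and the weight cap are assumptions):

```python
import numpy as np

def fuse_depth(keyframe_depth, keyframe_weight, new_depth, max_weight=50.0):
    """Fold a new depth measurement into a keyframe's depth map with a
    per-pixel weighted running average. Pixels with no reading in the
    new frame (depth <= 0) are left untouched."""
    valid = new_depth > 0
    w = keyframe_weight
    fused = keyframe_depth.copy()
    fused[valid] = (w[valid] * keyframe_depth[valid] + new_depth[valid]) \
                   / (w[valid] + 1.0)
    new_w = w.copy()
    # Cap the weight so old measurements can't dominate forever,
    # letting the map adapt to scene changes.
    new_w[valid] = np.minimum(w[valid] + 1.0, max_weight)
    return fused, new_w
```

Averaging many measurements per pixel suppresses sensor noise, which is one reason keyframe-based fusion produces cleaner reconstructions than any single depth frame.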

Apple's patent FIG. 3A above is a diagram showing current RGB-D sensor input, e.g., a standard RGB color image from camera video content and an aligned standard camera depth map; FIG. 3B is a diagram showing a current 3D virtual reconstruction rendered into the same pose as the images depicted in FIG. 3A.

Apple's patent application 20190304170, published Thursday by the U.S. Patent Office, was filed back in Q1 2019. Considering that this is a patent application, the timing of such a product coming to market is unknown at this time.

Apple's inventor on this patent is Maxime Meilland, a computer vision engineer who works out of Apple France. Meilland was the CTO of PIXMAP, a company working to deliver the world's first 8K headset (4K per eye) called Pimax, as presented in their video below.

In June Patently Apple posted a report titled "Apple's 8K Foveated Display Technology could apply to both Mini-Displays for a VR Headset & a Wall Mounted Display+."

RGB-D sensors have been used in Microsoft's Kinect, which in its early stages used PrimeSense technology, now owned by Apple. The Asus Xtion Pro Live uses RGB-D sensors as well. A 2012 paper on the topic can be found here; other books on the topic can be found here.

In 2017, PFTrack 2017 introduced RGB-D depth sensor support for MacBooks, as presented in the video below.

Apple's patent presented today is about the miniaturization of this technology for a future headset.
