Apple has been granted a new facial recognition patent by the USPTO that describes identifying faces in live video even when they are farther from the camera.
The patent describes the technique as “enhanced face detection using depth information”: the algorithm selects one or more locations in the image to test for a human face. To achieve this, Apple would use a depth sensor.
A method for face detection includes capturing a depth map and an image of a scene and selecting one or more locations in the image to test for presence of human faces. At each selected location, a respective face detection window is defined, having a size that is scaled according to a depth coordinate of the location that is indicated by the depth map.
A part of the image that is contained within each face detection window is processed to determine whether the face detection window contains a human face. Similar methods may also be applied in identifying other object types.
This new algorithm uses the depth information in addition to existing infrared motion tracking technology, which was developed by PrimeSense and is already in use with hardware like Microsoft’s original Xbox Kinect sensor. Apple acquired PrimeSense back in 2013.
From the perspective of an iPhone, a human face appears larger or smaller depending on its distance from the onboard camera’s objective lens.
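That relationship is what lets the patent’s algorithm size its detection window from a depth reading. Here is a minimal sketch of the idea using a simple pinhole-camera model; the focal length and average face width below are illustrative assumptions, not values from the patent.

```python
# Sketch only (not Apple's implementation): why a face-detection window
# can be scaled directly from a depth-map reading.

FOCAL_LENGTH_PX = 600.0   # assumed camera focal length, in pixels
FACE_WIDTH_M = 0.16       # assumed average human face width, in meters

def window_size_px(depth_m: float) -> float:
    """Expected on-sensor face width (in pixels) for a face depth_m meters away."""
    return FOCAL_LENGTH_PX * FACE_WIDTH_M / depth_m

# A face twice as far away projects to a window half the size:
print(window_size_px(0.5))  # 192.0
print(window_size_px(1.0))  # 96.0
```

Because the window size is known up front for each tested location, the detector only has to examine one window scale per spot instead of scanning every scale everywhere, which is part of what makes depth-assisted detection efficient.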
We aren’t sure when Apple plans to integrate this invention into its products, but it could be as soon as this year, with the iPhone 8.
The latest predictions from analyst Ming-Chi Kuo point to Apple also using a “revolutionary” front-facing 3D camera in this year’s OLED iPhone model (the iPhone 8). Combined with enhanced facial recognition, that hardware could be a major addition for Apple.
What do you think of this new technology? Let us know below!