Kinect was introduced as an accessory for the Xbox in 2010 and retired as a product in 2017; despite shipping more than 35 million units, it never gained lasting traction in the home gaming market. Alongside the console accessory, Microsoft introduced Kinect for Windows in 2012, aimed at business-facing products; it found a small but dedicated audience of developers, who used it to solve very complex computer-vision problems in real time. In 2016 Microsoft released the HoloLens, which leverages Kinect technology to create a Mixed Reality experience and has spurred additional research and development into the Kinect product line. This year Microsoft announced Project Kinect for Azure, which takes the technology to its next logical step: targeting the enterprise and taking advantage of the cloud for computing.
Kinect for Azure combines a Time of Flight (ToF) depth sensor with an RGB camera, a 360-degree microphone array, and an accelerometer. The ToF sensor has the following best-in-class technical characteristics:
- Highest pixel count (megapixel resolution, 1024×1024)
- Highest figure of merit (highest modulation frequency and modulation contrast, resulting in low power consumption, with overall system power of 225-950 mW)
- Automatic per-pixel gain selection, enabling a large dynamic range that allows near and far objects to be captured cleanly
- Global shutter, allowing for improved performance in sunlight
- Multiphase depth calculation method that remains accurate even in the presence of chip, laser, and power-supply variation (see the sketch after this list)
- Low peak-current operation even at high frequency, which lowers module cost
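To make the time-of-flight principle concrete, here is a minimal sketch of how a continuous-wave ToF sensor turns a measured phase shift into a distance, and why a high modulation frequency matters: depth precision improves with frequency, while the unambiguous range shrinks, which is why multiple frequencies (the "multiphase" approach above) are combined in practice. The 200 MHz figure below is an illustrative assumption, not a published Kinect specification.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_depth(phase_rad, f_mod_hz):
    """Continuous-wave ToF: distance from the measured phase shift.

    The modulated light travels to the object and back, so
        d = c * phase / (4 * pi * f_mod)
    and the unambiguous range is c / (2 * f_mod).
    """
    return C * phase_rad / (4.0 * np.pi * f_mod_hz)

# Illustrative example only: an assumed 200 MHz modulation frequency.
f_mod = 200e6
print("unambiguous range: %.2f m" % (C / (2 * f_mod)))           # ~0.75 m
print("depth at phase pi/2: %.3f m" % phase_to_depth(np.pi / 2, f_mod))
```

In a real multiphase system, a second, lower modulation frequency with a much longer unambiguous range is used to resolve which "wrap" of the high-frequency phase the object falls in.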
Kinect for Azure's depth sensor technology uses time-of-flight sensing to create point clouds of extreme detail, allowing for environmental understanding even better than what is available in the HoloLens 1.
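As a rough illustration of how a depth frame becomes a point cloud, the sketch below back-projects each depth pixel through a standard pinhole camera model. The intrinsics (fx, fy, cx, cy) and the synthetic 1024×1024 frame are assumptions for illustration only; a real device would supply calibrated intrinsics through its SDK.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 array of XYZ points.

    Standard pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Illustrative call with a synthetic 1024x1024 frame and assumed intrinsics.
depth = np.random.uniform(0.5, 5.0, (1024, 1024))
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=512.0, cy=512.0)
print(cloud.shape)  # up to (1048576, 3) points from a single frame
```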
Satya Nadella describes Project Kinect for Azure as a key advance in the evolution of the intelligent edge: the ability for devices to perceive the people, places, and things around them. The compelling aspect is the integration with Azure AI services, which provides far more computing power than is possible in a small edge device. By applying deep learning to depth images, Kinect can use dramatically smaller AI networks for the same quality of outcome, making AI algorithms cheaper to deploy and the edge more intelligent.
The vision for this product is to enable new workflows in the enterprise. A great example is the manufacturing floor:
Notice that the data is so detailed that the system identifies individual pallets of specific products as well as the floor itself. In this example, the user can see how much of the warehouse's total capacity is in use and how much safety stock remains. This allows for intelligent decision making based on better data, collected from the real world in real time.
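One hedged sketch of how such a capacity figure could be derived from the depth data: voxelize the point cloud and compare the volume of occupied voxels against the known volume of the storage area. The voxel size, warehouse dimensions, and random points below are invented purely for illustration and are not part of any Microsoft workflow.

```python
import numpy as np

def occupied_volume(points, voxel_size=0.25):
    """Estimate occupied volume (m^3) by counting the distinct voxels the
    point cloud touches. Coarse, but enough for a capacity dashboard."""
    voxels = np.unique(np.floor(points / voxel_size).astype(np.int64), axis=0)
    return len(voxels) * voxel_size ** 3

# Illustrative numbers only: a 20 m x 10 m x 6 m storage area.
warehouse_volume = 20 * 10 * 6
points = np.random.uniform([0, 0, 0], [20, 10, 6], size=(100_000, 3))
used = occupied_volume(points)
print("capacity used: %.1f%%" % (100 * used / warehouse_volume))
```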
Kinect for Azure presents nearly endless possibilities for creating new workflows and enabling innovation across virtually any enterprise. What will you do in your business with Kinect for Azure?