Google releases ARCore Depth API, bringing AR depth sensing to single-camera phones



Back in December of last year, Google showed how a single camera could be used to create depth maps for AR (augmented reality). Today, the ARCore Depth API is finally available for Android, and several third-party apps have already started using it. Other manufacturers typically add AR depth features through extra hardware, such as a ToF (time-of-flight) module or a dual-camera setup, to perceive the scene and build depth maps. With only a single camera, however, it is hard to tell how far objects are from the lens, since a single image alone does not carry that information.
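
For readers who want to try it, below is a minimal sketch of how an app might opt into the Depth API on a supported device. The helper name is an assumption for illustration, but the Session/Config calls are standard ARCore (1.18+) APIs.

```kotlin
// Minimal sketch: enable ARCore's Depth API on an existing Session when the
// device supports it. Assumes the com.google.ar.core dependency (ARCore SDK
// 1.18+) is already on the classpath. Helper name is illustrative.
import com.google.ar.core.Config
import com.google.ar.core.Session

fun enableDepthIfSupported(session: Session): Boolean {
    val config = session.config
    // Single-camera depth is only available on supported devices.
    return if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
        session.configure(config)
        true
    } else {
        // Fall back gracefully; the AR scene still works without depth/occlusion.
        config.depthMode = Config.DepthMode.DISABLED
        session.configure(config)
        false
    }
}
```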

Google AR single camera

Google, on the other hand, uses a motion-based (depth-from-motion) algorithm to build a depth map with good accuracy from a single RGB camera, without extra depth hardware. This keeps virtual objects from floating in space or being placed where no physical surface exists. According to reports, with the release of ARCore version 1.18 (Google Play Services for AR), the Depth API will be available on “hundreds of millions of compatible Android devices”. Google will first demonstrate this feature through the AR animals in Search, after which new partners will demonstrate the technology.
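
As a rough illustration of how an app reads that depth map, the sketch below acquires the per-frame depth image ARCore produces and samples the value at the image centre. The helper name is an assumption, and the bit-masking detail follows Android's documented DEPTH16 layout (range in the low 13 bits) rather than anything shown in the article.

```kotlin
// Per-frame sketch: acquire the latest depth image from the Depth API and
// read the depth (in millimetres) at the image centre.
// Frame.acquireDepthImage() (ARCore 1.18) returns an android.media.Image in
// DEPTH16 format and throws NotYetAvailableException until depth is ready.
import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException
import java.nio.ByteOrder

fun centreDepthMillimetres(frame: Frame): Int? {
    return try {
        frame.acquireDepthImage().use { depthImage ->
            val plane = depthImage.planes[0]
            val shorts = plane.buffer.order(ByteOrder.nativeOrder()).asShortBuffer()
            val x = depthImage.width / 2
            val y = depthImage.height / 2
            // rowStride/pixelStride are in bytes; DEPTH16 samples are 2 bytes.
            val index = y * (plane.rowStride / 2) + x * (plane.pixelStride / 2)
            // Low 13 bits of a DEPTH16 sample carry the range in millimetres.
            shorts.get(index).toInt() and 0x1FFF
        }
    } catch (e: NotYetAvailableException) {
        null // Depth data is not available during the first frames of a session.
    }
}
```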



Samsung and Snapchat are already testing this technology

Snapchat and Samsung, both partners in the development of the technology, have already begun rolling out features built on the Depth API. Snapchat has updated several filters to take advantage of the API, bringing depth-aware AR effects to single-camera phones.


Samsung will use the Depth API in its Quick Measure app on the Galaxy Note 10+ and S20 Ultra, both of which already carry a ToF sensor. Combining that sensor with the new algorithm could further improve quality, shorten scan times, and speed up measurements. The company plans to update the app in the coming months once the feature is ready.
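
A measurement feature of this kind essentially unprojects two depth samples into 3D points and takes the distance between them. The sketch below shows that idea using ARCore's camera intrinsics and a pinhole model; it is an illustrative assumption, not Samsung's Quick Measure implementation, and the pixel coordinates must be expressed at the resolution the intrinsics were reported for.

```kotlin
// Illustrative sketch: estimate the straight-line distance between two
// pixels by unprojecting each pixel's depth sample into a 3D point in the
// camera frame with a pinhole model. Intrinsics can come from
// frame.camera.imageIntrinsics.
import com.google.ar.core.CameraIntrinsics
import kotlin.math.sqrt

fun unproject(u: Float, v: Float, depthMetres: Float, intrinsics: CameraIntrinsics): FloatArray {
    val fx = intrinsics.focalLength[0]
    val fy = intrinsics.focalLength[1]
    val cx = intrinsics.principalPoint[0]
    val cy = intrinsics.principalPoint[1]
    // Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy, z = depth.
    return floatArrayOf((u - cx) * depthMetres / fx, (v - cy) * depthMetres / fy, depthMetres)
}

fun distanceMetres(a: FloatArray, b: FloatArray): Float {
    val dx = a[0] - b[0]
    val dy = a[1] - b[1]
    val dz = a[2] - b[2]
    return sqrt(dx * dx + dy * dy + dz * dz)
}
```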

At the same time, Google's Creative Lab also showed off new gameplay, such as using the Depth API to play dominoes. The Depth Lab app provided by Google demonstrates other uses of the API, from a naval-battle mini-game to realistic physics, surface interaction effects, and environment traversal.
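
Occlusion, the effect behind many of those demos, reduces to comparing a virtual point's distance from the camera with the real-world depth at the same pixel. The snippet below sketches that test under stated assumptions; production renderers do it per fragment in a shader, and the bias value is arbitrary.

```kotlin
// Illustrative depth-based occlusion test: a virtual point is hidden when the
// real surface seen at that pixel is closer to the camera. The 2 cm bias is an
// arbitrary assumption to reduce flicker where the two depths are nearly equal.
fun isVirtualPointOccluded(virtualDepthMetres: Float, realDepthMillimetres: Int): Boolean {
    val realDepthMetres = realDepthMillimetres / 1000f
    return realDepthMetres + 0.02f < virtualDepthMetres
}
```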

