Newly surfaced patent filings suggest Apple is exploring how Siri could detect specific sounds, and even determine where they come from, using vibration alone. Two recently leaked patent applications describe different ways a device could detect and interact with people. The main idea is to let Siri recognize individual users and their spoken commands without a conventional microphone.
Apple’s filing is called “Self-Mixing Interferometry Sensors Used to Sense Vibration of a Structural or Housing Component Defining an Exterior Surface of a Device”. It relies on Self-Mixing Interferometry (SMI), in which a device detects the signal produced when emitted light is reflected or backscattered from a vibrating surface and mixes back into the emitter. Apple notes in the patent that, as voice recognition has improved and spread, the microphone has become an increasingly important input device for interacting with a product. In a traditional microphone, sound waves are converted into vibrations of the microphone membrane, and capturing that vibration requires a port that lets air reach the membrane inside the device. This port can make the device vulnerable to water damage, blockages, and moisture, and can spoil its appearance.
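The filing does not include any signal-processing code, but the core idea behind SMI can be sketched. In the purely illustrative Swift snippet below, each interference fringe in the photodiode signal is treated as roughly half a wavelength of surface movement; the function name, wavelength value and synthetic data are assumptions made for the example, not anything taken from the patent.

```swift
import Foundation

// Illustrative sketch only: in self-mixing interferometry, light
// backscattered from a vibrating surface re-enters the laser cavity and
// modulates its output power. Each full interference fringe corresponds
// to roughly half a wavelength (λ/2) of surface displacement, so counting
// fringes in the AC-coupled photodiode signal gives a rough estimate of
// how far the surface has moved.

/// Estimates surface displacement (in metres) from an AC-coupled SMI
/// photodiode trace by counting zero crossings (two per fringe).
/// `wavelength` is a hypothetical laser wavelength in metres.
func estimateDisplacement(samples: [Double], wavelength: Double = 940e-9) -> Double {
    guard samples.count > 1 else { return 0 }
    var zeroCrossings = 0
    for i in 1..<samples.count {
        if (samples[i - 1] < 0 && samples[i] >= 0) ||
           (samples[i - 1] >= 0 && samples[i] < 0) {
            zeroCrossings += 1
        }
    }
    let fringes = Double(zeroCrossings) / 2.0      // two crossings per fringe
    return fringes * wavelength / 2.0              // λ/2 of motion per fringe
}

// Example: a synthetic trace containing eight fringes.
let t = stride(from: 0.0, to: 1.0, by: 0.001)
let synthetic = t.map { sin(2.0 * Double.pi * 8.0 * $0) }
print(estimateDisplacement(samples: synthetic))    // ≈ 8 × λ/2
```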
Apple therefore proposes arrays of SMI sensors, which it says offer higher sensitivity. An SMI sensor can pick up vibrations caused by sound or by taps on the surface, and unlike a traditional diaphragm-based microphone it can work in a closed (or sealed) enclosure. The patent also details how an SMI sensor could be used on the back of the Apple Watch. The device could be configured to sense one or more types of parameters, including but not limited to vibration, light, touch, force, heat, motion, relative motion, the user’s biometric data, air quality, proximity, location, and connectivity.
How Will This Apple Patent Work?
The patent describes how devices such as the Apple Watch could use this technology to work out their location and what is nearby. For example, after a specific human voice is identified in the vibration waveform, the display can be transitioned from a low-power or no-power state to a working power state. So you could walk into the living room and ask your watch to turn on the TV. Even without a traditional microphone, the watch would recognize the spoken command and identify you specifically. Knowing that you are authorized to use the TV, and knowing which TV is nearby, it could then turn that TV on.
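To picture that flow, here is a rough Swift sketch. Every type and function in it (VibrationWaveform, identifySpeaker, wakeDisplay and so on) is hypothetical and only stands in for the steps the patent outlines: identify the speaker, wake the display, check authorization, then act on the nearby device.

```swift
import Foundation

// Hypothetical sketch of the flow described above; none of these types
// or functions come from Apple's patent or any real API.

struct VibrationWaveform { let samples: [Double] }

enum Speaker { case knownUser(id: String), unknown }

// Placeholder speaker identification: a real device would match the
// waveform against enrolled voice profiles. Here it always returns a
// stubbed demo user so the example runs end to end.
func identifySpeaker(in waveform: VibrationWaveform) -> Speaker {
    .knownUser(id: "demo-user")
}

func isAuthorized(_ user: String, for device: String) -> Bool { true }

func wakeDisplay() { print("Display: working power state") }
func turnOn(device: String) { print("Turning on \(device)") }

func handle(_ waveform: VibrationWaveform, command: String) {
    switch identifySpeaker(in: waveform) {
    case .knownUser(let id):
        // 1. A specific human voice was detected: wake the display.
        wakeDisplay()
        // 2. Check that this user may control the nearby TV, then act.
        if isAuthorized(id, for: "living-room-tv"), command == "turn on the TV" {
            turnOn(device: "living-room-tv")
        }
    case .unknown:
        break                                  // ignore unrecognized voices
    }
}

// Usage example with dummy waveform data.
handle(VibrationWaveform(samples: [0.0, 0.1, -0.1]), command: "turn on the TV")
```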
Apple also suggests combining different methods to detect user requests, even calculating the probability that a vibration comes from a person. Such a device, whether wearable or stationary like an Apple TV, would determine that the source of a vibration waveform is likely a person based on the information contained in that waveform, including an estimated source direction or distance. It would also track changes in position, such as footsteps indicating that a person is moving toward a predetermined viewing or listening position.
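The filing does not spell out how that probability would be computed, so the short Swift sketch below only illustrates the idea: combine a few cues, such as voice-band energy, estimated distance and footstep-like motion, into a single likelihood score. The cues, weights and logistic formula are all assumptions made for the example.

```swift
import Foundation

// Hedged sketch: combines a few plausible cues into a probability that
// the vibration source is a person. The weights are made up.

struct VibrationCues {
    let matchesVoiceBand: Bool          // energy around typical speech pitch (~85–255 Hz)
    let estimatedDistanceMetres: Double // from direction/distance estimation
    let footstepLikePositionChange: Bool
}

/// Returns a value in 0...1 expressing how likely the source is a person.
func probabilityOfHumanSource(_ cues: VibrationCues) -> Double {
    var score = -1.0
    if cues.matchesVoiceBand { score += 2.5 }
    if cues.footstepLikePositionChange { score += 1.5 }
    score -= 0.2 * cues.estimatedDistanceMetres   // farther sources count less
    return 1.0 / (1.0 + exp(-score))              // logistic squashing
}

let cues = VibrationCues(matchesVoiceBand: true,
                         estimatedDistanceMetres: 3.0,
                         footstepLikePositionChange: true)
print(probabilityOfHumanSource(cues))             // ≈ 0.92
```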