Augmented Reality is about designing meaningful experiences that leverage technology to extend human capacity, including bringing sight to those who can’t see.
Imagine your clothing being able to sense the world around you, or your computer responding to your voice and hand commands, or a pair of eyeglasses learning your idiosyncrasies.
Now imagine what this would mean if you were blind.
Far beyond gimmickry and novelty, Augmented Reality (AR) is now using computer vision and sensors to supplement reality for the visually impaired. The great possibilities for our digitally augmented future are beginning to enter the public consciousness in a major way.
“With AR we can provide an electronic overlay of the real world that augments what we would normally see,” explains Thomas Furness, known to many as the grandfather of Virtual Reality.
Many new tools offer easy ways to augment reality and make life a little easier.
Word Lens, which was recently acquired by Google and folded into Google Translate, can translate signs or menus between languages.
Augmented Reality (AR) is still often confused with Virtual Reality (VR). Researcher and computer scientist Ronald Azuma compared AR to VR back in 1997.
Azuma distinguished AR as allowing the user to still see the real world, in contrast to VR, where the user is completely immersed in a “synthetic environment” and cannot see the real world.
He wrote, “AR supplements reality, rather than completely replacing it.”
Azuma’s distinction is very useful and still applicable today, but what happens when you cannot “see” the real world?
Azuma currently leads a team at Intel Labs helping answer the question of how AR can extend human perception and help us engage more fully with the physical environment around us.
RealSense Helps People See
Rajiv Mongia, director of the Intel RealSense Interaction Design Group, is making this a reality. Mongia and his team developed a prototype that has the potential to help blind and vision-impaired people gain a better sense of their surroundings.
The system, using RealSense 3D camera technology and vibrating sensors integrated into clothing, was demonstrated live on stage at the 2015 International Consumer Electronics Show (CES) during Intel CEO Brian Krzanich’s keynote.
Krzanich announced that the source code and design tools will be made publicly available later this year for developers to extend and improve on the platform.
“Anyone can use this to help the visually impaired and build upon it,” he told the crowd.
The prototype currently works by capturing depth information to sense the environment around the user. Feedback is sent to the wearer through haptic technology: vibration motors placed on the body provide tactile cues.
Mongia compares it to the vibration mode on your phone.
“Now the intensity of that touch is proportional to how close that object is to you,” he said.
“So if it’s very close to you, the vibration is stronger. If it’s further away from you, it’s lower.”
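The mapping Mongia describes can be sketched in a few lines. The code below is an illustrative toy, not Intel's implementation; the sensor range and the linear falloff are assumptions chosen for clarity.

```python
def vibration_intensity(distance_m, max_range_m=3.0):
    """Map a depth reading (metres) to a motor intensity in [0, 1].

    Closer objects produce a stronger vibration; anything at or beyond
    max_range_m produces none. A simple linear falloff stands in for
    whatever response curve a real system would tune.
    """
    if distance_m >= max_range_m:
        return 0.0          # out of range: no feedback
    if distance_m <= 0:
        return 1.0          # touching the sensor: full intensity
    # Intensity drops linearly as the object moves away.
    return 1.0 - distance_m / max_range_m
```

For example, an object 1.5 m away in a 3 m range would drive the motor at half intensity, while one at arm's length would vibrate noticeably harder.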
Darryl Adams, a technical project manager at Intel who was diagnosed with retinitis pigmentosa 30 years ago, has been testing the wearable system.
Adams said the technology allows him to make the most of the vision he does have by augmenting his peripheral vision with the sensation of touch.
“For me, there is tremendous value in the ability to recognize when change occurs in my periphery,” Adams said.
“If I am standing still and I feel a vibration, I am instantly able to turn in the general direction to see what has changed. This would typically be somebody approaching me, so in this case I can greet them, or at least acknowledge they are there.
“Without the technology, I typically miss this type of change in my social space so it can often be a bit awkward,” he said.
Mongia said the team is exploring making things in a way that is not necessarily one size fits all. The system was tested on three wearers, each with very different needs and levels of vision — from low vision to fully blind.
“I think it’s going to be something that needs to in a sense either adapt to you or be customizable for each individual to basically meet their particular needs,” Mongia said.
The new AR is all about context-awareness and creating a personalized and responsive experience unique to a user’s environment. The definition of AR will expand to integrate new types of sensors, wearable computing and even artificial intelligence.
As an example of AR-wearable computing, Mongia references the robotic Spider Dress by Anouk Wipprecht, which is powered by Intel’s Edison and made its public debut at CES this year. “The dress understands not only the characteristics and the emotional state of the wearer, but also what was going on around her,” said Wipprecht. Electronic, spider-like arms stretched out of the clothing if the user was feeling uncomfortable, or if people were too close or approaching too quickly.
“That’s the idea: the dress is starting to understand the wearer and understand the environment around it and starting to act accordingly,” she said.
“It’s a beautiful critical design piece that shows us our devices are going to understand us a lot better than they do today.”
OrCam is another device designed for the visually impaired. It uses “machine learning,” a form of Artificial Intelligence, to help users interpret and better interact with their physical surroundings.
The device can read text and recognize things like products, paper currency and traffic lights. OrCam attaches to the side of any pair of glasses. The front houses a camera that continuously scans the user’s field of view; the back houses a bone-conduction speaker that transmits sound to the wearer without blocking the ear.
The camera is connected via a thin cable to a small processing unit that sits in the user’s pocket. With OrCam, the user shows the device what they are interested in by pointing at it.
“Point at a book, the device will read it,” said Yonatan Wexler, head of Research and Development at OrCam.
“Move your finger along a phone bill, and the device will read the lines letting you figure out who it is from and the amount due.”
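One step in a point-to-read pipeline like the one Wexler describes is choosing which detected text region the finger is indicating. The sketch below is a hypothetical illustration of that single step, not OrCam's actual algorithm; a real system would also track the finger over time and run OCR on the chosen region.

```python
def nearest_text_region(fingertip, regions):
    """Pick the detected text region closest to a pointing fingertip.

    fingertip: (x, y) pixel position of the detected fingertip.
    regions:   list of (x, y, w, h, text) boxes from a text detector.
    Returns the region whose centre lies nearest the fingertip.
    """
    fx, fy = fingertip

    def squared_centre_distance(region):
        x, y, w, h, _ = region
        cx, cy = x + w / 2, y + h / 2
        return (cx - fx) ** 2 + (cy - fy) ** 2

    return min(regions, key=squared_centre_distance)
```

Given boxes for each line of a phone bill, sliding the fingertip down the page would select (and read aloud) one line after another.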
Wexler says there is no need to point when identifying people and faces.
“The device will tell you when your friend is approaching you. It takes about ten seconds to teach the device to recognize a person,” he said.
“All it takes is having that person look at you and then stating their name.”
Wexler said that in order to teach the system to read, it is repeatedly shown millions of examples, so the algorithms focus on relevant and reliable patterns.
Be My Eyes is an iPhone app that lets blind people contact a network of sighted volunteers for help via a live video connection. The sighted helper can see and describe aloud, in real time, what the blind person is facing.
Though technically not AR (because it depends on human assistance), the Be My Eyes community is quickly growing.
“Since the 15th of January we have had 113,000 volunteers signing up to help 9,800 blind people all over the world. We are connecting people in 80 different languages,” said the app’s inventor Hans Jørgen Wiberg. Perhaps apps like Be My Eyes point to a new altruistic augmentation of reality, one that builds on the growth of the sharing economy to offer new services that extend our humanity.
Augmented Not Replaced
In this seductive technological era of overlays and virtual immersion, how do we ensure that emerging technologies like AR do not replace our ‘humanness’?
“I think the thing that we always need to keep in mind is that it’s not about replacing what a human does,” said Intel’s Mongia.
“What we should really be focused on is what the human perhaps doesn’t do well.”
“We should not just duplicate what is already available in the rich and wonderful complexity of the real world.”
Furness sees a fantastic near-future of imagination awaiting us, one that allows us to see both the world and our humanity anew.