Believing anything is possible led Lama Nachman to become a groundbreaking engineer in predictive computing – and yes, she is on physicist Stephen Hawking’s speed dial.
World-renowned physicist Stephen Hawking had fans laughing when he auditioned people to find a replacement for his trademark computer-generated voice. The spoof for Comic Relief’s Red Nose Day last March revealed for many just how iconic Hawking’s voice is and how its sound is embedded in the way we think about the universe.
“Stephen’s voice is IP protected,” said Lama Nachman, a principal engineer at Intel leading the team that helps improve Hawking’s computer interface. “He really likes the way that it sounds.”
Chronicled in the film The Theory of Everything, Hawking was struck by a rare early-onset, slow-progressing form of amyotrophic lateral sclerosis (ALS) that gradually degraded his speech and motor skills. His computer system is a critical communication tool — allowing him to speak through his speech synthesizer, type up his breakthrough ideas and even search the web.
Since meeting Intel cofounder Gordon Moore in 1997, Hawking has relied on Intel engineers to fine-tune his customized PCs. When Hawking needed a new system in 2011, Nachman eagerly stepped up to the task.
“There are projects that we do because we love the research and we’re inspired by the research,” said Nachman, an Intel Fellow and director of Intel’s Anticipatory Computing Lab. “Then there are projects that we do because they feed our souls, and that’s one of those types of projects.”
Beyond her work with Hawking, she feeds her soul by working on a range of projects exploring how computing devices can sense their surroundings, learn and adapt to better serve people’s personal needs. She speaks fast without wasting or mincing words, and her certainty is punctuated only occasionally by brilliant bursts of humility.
Nachman is a tenacious, fast-thinking computer engineer with advanced degrees from the University of Wisconsin-Madison. She developed a deep understanding of hardware, software, networking and algorithms, and has spent her career pushing the boundaries of what computing can do to enhance human experiences. Her team is currently working on creating context-aware systems that can understand people through sensing and apply this knowledge to assist them in daily life.
“Imagine a computer that can understand a student’s emotions and level of engagement, and tailor content to better engage the student,” she said. “Or imagine a smart manufacturing facility that can watch over technicians’ activities and help them perform their tasks correctly, or a smart home that watches over kids and elderly and engages them accordingly.”
Instrumental in the development of the first internet-connected digital picture frame, early smart TVs and the rapid progression of wireless sensor networks, Nachman has spent the past decade shaping the future of predictive computing, where devices powered by artificial intelligence (AI) can learn and act on their own to assist their owners, businesses or societies.
“We are moving into a world of data-driven, data-rich computing,” said Genevieve Bell, an anthropologist and technologist as well as a professor at Australian National University. Nachman joined Intel in 2003, and seven years later Bell, a lead ethnographer at Intel Labs, asked Nachman to join her team to conduct user experience research.
Bell sees Nachman’s work driving a departure from the era of connected, visible “and even fetishized devices” to a world where computing is ubiquitous, operating in the background, helping people live better lives.
“This is about computing that knows the world around, interprets what it is encountering and makes decisions about how to navigate or negotiate that world,” Bell said. “Nachman’s work has been at the cutting edge of this shift for some time.”
Giving Voice to a Physics Legend
Nachman said she was both excited and terrified to meet Hawking for the first time. Given the opportunity to work with someone of his stature and the spotlight that entailed, she said failure was not an option.
Initially, Nachman considered a complete redesign of Hawking’s system using new technologies such as eye tracking or electroencephalography (EEG) controls. However, Hawking, who had used the same interface for years, preferred familiarity over something revolutionary.
“We realized that he was risk averse and that we should be looking for solutions that were more incremental,” said Nachman, who spent significant time with Hawking to better understand how he used his old system, as well as to get a sense of his actual needs.
“Often, as technologists, we tend to throw technology at a problem, rather than really try to understand where technology fits with that problem,” she said. “We need to understand how humans operate in these spaces and what it is that they really need help with.”
Over the course of the next few years, through trial and error – with Hawking test-driving the tech at each step – Nachman and her team created nearly 60 iterations of the new system. They recoded the software from the ground up, adding enhanced features, such as word prediction from SwiftKey, to enable Hawking to communicate and conduct work more efficiently.
Though Nachman continues to tweak the system periodically to provide Hawking with more capabilities as his condition changes, the new design has already doubled his rate of speech and improved his ability to perform computing tasks tenfold.
Assistive Technology for All
Recognizing the potential benefit to thousands of people suffering from motor neuron disease and quadriplegia, Nachman made the call to share the platform with the international research community via open source.
“One of the nice things about the way we’ve designed it is that we decoupled sensors from the rest of the system,” said Nachman. “If a person can move anything, any muscle, we can find a way to translate that movement into the equivalent of a push button to control the system.”
While Hawking moves his cheek to trigger an infrared sensor mounted on his glasses, and thus controls the cursor on his computer screen, others can use the trigger that makes the most sense for their condition, be it camera, EEG or other input.
“One person lost the ability to move every muscle except one finger,” said Nachman, “so we created a ring that had accelerometers in it.”
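The decoupling Nachman describes can be pictured as a thin event layer: any binary input — a cheek-driven infrared sensor, an EEG threshold, an accelerometer ring — fires the same “button press” event, so the rest of the interface never needs to know which muscle or sensor produced it. The sketch below is a hypothetical illustration of that idea, not code from the actual open-source platform; all class and method names are invented for clarity.

```python
from typing import Callable


class TriggerSource:
    """Any binary input device: a cheek-triggered infrared sensor,
    a camera-based blink detector, an EEG threshold, an accelerometer
    ring, etc. (Hypothetical interface, for illustration only.)"""

    def __init__(self, name: str):
        self.name = name
        self._listeners: list[Callable[[], None]] = []

    def on_trigger(self, callback: Callable[[], None]) -> None:
        # Register a handler to run whenever this sensor fires.
        self._listeners.append(callback)

    def fire(self) -> None:
        # Called by the sensor driver when movement is detected.
        for callback in self._listeners:
            callback()


class Interface:
    """The rest of the system sees only a generic 'button press',
    regardless of which sensor generated it."""

    def __init__(self):
        self.presses = 0

    def button_pressed(self) -> None:
        self.presses += 1


# Swapping sensors requires no change to the interface itself:
ui = Interface()
cheek_sensor = TriggerSource("infrared-cheek")
cheek_sensor.on_trigger(ui.button_pressed)
cheek_sensor.fire()  # a cheek twitch registers as one button press
```

Because only the `TriggerSource` side changes from user to user, adapting the system to a new condition means writing a new sensor driver, not redesigning the interface.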
Hundreds of people have already benefited from the open source material, as evidenced by the multitude of emails Nachman receives every day with questions about customizing the technology for the disabled or notes of thanks from loved ones grateful to be able to communicate with a family member once again.
Teaching Tech to Understand Emotions
The desire to solve problems and help others drives an array of tech projects in Nachman’s lab, addressing such issues as health and wellness, automation and improved efficiencies. But Nachman believes one endeavor in particular will have the most impact on assistive computing as a whole: teaching tech to understand human emotion.
Imagine a robot or phone assistant that not only responds to commands, but also anticipates needs and desires, and interacts as another human might. We’re already moving in that direction, she said, as technology can sense contextual data and present an alert to assist its owner. As AI evolves, devices will better analyze data and learn, getting smarter and improving how they interact.
Think C-3PO from Star Wars or Samantha from Her.
With Nachman pushing the boundaries of what affective computing can do, this future is not so distant.
“We can detect people’s emotions in very specific instances in controlled environments,” said Nachman. “What we’re trying to do now is to detect people’s emotions in the wild, in uncontrolled settings.”
For AI technology to truly gauge emotional context, Nachman said, it must accurately assess not only concrete inputs such as location and activity, but also variables such as personal physiology, voice, facial expression and the words a person utters.
“Emotion is affected by so many things and is manifested in people in different ways,” Nachman explained.
In the end, the goal is to use technology to make positive differences in the human experience.
“Ultimately, humans are good at certain things and machines are good at other things,” she said. “The way we complement each other will be the best way to get the most out of the combination of human and machine.”
Trailblazer for Women in Tech
Nachman’s was not a typical career path. A Palestinian girl raised in Kuwait, Nachman got an early start beating the odds.
“It’s not necessarily a culture that’s very supportive of women, especially not women in STEM fields,” she said.
But her father shielded her from discrimination.
“At a very early age, my dad really made me believe that I could accomplish anything that I wanted to accomplish if I just worked hard at it and put my energy into it,” she said.
When she graduated from high school, ranked 12th in the country, her father encouraged her to follow her dreams, which – for Nachman – meant heading to the U.S. for college.
“In so many ways, being that naive about what is possible made me think that everything was possible,” she said.
It wasn’t until later that she began to consider why there were so few women in tech, even in the U.S.
“A lot of people jump to the conclusion that girls don’t like science and math, but that is totally untrue,” Nachman said. “I think a big part of the problem is that women don’t see themselves in these types of careers.”
When she talks to young women about the work that she does, Nachman said they are often surprised that such work exists and that women are already part of it.
“We need to do a better job of getting across the message that not all engineers look and act the same way,” she said.
She has strong advice for young women.
“Look for what inspires you,” she said. “Look for things that you would love and that will help you change the world. A lot of times, passions can lead to changing people’s lives for the better.”