
How Tech That Learns Is Delivering a Curated Content Stream Just for You

PSFK Labs iQ Content Partner

Tech that asks not just what but why can begin to understand us across the wide range of situations that make up our lives, delivering content personalized to time and place.

Netflix doesn’t really know what movie you want to watch tonight. At least, not yet. It has a pretty good idea based on what you’ve watched before, but it doesn’t know that tonight is date night, or that you had a bad day and need a little pick-me-up. That’s about to change.

There are plenty of services that offer suggestions based on choices that you, and supposedly similar users, have made in the past, but soon their recommendations will get a whole lot more personal. Sensors in everything from our wearables to the walls around us are gathering more and more data in order to understand not just what we want, but why.

As these systems learn more about our individual lives and lifestyles, they can begin to deliver contextual experiences tailor-made for any situation and help streamline the way we interact with our devices.

To learn more about this trend, PSFK reached out to Lama Nachman, a principal engineer in User Experience Research at Intel Labs.

PSFK: What does it mean for technology to be contextually aware? How will consumers benefit, and how can people be sure that their personal data is in good hands?


Lama: Technology is getting steadily better at observing people over time, thanks to the plethora of sensors embedded in our devices and our constant use of those devices everywhere. As these observations are analyzed and patterns are recognized through extensive processing and machine learning, our devices can start to really understand us and learn our likes and dislikes, what’s important to us, what we struggle with, etc.

Empowered with such knowledge, our devices can start to take more liberty, personalize to our needs and act on our behalf.

Let’s take the example of a personal assistant. A good (human) personal assistant is not just extremely knowledgeable and resourceful, but also understands you and doesn’t need to ask you tons of questions every time you ask them to do something for you.

Devices have much more knowledge to bring to the table, but they struggle with the other part: understanding the human and inferring the context to act appropriately. So how do humans do it? They do it by observing over time and learning from those observations.

The more they observe, the clearer the picture becomes, and their need to get more information from you diminishes. If we can make technology do this, and get closer to this human quality, people will be more likely to start relying on these technologies.

Let’s think of how this applies to specific domains. Take the example of a coach, or physical trainer. Today there are many wearable devices and phones that are able to understand your physical activity and your step count. They have been programmed with data from many people to be able to map sensor data to an activity inference.

That’s the first step. But then what’s next?

They will typically display your activity and step count over time. They might ask you to specify a goal, tell you how far you are from it and maybe alert you if you are falling behind.
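To make that concrete, here’s a minimal sketch in Python of the tracker loop described above. A hand-written threshold rule stands in for the classifier that real devices train on data from many people; the thresholds, activity labels and goal logic are illustrative assumptions only, not any actual device’s implementation.

```python
from statistics import mean

def infer_activity(accel_magnitudes):
    """Classify a window of accelerometer magnitudes (in g) into a coarse activity.

    A real device learns this mapping from labeled data; these cutoffs
    are made-up stand-ins for that trained model.
    """
    avg = mean(accel_magnitudes)
    if avg < 1.1:     # close to gravity alone: the device is at rest
        return "sedentary"
    if avg < 1.8:     # mild periodic motion
        return "walking"
    return "running"  # strong sustained motion

def goal_feedback(step_count, daily_goal):
    """The kind of goal feedback a tracker might display."""
    if step_count >= daily_goal:
        return "Goal reached!"
    return f"{daily_goal - step_count} steps to go."

window = [1.0, 1.4, 1.6, 1.5, 1.3]  # one window of accelerometer samples
print(infer_activity(window))        # -> walking
print(goal_feedback(6200, 10000))    # -> 3800 steps to go.
```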

The hope is that by showing you this info, you’re making better decisions. You might say, “OK, I’ve been sedentary this week, maybe I’ll go and walk around the building at lunch,” or, “I’m going to exercise longer tonight,” or whatever.

To truly transform this experience, these devices need to comprehend more in order to be more effective coaches. How can they insert coaching at the right time and place, where people are more likely to act? Instead of just telling people that they need to go to the gym more often, they should observe them and learn their routine.

If today I’m driving to work, and the system knows I haven’t made it to the gym in a while, it could say, “Why don’t you park a little bit farther and walk?” But not if it knows I’m already running late to a meeting; that would be annoying. Or, if I’m about to take the elevator as usual, it could suggest, “Why don’t you take the stairs?”

It also needs to learn which techniques are more effective; if you keep ignoring these suggestions, it should change what it is suggesting. If our devices start to comprehend all of these different details about a person and their constraints and situations, they can make suggestions that seem and feel logical to them.

The coaching becomes much more effective because it is broken into smaller pieces that better fit into their routine and constraints. Now they are more likely to accomplish whatever goal they took on.
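One way to picture this context-gated coaching is the sketch below. Every field name, rule and threshold here is invented to illustrate the behavior Nachman describes: the nudge is suppressed when you’re running late, and suggestions that keep getting ignored are retired.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Context:
    activity: str              # e.g. "driving", "at_elevator"
    gym_visits_this_week: int
    running_late: bool

@dataclass
class Coach:
    ignored: dict = field(default_factory=dict)  # nudge text -> times ignored
    max_ignores: int = 3                         # retire a nudge after this many

    def suggest(self, ctx: Context) -> Optional[str]:
        if ctx.running_late:
            return None  # a nudge right now would just be annoying
        if ctx.activity == "driving" and ctx.gym_visits_this_week == 0:
            return self._offer("Why don't you park a little farther and walk?")
        if ctx.activity == "at_elevator":
            return self._offer("Why don't you take the stairs?")
        return None

    def _offer(self, nudge: str) -> Optional[str]:
        # Learn which techniques work: stop repeating ignored suggestions.
        if self.ignored.get(nudge, 0) >= self.max_ignores:
            return None
        return nudge

    def record_ignored(self, nudge: str) -> None:
        self.ignored[nudge] = self.ignored.get(nudge, 0) + 1

coach = Coach()
print(coach.suggest(Context("driving", 0, running_late=False)))  # parking nudge
print(coach.suggest(Context("driving", 0, running_late=True)))   # None: it backs off
```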

Finally, this richer context understanding can help people go beyond the “what” to the “why.” If I could see not only how much I exercised, but also what was going on in the cases where I didn’t exercise, I could start to hypothesize about the patterns that are possibly contributing to the problem and put these behaviors in context. This applies to many other areas like sleep, stress, asthma attacks, productivity, etc.


By learning more about us, these systems will be able to offer us dynamic assistance in real time, but it’s important for us to learn about them as well. One size doesn’t fit all, and how else can we be sure that the advice they give is the advice we actually want?

Recommendation systems today tend to aggregate all of their observations of users over time and make recommendations based on this aggregated knowledge. For example, a system might recommend a restaurant to me based on its observations of which restaurants I have been to and which ones I go back to. It can start to make assumptions about what I like and dislike. It might ask me now and then to give feedback or rate a few places so it can give better recommendations.

Maybe it would even take into account the fact that we don’t have a lot of time today, and it needs to be something fast. That’s all good, but what should happen when I’m going with a friend to dinner?

Today we assume that a recommendation will be based on the intersection of interests: I like A and B, she likes B and C, so let’s recommend B. But that’s not how people behave in groups. We’ve seen from previous research that people’s behavior is very much affected by the specific context they are in.

We can’t simply look at your previous behavior and aggregate it all; we need to understand in what context you behave in a certain way and leverage this context when making the recommendation.

Let me give you a simple example: When I go out to dinner with my husband, we have a policy: we never repeat a restaurant. We want to try everything in San Francisco. However, if friends from out of town are coming along, we will choose a restaurant that we know and love to make sure they have a great time. If our son is dining with us, we will go to places he likes (so he trumps). So just the context of who we are with completely changes that behavior, and so it should change the recommendation as well.

So while this behavior might seem unpredictable, as devices observe all of the elements that are contextually relevant, they can start to detect that pattern, because people are quite predictable. This notion extends to many contextual elements, including what the weather is like, where we have been the previous night, how early we need to get up the next day, etc.
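Her dinner example translates naturally into a context-keyed rule, in contrast to the intersection-of-interests approach. The sketch below is deliberately simplified: in a real system these branches would be patterns learned from observation rather than hand-written rules, and all the names are placeholders.

```python
def recommend_restaurant(companions, history, favorites, kid_friendly, all_restaurants):
    """Pick a restaurant based on who is coming to dinner, not aggregate taste alone."""
    if companions == "out_of_town_friends":
        # Guests in town: fall back to a known, loved place.
        return favorites[0]
    if companions == "with_son":
        # The child's preferences trump everything else.
        return kid_friendly[0]
    # Date-night policy: never repeat a restaurant.
    untried = [r for r in all_restaurants if r not in history]
    return untried[0] if untried else favorites[0]

history = ["Place A", "Place B"]
print(recommend_restaurant(
    companions="date_night",
    history=history,
    favorites=["Place B"],
    kid_friendly=["Place C"],
    all_restaurants=["Place A", "Place B", "Place D"],
))  # -> "Place D": somewhere new, honoring the never-repeat policy
```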

Devices need to continue to learn over time, and based on the data they have, they are going to make guesses. Initially, when a device doesn’t understand a lot, it should be very conservative. As it starts to predict users better, it can take more liberty. It’s very important for people to be comfortable with this, to see that progression of learning and to understand why decisions are being made. The minute the system starts to make decisions for you, you need to understand why.

You want to be able to interrogate the device, understand why it thinks that was the right thing to do and give feedback to correct its understanding, which will clearly happen often. This provides an opportunity for people to teach the device and change its behavior.
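That progression, from conservative guessing toward acting on your behalf, with every decision carrying its reasons and user feedback correcting the model, might be pictured like this. The 20-prediction warm-up and 0.8 confidence bar are arbitrary assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Assistant:
    correct: int = 0
    total: int = 0

    @property
    def confidence(self) -> float:
        return self.correct / self.total if self.total else 0.0

    def decide(self, action, reasons):
        # Conservative at first; act autonomously only once predictions prove reliable.
        mode = "act" if self.total >= 20 and self.confidence > 0.8 else "ask"
        return {"action": action, "mode": mode, "why": reasons}

    def feedback(self, was_right: bool):
        # User corrections teach the device and shape how much liberty it takes.
        self.total += 1
        if was_right:
            self.correct += 1

assistant = Assistant()
decision = assistant.decide(
    "reserve a table for two",
    reasons=["it's Friday", "calendar says 'date night'"],
)
print(decision["mode"], "-", "; ".join(decision["why"]))  # -> ask - it's Friday; ...
assistant.feedback(was_right=False)  # the user corrects the system's understanding
```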

This level of transparency helps people trust their devices, makes it more likely that they will keep using them and empowers them to let those devices take action on their behalf. Different people will have different levels of comfort in relying on their devices and relinquishing control, but when devices have this deeper contextual understanding, transparency and ability to change, I believe people will be more likely to do so.

More sensors, more devices and more personalized services mean much, much more data will be passively gathered about us. Offering us contextual experiences based on what we’re actually doing, or who we’re with, brings data collection into our personal spheres. We need to know that there are steps being taken, and what steps we need to take ourselves, to make sure that our information is being used with our best interests in mind.

Clearly people struggle with the issue of privacy when it comes to personal data. We know that people worry about their financial information and other sensitive information on their devices.

Lately we have been hearing a lot of concerns about personal location data and how sharing that data can lead to unintended consequences. As devices start to constantly observe and listen, the issue will become even more problematic.

Imagine a stress coach app on a phone that is constantly listening and trying to infer the user’s stress level from his voice. What does this mean for the user? Does it understand what he’s saying? Who gets to hear what he said? Who gets to know his stress level? These are all very valid concerns that need to be addressed on an individual basis.

If we really want to move the needle in the direction of more contextually aware devices, we have to make sure that people understand what information is being collected and how it is being utilized. I believe the problem today isn’t a general concern that people are sharing too much information, but rather a concern about what is being shared, with whom, and what value people are getting back for sharing this data.

People have legitimate concerns that companies are monetizing their data by selling data and services, and they are not getting a proportionate value back. However, when there is value and transparency, people will weigh the cost/benefit and decide accordingly. Take the traffic/navigation application Waze.

People are not only contributing their location data automatically, but they are also manually reporting all sorts of information like accidents, police, obstacles, etc. They are doing so because they get a lot of value back from this system.

They realize that by contributing this information, they are getting access to a great navigation application that will save them time on the road. So whoever finds the value higher than the privacy cost will end up using this application.

There are many examples of value, including free services and applications, direct monetization, etc. The most important part is the transparency of cost versus benefit and having the tools to control what is being shared and with whom.
