IoT: Smart Connected Planet

How Technology Is Decoding the Secret Language of Nature

by PSFK Labs
iQ Content Partner

Sensors embedded within infrastructure and environment alike are lending a dynamic voice to the world we live in.

Since the dawn of time humans have been working to unravel the mysteries of the natural world. From tree rings to the layer cake of rock formations, the landscapes around us each have their own unique stories to tell. Thing is, in transcribing its own language, the planet speaks back to us very, very slowly.

Now, with low-cost, high-quality sensors integrated into our natural and built environments, we’re finally able to hear the voice of our surroundings in real time. These environmental whispers are updating us with vital information about the condition and performance of the world we live in, and allowing us to react dynamically to ensure the long-term health and sustained function of our cities, our planet and ourselves.

PSFK recently sat in on a conversation between Richard Beckwith, a research psychologist, and David Prendergast, senior researcher and anthropologist, both for Intel Corporation, as they discussed the past, present and future of this revolutionary trend.

Why is it only now becoming possible to deploy this technology? What are some potential applications, and how might this change our lives as global citizens?

Richard: Twelve years ago we put out a wireless sensor network measuring temperature variation across a vineyard, trying to see if we could help a vineyard owner develop higher value grapes for making wine.

Before that a lot of the vineyard owners were just using the temperature measurements at the local airport or maybe they had just one sensor already. Just by taking measurements throughout the season we found that we could in fact measure differences within the vineyard, characterize the quality of the fruit and actually help the winemaker make better pick decisions.

We were able to show that putting in a number of different sensors — we needed to have a fairly dense network of sensors in order to do it — really made a difference. Having better control over things like pH, which determines how well yeasts will perform in the juice, improved the quality of grapes going into the vats that the winemakers were using. They actually improved the process by allowing the winemaker to control the compositional chemistry of the grapes.

I said it was 12 years ago, but what I didn’t mention was that it was enormously expensive to do 12 years ago. Now the price is at least an order of magnitude lower, and going lower still. At the time we all said, “This isn’t going to work for a number of years.” It was just too expensive to do back then. It’s not anymore.

Right now — to bring things up to date — we’re working with the Department of Environmental Quality, as well as with university researchers, here in Portland. The sensors they’re used to are devices that cost $30,000.

The ones we’re using come from groups like Air Quality Egg and Smart Citizen, which specialize in inexpensive sensor boards that cost only a couple hundred dollars.

So one of the things that we’re all doing now is trying to figure out how to characterize the different types of sensors. What exactly are their properties when they’re in an uncontrolled environment? Knowing this will give us a sense of how to build an informative visualization for people to see what’s actually going on in the environment.
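One simple way to begin that characterization is to co-locate a cheap sensor with a reference instrument and compare their readings. Here is a minimal sketch of that idea; the function name, the PM2.5 readings and the idea of a single linear calibration are all illustrative assumptions, not Intel’s actual method (real studies also account for humidity, temperature and sensor drift):

```python
# Hypothetical sketch: compare a low-cost sensor against a co-located
# reference instrument, then fit a simple linear calibration.
from statistics import mean

def characterize(cheap, reference):
    """Return bias, RMSE, and a least-squares linear calibration
    (slope, intercept) mapping cheap-sensor readings onto the reference."""
    assert len(cheap) == len(reference)
    n = len(cheap)
    bias = mean(c - r for c, r in zip(cheap, reference))
    rmse = (sum((c - r) ** 2 for c, r in zip(cheap, reference)) / n) ** 0.5
    # Ordinary least squares: reference ≈ slope * cheap + intercept
    mx, my = mean(cheap), mean(reference)
    slope = (sum((x - mx) * (y - my) for x, y in zip(cheap, reference))
             / sum((x - mx) ** 2 for x in cheap))
    intercept = my - slope * mx
    return bias, rmse, slope, intercept

# Made-up hourly PM2.5 readings (µg/m³) from the two devices:
cheap_pm25 = [14.0, 18.5, 22.1, 16.3, 20.8]
reference_pm25 = [12.2, 16.0, 19.5, 14.1, 18.3]
bias, rmse, slope, intercept = characterize(cheap_pm25, reference_pm25)
```

A consistently positive bias with a stable slope would suggest the cheap sensor is noisy but correctable; erratic residuals would suggest it can’t answer the question at all.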

One of the problems is that scientists have not been exposed to these kinds of data. We’re asking them to accept data that comes from a lesser device than what they’re used to, and whose deployment is not controlled in the way they typically control things.

For example, if we’re talking about putting a sensor up at a school, we have to realize that they wouldn’t put their $30,000 sensor there. It would be 16 feet up in the air with a particular flow pattern around it. So we, as emerging technology researchers, look for ways they can use these unfamiliar data to answer the questions that people are asking.

Once you’ve figured that out, and you know how to do it, suddenly the environment won’t just whisper — but shout — and, all of a sudden, people will start to really get a handle on what’s going on around them in a way that just wasn’t possible before.

David: That’s exactly right, absolutely, and as we start thinking about these things in London we also need to consider that different data has different audiences. Translating the data and representing it as information for those audiences turns out to be incredibly important to cities in general. Citizens, councils — we have groups like the Royal Parks asking us — are all beginning to define their own needs.

The readings are particularly necessary within the context of the parks. Somewhere like Hyde Park sees 10 million visitors each year. The five largest urban parks in the U.K. pull in as many people as the three largest theme parks.

They’re asking for things like aqua sensors and soil sensors, to know how much they should be treating the lands. They want to understand the effects of ambient light levels, or of acoustics and noise, on wildlife in different parts of the park, or the impact of events like Winter Wonderland, or some of the Rolling Stones concerts, that are held there.

All these visitors mean that these parks are quite expensive to run, but they’re huge income generators as well. They need a balance. They have to mitigate all the damage that’s generated by these huge numbers of people.

Before, they had to send maybe ten people out once a week or so to collect readings for the types of data they need. Now they’re getting readings several times a day, or every hour, or even every few minutes according to what they need, and of course it’s much cheaper.

Richard: If you spend $30,000 for each one you’re not going to be putting a large number of sensor systems out.

David: Exactly.

Now that cost is no longer prohibitive, and considering the potential goldmine that big data represents for governments and corporations, it follows that we will be seeing these networks pop up more and more.

While it means that we as citizens can expect to benefit from improvements to infrastructure and efficiency, it also means that much more of our own personal data will be available as well. With that in mind, what dangers exist, and are there steps being taken to protect us?

Richard: One of the things that we’ve been doing, similar to what David’s group is doing though with a narrower focus, is looking at air quality. Specifically, we’re looking at air quality monitoring as a tool for people with asthma.

Asthmatics often want to be able to determine what their triggers are, and bad air quality can definitely be one. Same with pollen, and so we’re tracking tree locations, pollination periods for trees, how far pollen travels and wind direction, in addition to pollutants. This way we can show people with asthma the locations to avoid.
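The idea of combining personal triggers with local readings could be sketched roughly as follows. Everything here is hypothetical — the field names, the PM2.5 threshold, and the locations are invented for illustration, and this is not the actual Intel system:

```python
# Hypothetical sketch: flag locations an asthmatic might want to avoid,
# given their personal triggers and current local readings.

def risky(location, triggers, pm25_limit=35.0):
    """Return True if a location's readings exceed an (assumed)
    air-quality threshold or match any of the user's pollen triggers."""
    if location["pm25"] > pm25_limit:
        return True
    return any(pollen in triggers for pollen in location["pollen"])

# Made-up locations with current PM2.5 (µg/m³) and active pollen types:
spots = [
    {"name": "Maple Grove", "pm25": 6.1,  "pollen": {"maple"}},
    {"name": "Riverfront",  "pm25": 40.2, "pollen": set()},
    {"name": "Oak Plaza",   "pm25": 5.0,  "pollen": {"oak"}},
]
avoid = [s["name"] for s in spots if risky(s, triggers={"maple"})]
```

A real service would also fold in the pollination-period and wind-direction data Richard mentions, to predict where pollen will travel rather than just where it originates.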

The best way for asthmatics to avoid potential triggers is to share information with one another, and so there are obvious privacy issues. You might think, well, you’re just sharing information about asthma, but if one of your triggers is maple pollen, and you’re applying for a job at Maple Grove Software, you probably don’t want the HR people to know that you’ll have a potential problem with all the trees right outside their facility.

Or another example — one I’m not even sure how to deal with — would be, say, if you’re trying to sell your house, but it happens that the air quality in your area is worse than it is around an equivalent property, do you want that data up on Zillow?

I think that there are all sorts of issues. How do you share, who do you share with and how does the sharing work? These need to be addressed.

In addition, the computational power that’s required to actually deliver answers to what people think of as very simple questions — like which street is safest to walk down — effectively requires supercomputing. So if a lot of people end up asking the same question at any given time, even these ‘very simple questions’ add up to a very significant compute load.

While it’s important to remain aware of the potential dangers inherent in any new technology, it’s equally if not more important to proactively steer its development toward the common good. We can only do that if there’s transparency, and an understanding of what these innovations can be used for.

Surprisingly, while cheap sensors are new, there’ve been plenty of data-gathering programs around for quite a while. Maybe we didn’t hear about them because nobody knew what to do with all that information. Richard and David brought us up to speed on the current state of things, and why it took so long to get here:

Richard: Tasmania is one of the test sites for the National Broadband Network in Australia so they have great connectivity throughout the island. So we just started working with a group there that’s looking to, as they say, “instrument the entire Tasmanian economy”.

They’re doing agriculture and logistics, right now, and they want to work in tourism, and a few other domains as well. Essentially, what they’re trying to do is to bring in data from each of the different major economic sectors and try to look for ways in which they each could be improved.

For example, they’re doing a lot of vineyard work so we’re helping with that because we’ve got a lot of experience there.

We’re trying to figure out how good the temperature monitoring is across the farms that they’re working with. Once we determine that, we could potentially help them monetize that data and sell it to people who would be really interested in a fine‑grained analysis of weather that’s going to influence crop yields, like the people doing commodities trading for instance.

Also, we’re collecting data from the Portland Parks and Rec. department. We take all the street‑tree data from them so we know what trees are where. We have roadway data from the Department of Transportation.

We have background air quality data from the Department of Environmental Quality. There are all these various data sets from all over the country that are publicly available, weather data that we’re pulling off of Weather Bug, and we’re fusing all of those, combined with our own data, into one service.

It winds up that, while we thought we were just going to do our sensors, once we started talking to people about what services they wanted we realized that we needed to bring in all these different data sets in order to deliver them.
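The kind of fusion Richard describes — tree inventories, roadway data, air quality, weather — ultimately comes down to joining records from differently-formatted sources on a shared key such as location. A minimal sketch of that join, with entirely invented field names and data (this is not the actual Intel service):

```python
# Hypothetical sketch: fuse records from several sources into one
# location-indexed view by bucketing coordinates into a coarse grid.
from collections import defaultdict

def grid_cell(lat, lon, size=0.01):
    """Bucket a coordinate into a grid cell so records from
    differently-formatted datasets can be joined by location."""
    return (round(lat / size), round(lon / size))

def fuse(*datasets):
    """Merge any number of datasets (lists of dicts with lat/lon)
    into one dict keyed by grid cell."""
    fused = defaultdict(dict)
    for records in datasets:
        for rec in records:
            cell = grid_cell(rec["lat"], rec["lon"])
            fused[cell].update({k: v for k, v in rec.items()
                                if k not in ("lat", "lon")})
    return dict(fused)

# Made-up records from a street-tree inventory and an air-quality feed:
trees = [{"lat": 45.512, "lon": -122.658, "species": "bigleaf maple"}]
air   = [{"lat": 45.512, "lon": -122.658, "pm25": 8.4}]
view = fuse(trees, air)
```

The grid-cell trick is the crude version of the “aggregation points” Richard mentions next: once everything shares a key, each new dataset enriches the same view instead of sitting in its own silo.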

I think that, as devices within the ecosystem start to communicate with one another more, it’s obvious that by drawing some aggregation points we can actually start to deliver the services that people expect.

David: True. It’s amazing when you start talking with people at the center of councils with different responsibilities — somebody who is head of water, as opposed to transportation, for instance — and begin to map all the different kinds of information that they receive from different stakeholders.

These data come in many different forms, and many different formats, and a lot of them get ignored because the influx is absolutely overwhelming.

For them, part of the exercise in adopting the Internet of Things was to find ways that the existing streams of information they’re receiving could be better streamlined and fit into their workflows. Where does the data go? How does it intersect with the practices and the resources that our cities have?

Likewise, a lot of cities are changing quite rapidly in how they’re organized. Traditionally, you may have many separate departments — environmental, education, traffic — all gathering data that is then not being shared across departments.

This data isn’t being made visible — sometimes even within the same department — and I think a lot of the issues we face have to do with that. They’re just as social as they are technical.

Richard: Exactly. In Portland we wanted a map of all the trees so we’d know which trees were where. When we asked Portland Parks and Rec. if we could get the data they were thrilled. They were just thrilled that we asked.

They’re like, “Yeah, you can totally have this.” We said, “How do we get it?” and they said, “I don’t really know. No one’s asked for it before.” It actually took a while after that to figure out how, because they were gathering all this data, but nobody had ever thought to use it for anything.

People just don’t know what data are available, and they don’t know how to use them. Last year and this year, our group sponsored something called the National Day of Civic Hacking, where about 100 hackathons around the country took specific government-collected data in the U.S., developed services around it and made those services relevant for the community.

As David was saying, for people who work for cities or counties or states or the feds in the States, developing services around the data isn’t their job. It’s never been a job that somebody had before, but it’s one that we’ve got to shepherd if we want to see it happen in the future.
