About a five-minute walk from the center of London’s shopping hub, Oxford Circus, is the pan-Asian restaurant Inamo. It’s sleek, shiny, and exactly what you’d expect a trendy Soho restaurant to look like, with one exception: Instead of high-cheekboned, model-like staff, your waiter is the interactive screen on your table.
Customers use Inamo’s E-Tables to order their food, watch it being cooked via a kitchen webcam, and order a taxi home when finished. In addition to being an interactive menu and maître d’, the E-Table lets customers choose the design of the tabletop, play games, and find a venue to head to next.
Interactive tables are not just the domain of hotspots such as Inamo. Last year, Barneys New York opened genes@CO-OP Café. Using one of 30 computers housed in a single glass-topped communal table, customers can order their food, browse the Barneys catalogue, and catch up on the latest news. Meanwhile, a McDonald’s franchise in Richardson, Texas, has taken a more health-conscious approach to interactive restaurant information and installed a touchscreen menu so customers can total up the calories in their meal.
Although new to most consumers, interactive tabletops, or “Horizontal Interactive Displays” as they are known, have gained momentum over the past few years, evident in the industry’s very own conference, hosted by the Association for Computing Machinery. The 2012 edition marks the event’s fifth year, and the continued fall in computer prices will only further the acceptance of interactive surfaces outside of labs and sci-fi films. Increasingly, tabletop computers are also being used as educational tools in museums and schools, as informational and tracking machines in medical centers, and for personal use, via platforms such as Intel Labs’ Portico.
But the question remains: how exactly do these tabletops work?
Instead of using a mouse for input, horizontal interactive displays rely on gestures or tracked objects (such as Wii controllers), matching the user’s movements against predefined patterns that the device translates into onscreen commands. Yet not all movements can be pre-scripted by a programmer, since the physical world allows for far more variables than a mouse or handheld controller does. The challenge for programmers and developers is to create devices that accommodate unforeseen actions and interpret them as screen-based commands.
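To see what “matching against predefined patterns” can mean in practice, here is a deliberately minimal gesture classifier in Python: it resamples a touch stroke to a fixed number of points and picks the stored template with the smallest average point-to-point distance. The template names, the two example gestures, and the distance metric are all invented for illustration; real tabletop systems use far more robust recognizers.

```python
import math

# Hypothetical template store: each gesture is a list of (x, y) points
# in a normalized 0..1 coordinate space.
TEMPLATES = {
    "swipe_right": [(x / 15.0, 0.5) for x in range(16)],
    "swipe_down":  [(0.5, y / 15.0) for y in range(16)],
}

def resample(points, n=16):
    """Resample a stroke to n evenly spaced points along its path."""
    dists = [math.dist(points[i - 1], points[i]) for i in range(1, len(points))]
    total = sum(dists)
    if total == 0:
        return [points[0]] * n
    interval = total / (n - 1)
    pts = list(points)
    new_pts = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            # Interpolate a new point exactly one interval along the path.
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new_pts.append(q)
            pts.insert(i, q)  # the new point starts the next segment
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(new_pts) < n:  # pad if floating-point error left us short
        new_pts.append(pts[-1])
    return new_pts[:n]

def classify(stroke):
    """Return the template name whose points are closest on average."""
    sampled = resample(stroke)
    best, best_score = None, float("inf")
    for name, template in TEMPLATES.items():
        score = sum(math.dist(a, b)
                    for a, b in zip(sampled, resample(template))) / len(sampled)
        if score < best_score:
            best, best_score = name, score
    return best
```

A horizontal swipe such as `classify([(0.0, 0.5), (1.0, 0.5)])` matches the `"swipe_right"` template; anything outside the template store simply maps to whichever pattern it most resembles, which is exactly the limitation the paragraph above describes.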
Creating a touchscreen that offers more functionality than simple menu selection requires a powerful computer to process a high volume of camera images. But there are workarounds. Templeman Automation, a Massachusetts-based firm, recently raised over $75,000 on Kickstarter to fund its Playsurface, a multi-touch, open-source computing table that forgoes scripted inputs and uses mirrors, cameras, and projectors. While most touch-based projects need a computer to process webcam video, the Playsurface offloads this work to its proprietary processor, called the “Blob Board.” This means you can use the system with an older or slower machine -- no fancy upgrades required.
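Under the hood, turning camera frames into touch points comes down to blob detection: threshold the image, group bright neighboring pixels, and report each group’s centroid as a touch location. The Python sketch below illustrates that idea on a toy grayscale frame; the function name and the fixed threshold are my own illustrative choices, not Templeman’s actual pipeline.

```python
from collections import deque

def find_blobs(frame, threshold=128):
    """Find bright connected regions (candidate touch points) in a
    grayscale frame, given as a 2-D list of 0-255 intensities.
    Returns a list of (row, col) centroids, one per blob."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Breadth-first flood fill over this blob's pixels.
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids
```

On a real table this loop would run per video frame on hundreds of thousands of pixels, which is precisely the processing the Blob Board takes off the host computer’s hands.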
By pledging $1,550 on Kickstarter, backers will receive the Playsurface multi-touch table in a flat-pack, Ikea-style box, ready for self-assembly. At least until someone invents the voice-activated Allen wrench.