By Doug Bannister
Over the last few years, media analysts have been talking a lot about the ‘Internet of Things’ (IoT) and ‘big data.’ One may well wonder what these terms mean and how they will affect the way the sign industry does business. Certainly, they have come to represent an accelerated pace of development in information technology (IT) and an increase in the options available for businesses to deploy. Organizations can choose to stay their current course or adopt these new technologies to differentiate themselves from their competition.
With regard to the sign industry, the lightning-fast pace of change has been clearly seen in today’s visual communications. The medium of digital signage continues to grow as organizations find new applications for it. At the same time, many other technologies have reached the digital signage marketplace, including gesture control, near field communication (NFC), ultra-high-definition (UHD) ‘4K’ video and augmented reality (AR), all vying for attention.
For many of today’s organizations, the goal of engaging customers, employees and/or visitors with valuable information and rich-media experiences is no longer simply an idea tossed around at meetings, but rather an imperative to stay in business. Already, digital signage has helped some of them leapfrog ahead of their competitors.
That said, with so many new technologies promising to improve the viewer experience, it is important to break through the hype and consider how they can actually be used by organizations and what benefits they will deliver.
A lot of people immediately associate ‘interactivity’ with touch screens, but with the rise of IoT, the boundaries of the technology are being stretched much further. Today, interactivity can encompass the connections between sensors or other devices and the screen hardware viewed by individuals or groups.
By way of example, San Francisco International Airport (SFO) installed interactive wayfinding kiosks in its newest Terminal 3 boarding area in early 2014. In addition to being interactive in the traditional sense, i.e. by featuring touch screens that helped users navigate the facility, the kiosks were also integrated with a series of sensors and source systems. This integration allowed users’ wayfinding search results to reflect certain factors—like closed elevators or escalators—that would impact the route a traveller should take to get from point A to point B.
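Conceptually, this kind of sensor-aware wayfinding amounts to removing unavailable connections from the facility map before computing a route. The sketch below illustrates the idea with a toy terminal layout and a breadth-first search; the node names, equipment IDs and map structure are all illustrative assumptions, not SFO's actual data or systems.

```python
from collections import deque

# Toy facility map: each edge is (from, to, equipment_or_None).
# All names here are hypothetical, for illustration only.
EDGES = [
    ("gate_60", "atrium", None),
    ("atrium", "mezzanine", "escalator_3"),
    ("atrium", "mezzanine", "elevator_2"),
    ("mezzanine", "baggage_claim", None),
]

def build_graph(closed_equipment):
    """Adjacency list that skips edges served by out-of-service equipment."""
    graph = {}
    for a, b, equip in EDGES:
        if equip in closed_equipment:
            continue  # a sensor or source system reports this unit closed
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    return graph

def route(start, goal, closed_equipment=()):
    """Breadth-first search for a shortest hop-count route."""
    graph = build_graph(set(closed_equipment))
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no accessible route remains
```

With `escalator_3` reported closed, the search still finds a route via `elevator_2`; if both units are down, it returns no route, and the kiosk could then direct the traveller elsewhere.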
In another non-traditional application of interactive digital signage for wayfinding purposes, a manufacturing facility that used ammonia and had special systems in place to deal with potential leaks began to install screens to provide emergency evacuation notices. In this context, the on-screen content needed to depend on which way the wind was blowing. That is, if there were a leak, the wayfinding information needed to send employees not necessarily to the nearest exit, but to the nearest upwind exit in particular. As such, the digital signage was integrated with wind direction sensors.
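The underlying logic is simple to sketch: compare each exit's bearing against the wind sensor's reading, keep only the exits that lie upwind, then pick the closest one. The coordinates, exit names and tolerance below are illustrative assumptions, not details from the actual facility.

```python
import math

# Hypothetical exit positions (metres from the site centre) -- illustrative only.
EXITS = {
    "north": (0.0, 50.0),
    "south": (0.0, -50.0),
    "east": (50.0, 0.0),
    "west": (-50.0, 0.0),
}

def upwind_exits(wind_from_deg, tolerance_deg=90.0):
    """Exits whose bearing roughly matches the direction the wind blows FROM.

    wind_from_deg uses the meteorological convention: 0 = wind from the
    north, 90 = wind from the east. An upwind exit faces into the wind.
    """
    matches = []
    for name, (x, y) in EXITS.items():
        bearing = math.degrees(math.atan2(x, y)) % 360  # 0 = north
        diff = abs((bearing - wind_from_deg + 180) % 360 - 180)
        if diff <= tolerance_deg / 2:
            matches.append(name)
    return matches

def nearest_upwind_exit(employee_pos, wind_from_deg):
    """Nearest upwind exit; falls back to all exits if none qualify."""
    candidates = upwind_exits(wind_from_deg) or list(EXITS)
    return min(candidates, key=lambda n: math.dist(employee_pos, EXITS[n]))
```

With wind blowing from the north, an employee near the centre of the building would be routed to the north exit, even if the south exit happened to be marginally closer.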
Touch screens are certainly a fundamental tool for interactivity today. They have been around for years and are well-accepted by the general population, thanks to the mainstream popularity of smartphones and tablet computers, like the Apple iPad. Most people—from small children to the elderly—know how to access information from a kiosk via touch.
Multi-touch interactivity is more useful than single-touch when (a) users need to navigate large amounts of data on-screen quickly and/or (b) more than one person is using the screen at the same time. What much of the public does not realize, however, is that each type of device supports its own multi-touch ‘language.’ Apple, for example, has patented the touch patterns for its ‘cut and copy’ and ‘search and replace’ functions. This fragmentation makes it difficult to choose the right system for a digital signage deployment.
For that matter, multi-touch has only specialized applications in the digital signage sector. It may provide a ‘wow’ factor the first time someone sees it in action with a large-format display, but its usefulness is very limited.
Interacting with content by hand gestures represents a powerful, cutting-edge user experience based on expressive communications. While this technology has seen some popularity in the consumer electronics market with devices like Microsoft’s Kinect for Xbox game systems, however, it is not yet practical for digital signage deployments.
For one thing, despite Kinect, the concept is still in its infancy. As with multi-touch, there is no predefined way or universal language by which the public can interact with a screen by gestures.
For another thing, only specialized uses call for gesture interaction. Digital signs do not usually present Xbox-style games to the people walking past them. Most of the time, it makes more sense to simply use a touch screen instead.
A good example of where gesture-based control would be useful is in an operating room (OR), where doctors and nurses who have already scrubbed in could avoid having to touch a screen and thus risk contamination.