Beyond Telepresence: The Arrival of Infopresence
At Oblong, we spend a lot of time thinking about remote collaboration technology. As we develop Mezzanine, we've found that no existing word really captures the experience of using our product. So, we invented a new one: Infopresence.
The closest word in the current lexicon is telepresence, a term coined in the 1980s by cognitive scientist Marvin Minsky, who imagined users donning a suit of sensors to control a mechanical device in another room, or on another planet. Users could leave their current context to engage in another.
It wasn’t until the mid-1990s that a company succeeded in taking telepresence to market. TeleSuite created rooms that allowed executives to attend meetings without having to leave the comfort of their vacation resorts. Of course, they looked nothing like the sensor suit that Minsky imagined, but they did begin to provide remote collaboration.
Twenty years later, telepresence is still alive. Although vendors such as Cisco and Polycom have made incremental improvements, today's systems still look very similar to the patents TeleSuite filed in the mid-1990s. Minsky predicted that twenty years of concentrated effort would make his telepresence vision a reality, but we still haven't achieved it.
Like many visionaries, Minsky underestimated the time and resources required to achieve his vision. But Minsky also underestimated the value of combining contexts. There are many situations where it is useful to bring information with you, from one place to another.
Infopresence goes beyond telepresence to transport not only the user, but also their information. We've found that the commingling of information, context, and location within Mezzanine produces an experience that telepresence simply doesn't describe. And so, Mezzanine doesn't provide telepresence; it provides Infopresence. Infopresence is the next stage in the evolution of remote collaboration that began with telepresence. Mezzanine is our adaptation to the realities of working in today's information-rich world.
Learn more about Mezzanine™ collaborative conferencing solutions and Infopresence™ capabilities here.
Working with Watson
The goal of each Watson Experience Center—located in New York, San Francisco, and Cambridge—is to demystify AI and challenge visitors' expectations through tangible demonstrations of Watson technology. Visitors are guided through a series of narratives and data interfaces, each grounded in IBM's current capabilities in machine learning and AI. These sit alongside a host of Mezzanine rooms where participants collaborate further to build solutions together.
The process for creating each experience begins with dynamic, collaborative research. Subject matter experts take members of the design and engineering teams through real-world scenarios—disaster response, financial crimes investigation, oil and gas management, product research, world news analysis—where we identify and test applicable data sets. From there, we move our ideas quickly to scale.
Access to the immersive pixel canvas for everyone involved is key to the process. Designers must be able to see their ideas outside the confines of 15″ laptops and prescriptive software. Using tools tuned for rapid iteration at scale, our team of designers, data artists, and engineers works side by side to envision and define each experience. The result is more than a polished marketing narrative; it's an active interface that allows the exploration of data with accurate demonstrations of Watson's capabilities—one that customers can see themselves in.
Under the Hood
Underlying the digital canvas is a robust spatial operating environment, g-speak, which allows our team to position real data in a true spatial context. Every data point within the system, and even the UI itself, is defined in real-world coordinates (measured in millimeters, not pixels). Gestures, directional pointing, and proximity to screens help us create interfaces that more closely track user intent and more effectively humanize the UI.
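To make the millimeter-coordinate idea concrete, here is a minimal sketch—not g-speak's actual API, just an illustration under our own hypothetical types—of how a pointing gesture can be resolved against a screen that lives in room coordinates: the pointer is a ray in space, the screen is a plane, and the "cursor" is simply their intersection point, measured in millimeters.

```python
# Illustrative sketch only: these types and functions are hypothetical,
# not part of g-speak. All coordinates are in millimeters, room-relative.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def sub(self, o: "Vec3") -> "Vec3":
        return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)

    def dot(self, o: "Vec3") -> float:
        return self.x * o.x + self.y * o.y + self.z * o.z

    def add_scaled(self, o: "Vec3", t: float) -> "Vec3":
        return Vec3(self.x + t * o.x, self.y + t * o.y, self.z + t * o.z)

@dataclass
class Screen:
    origin: Vec3  # any point on the screen's plane (mm)
    normal: Vec3  # unit normal of the screen's plane

def point_at_screen(hand: Vec3, direction: Vec3, screen: Screen):
    """Resolve a pointing ray (from `hand` along `direction`) against a screen.

    Returns the hit point in room coordinates (mm), or None if the ray is
    parallel to the screen or points away from it.
    """
    denom = direction.dot(screen.normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the screen plane
    t = screen.origin.sub(hand).dot(screen.normal) / denom
    if t < 0:
        return None  # screen is behind the pointer
    return hand.add_scaled(direction, t)

# Example: a hand 3 m from a wall screen, 1.6 m up, pointing straight ahead.
wall = Screen(origin=Vec3(0, 0, 0), normal=Vec3(0, 0, 1))
hit = point_at_screen(Vec3(0, 1600, 3000), Vec3(0, 0, -1), wall)
# hit is Vec3(0.0, 1600.0, 0.0): the cursor lands 1.6 m up the wall.
```

Because everything is expressed in physical units, the same hit point can drive a cursor on whichever display happens to occupy that patch of wall, which is the property that lets a multi-screen room behave as one continuous canvas.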
This award-nominated collaboration with IBM is prototyped and developed at scale at Oblong’s headquarters in Los Angeles as well as IBM’s Immersive AI Lab in Austin. While these spaces are typically invite-only, IBM is increasingly open to sharing the content and the unique design ideas that drive its success with the public. This November, during Austin Design Week, IBM will host a tour of their Watson Immersive AI Lab, including live demonstrations of the work and a Q&A session with leaders from the creative team.
Can't make it to Austin? Contact our Solutions team for a glimpse of our vision of the future at our headquarters in the Arts District in Los Angeles.