Much Ado About Touch
Touch is already a part of the Mezzanine user interface on tablets and smart phones, and it’s coming to display screens this fall.
We recognize there’s a love affair with touch. We all use it on the smart phones in our pockets. Indeed, it is a part of the user interface for Mezzanine, our flagship product for visual collaboration in the enterprise. On a smart phone or tablet, a collaborator in a Mezzanine session can control the shared digital workspace with familiar touch commands to move, drag, drop, pinch, zoom, slide, and delete. On an iPad, it’s simple to pull up a color-selector ‘paint box’ and annotate graphics and video streams with a fingertip. A collaborator can immediately contribute a marked-up graphic to a session’s workflow from anywhere, in real time, even while seated at a conference table.
This powerful touch capability is often overshadowed by the wand, Mezzanine’s remarkable gestural input device, which is endlessly gratifying for commanding content across display screens on every wall surface from wherever you are in the room. Adding to this, we have heard time and again: “People just don’t want to get up in the middle of a meeting and go to a wall to do everything on a display screen. Standing meetings are not really all that common.” Plus, as soon as you stand in front of the display, you block sight lines to whatever is on the screens.
So you can imagine our surprise at the immediate and enthusiastic reception we received at InfoComm for the beta demonstration of touch annotation and white-boarding directly on the display screens in the Mezzanine 200 Series. Our research had indicated that while touch screen white-boarding is commonly promoted in the marketplace, it isn’t so commonly utilized in the workplace. So we decided to take a deeper dive into this, as we continue to study human behavior in architectural space for the development of our spatial computing platform, and we discovered two things we think are worth sharing.
- Yes, people will stand up in a meeting…sometimes. Collaborators will do this when they need to command attention to make a point. They don’t want to do it all the time for every little navigation or illustration, but there is a certain and valuable ceremony in breaking the group’s flow of table talk by standing up and going over to the wall to illustrate a point. A part of a meeting might even involve multiple people at a wall, but it isn’t the whole meeting.
- Artifacts are important. After said group interaction – commanding attention, annotating content, drawing up a thing – there is an outcome: a new visual asset is created. This asset supports the point that was just made, and very often it will be useful to capture that asset and access it again later.
Taking both of these findings into consideration, we are optimizing touch annotation for the Mezzanine 200 Series, our scaled solution for the most intensive team collaboration sessions. Teams utilizing Mezzanine 200 to connect across office locations will enjoy direct access to make a point with touch annotation right on the display screen, and an easy means of capturing the result directly into the session portfolio. To see this solution for yourself, so that you can evaluate its deployment across your enterprise, schedule a demo with us or contact one of our integration partners.
Working with Watson
The goal of each Watson Experience Center—located in New York, San Francisco, and Cambridge—is to demystify AI and challenge visitors’ expectations through more tangible demonstrations of Watson technology. Visitors are guided through a series of narratives and data interfaces, each grounded in IBM’s current capabilities in machine learning and AI. These sit alongside a host of Mezzanine rooms where participants further collaborate to build solutions together.
The process for creating each experience begins with dynamic, collaborative research. Subject matter experts take members of the design and engineering teams through real-world scenarios—disaster response, financial crimes investigation, oil and gas management, product research, world news analysis—where we identify and test applicable data sets. From there, we move our ideas quickly to scale.
Access to the immersive pixel canvas for everyone involved is key to the process. Designers must be able to see their ideas outside the confines of 15″ laptops and prescriptive software. Utilizing tools tuned for rapid iteration at scale, our capable team of designers, data artists, and engineers works side by side to envision and define each experience. The result is more than a polished marketing narrative; it’s an active interface that allows the exploration of data with accurate demonstrations of Watson’s capabilities—one that customers can see themselves in.
Under the Hood
Underlying the digital canvas is a robust spatial operating environment, g‑speak, which allows our team to position real data in a true spatial context. Every data point within the system, and even the UI itself, is defined in real world coordinates (measured in millimeters, not pixels). Gestures, directional pointing, and proximity to screens help us create interfaces that more closely understand user intent and more effectively humanize the UI.
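To make the millimetres-not-pixels idea concrete, here is a minimal sketch in Python. It is purely illustrative and not the real g‑speak API: the class and field names (`Vec3`, `Display`, `to_pixels`) and the display dimensions are assumptions. The point it demonstrates is that UI elements live in room-space coordinates measured in millimetres, and are only projected into a particular display’s pixel grid at render time.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    """A point in room space, measured in millimetres (hypothetical type)."""
    x: float
    y: float
    z: float

@dataclass
class Display:
    """A physical screen mounted somewhere in the room (hypothetical type)."""
    origin: Vec3      # room position of the screen's lower-left corner (mm)
    width_mm: float   # physical width of the visible panel
    height_mm: float  # physical height of the visible panel
    width_px: int     # horizontal resolution
    height_px: int    # vertical resolution

    def to_pixels(self, p: Vec3) -> tuple[int, int]:
        """Project a room-space point (assumed to lie on the screen plane)
        into this display's pixel coordinates."""
        px = (p.x - self.origin.x) / self.width_mm * self.width_px
        py = (p.y - self.origin.y) / self.height_mm * self.height_px
        return round(px), round(py)

# A 1080p panel roughly 1.2 m wide, its corner mounted 2 m along the wall
# and 0.9 m above the floor (all figures invented for the example):
wall = Display(Vec3(2000.0, 900.0, 0.0), 1195.0, 672.0, 1920, 1080)

# A point at the physical centre of the panel lands at the pixel centre:
print(wall.to_pixels(Vec3(2597.5, 1236.0, 0.0)))  # → (960, 540)
```

Because the same room-space point can be projected onto any display that happens to cover it, content defined this way moves naturally across multiple screens, which is the property the paragraph above describes.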
This award-nominated collaboration with IBM is prototyped and developed at scale at Oblong’s headquarters in Los Angeles as well as IBM’s Immersive AI Lab in Austin. While these spaces are typically invite-only, IBM is increasingly open to sharing the content and the unique design ideas that drive its success with the public. This November, during Austin Design Week, IBM will host a tour of their Watson Immersive AI Lab, including live demonstrations of the work and a Q&A session with leaders from the creative team.
Can't make it to Austin? Contact our Solutions team for a glimpse of our vision of the future at our headquarters in the Arts District in Los Angeles.