Thought Leadership

Next-Gen UI Comes Into Focus at Wired Live

A coterie of innovators, strategists, and designers gathered at Wired Live 2018 to discuss the changing face of technology.

Global trend intelligence service Stylus was there to report on the findings, with Oblong’s own John Underkoffler signaling the direction.

Among the speakers at Wired Live was Oblong Industries founder and CEO John Underkoffler, whose talk argued that UI MAKES THE WORLD. The premise: user interface is the language and rudder of technology, the thing that makes us either masterful or inept pilots of it. Society is not yet using a UI that's up to the challenges of the 21st century, but it's not too late to reconsider this essential man+machine relationship. According to Stephen Graves, Senior Editor of Consumer Lifestyle & Technology at Stylus, this kind of thinking was a highlight of the event, as were new forms of narrative, innovations in accessibility and agriculture, and the future of work.

With speakers ranging from Charlie Brooker and Annabel Jones, co-creators of Black Mirror, to Steve Clayton, Chief Storyteller at Microsoft, there was a lot to take in at Wired Live 2018. Stylus senior editor Stephen Graves filed his 16-page report on the summit on November 12. Stylus's expert trend analysis equips brands and agencies to make trailblazing decisions and build lucrative futures; by decoding consumer shifts across 20 industries, its team arms members with actionable, sector-spanning insights. These insights are available by subscription only, but we're able to excerpt a few relevant details here at the Ob-log.

In a section of the Stylus report titled Next Gen UI, Stephen noted that the future of work came under the microscope at Wired Live via two pieces of technology, one from Microsoft and one from Oblong Industries. Both the Microsoft Surface Hub 2 (slated for release in 2019, with key upgrades coming in 2020) and Oblong's Mezzanine (already available, with many of the UI features Microsoft has yet to release) are set to transform teamwork with flexible and expansive shared work surfaces, multi-stream content capabilities, connectivity across distance, and more natural user interfaces. Unique to Oblong are the spatial computing capabilities that enable users to interact simultaneously and to move content across multiple screens, surfaces, and devices, with six degrees of freedom in three-dimensional space.

Surface Hub 2 by Microsoft

Mezzanine 200 Series by Oblong Industries
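
To make that spatial claim concrete, here's a minimal sketch of the underlying idea (written in Python with entirely hypothetical names; it is not Oblong's actual API): every piece of content carries a full six-degrees-of-freedom pose in room coordinates, so sending it to another wall is just a pose update rather than a special case.

```python
from dataclasses import dataclass, replace

@dataclass
class Pose:
    """Position in millimeters and orientation in degrees, room-relative."""
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

@dataclass
class Display:
    name: str
    origin: Pose  # where this screen hangs in the room

@dataclass
class ContentItem:
    title: str
    pose: Pose  # where the content lives, independent of any one screen

def move_to(item: ContentItem, display: Display) -> None:
    """Re-home a content item by adopting a copy of the display's pose."""
    item.pose = replace(display.origin)

# Two walls of a hypothetical room, and a slide that hops between them.
north_wall = Display("north", Pose(0, 3000, 1500, 0, 0, 0))
east_wall = Display("east", Pose(3000, 1500, 1500, 0, 0, 90))
slide = ContentItem("Q3 roadmap", replace(north_wall.origin))
move_to(slide, east_wall)
print(slide.pose)  # now positioned and rotated onto the east wall
```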

Stephen highlights that underlying the Mezzanine UI is the basic human gesture of pointing at distant objects. As John Underkoffler explains, "If people know what you're pointing at, then suddenly you've got action at a distance. Anything you can see you can point at, and anything you can see you can control." Taking natural user interface a step further, multi-dimensional UIs of the future could also incorporate contextual voice recognition. John notes, "If you combine pointing and voice, the pointing gives you space, and the voice gives you very precise depiction and delineation in time. It's like a super-rich button press – like having 10,000 buttons." Barriers to productivity are erased when the interaction is more natural and intuitive.
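
To see why "pointing plus voice" is so potent, consider this small sketch (our own illustrative geometry, not g-speak code): a ray cast from the hand resolves where in the room the user means, and a time-stamped utterance supplies the precise what and when.

```python
import numpy as np

def ray_hits_plane(origin, direction, plane_point, plane_normal):
    """Return the 3D point where a pointing ray meets a screen plane,
    or None if the ray is parallel to it or points away from it."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction if t > 0 else None

# A tracked hand near the table, pointing at the far wall (millimeters).
hand = np.array([0.0, 0.0, 1200.0])
aim = np.array([0.0, 1.0, 0.1])
aim = aim / np.linalg.norm(aim)
wall = np.array([0.0, 3000.0, 1500.0])  # a point on the screen plane
normal = np.array([0.0, -1.0, 0.0])     # plane faces back into the room

target = ray_hits_plane(hand, aim, wall, normal)

# The voice channel supplies the precise, time-stamped verb for that spot.
command = {"utterance": "delete that", "timestamp_ms": 1541980800000}
if target is not None:
    print(f"apply '{command['utterance']}' at {np.round(target)} mm")
```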

Of course, the primary concern of UI is seamlessly inferring intention, and as Stephen notes in another section of the report, context is everything. As artificial intelligence (AI) devices become more prevalent, contextual understanding will assume greater importance. Future user interfaces could allow users to "peek under the hood" at their workings to better understand their decision-making processes. John elaborates: "There's this four-decade-old idea around UI that it should assume and only ever depict a state of perfection. But if we built a graphical depiction of the machine's interpretation of what you're doing, that could be phenomenal. You would make fewer mistakes – and the machine would make fewer mistakes about what you're doing." Imagine what this could mean for our ability to get important work done.
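
What might "peeking under the hood" look like in practice? Here is one hypothetical sketch (the recognizer and its confidence numbers are stand-ins, not any shipping system): before acting, the interface depicts its ranked interpretations of a gesture so the user can confirm or correct them.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    action: str
    target: str
    confidence: float  # 0.0 - 1.0, the machine's own certainty

def interpret(gesture: str) -> list[Interpretation]:
    """Stand-in for a real recognizer: return ranked hypotheses."""
    hypotheses = {
        "flick-left": [
            Interpretation("move to left screen", "slide 4", 0.81),
            Interpretation("dismiss", "slide 4", 0.14),
        ],
    }
    return hypotheses.get(gesture, [])

def depict(hypotheses: list[Interpretation]) -> None:
    """Render the machine's reading of the moment instead of hiding it."""
    for h in hypotheses:
        print(f"{h.confidence:>4.0%}  {h.action} -> {h.target}")

readings = interpret("flick-left")
depict(readings)
if readings and readings[0].confidence < 0.9:
    print("Is this what you meant? (confirm / pick another)")
```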

For more on the Wired Live event, visit the site. Also don’t miss these recent Wired articles to help you make videoconferencing suck less, and find more creativity in your future.

Working with Watson

The goal of each Watson Experience Center—located in New York, San Francisco, and Cambridge—is to demystify AI and challenge visitors' expectations through tangible demonstrations of Watson technology. Visitors are guided through a series of narratives and data interfaces, each grounded in IBM's current capabilities in machine learning and AI. These sit alongside a host of Mezzanine rooms where participants collaborate to build solutions together.

The process for creating each experience begins with dynamic, collaborative research. Subject matter experts take members of the design and engineering teams through real-world scenarios—disaster response, financial crimes investigation, oil and gas management, product research, world news analysis—where we identify and test applicable data sets. From there, we move our ideas quickly to scale.

Access to the immersive pixel canvas for everyone involved is key to the process. Designers must be able to see their ideas outside the confines of 15″ laptops and prescriptive software. Using tools tuned for rapid iteration at scale, our team of designers, data artists, and engineers works side by side to envision and define each experience. The result is more than a polished marketing narrative; it's an active interface that allows the exploration of data through accurate demonstrations of Watson's capabilities—one that customers can see themselves in.

Under the Hood

Underlying the digital canvas is a robust spatial operating environment, g‑speak, which allows our team to position real data in a true spatial context. Every data point within the system, and even the UI itself, is defined in real-world coordinates (measured in millimeters, not pixels). Gestures, directional pointing, and proximity to screens help us create interfaces that more accurately infer user intent and more effectively humanize the UI.
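
As a rough illustration of what millimeter-based coordinates buy you (a Python sketch with made-up values, not g‑speak's actual C++ interface): once users and widgets share one room frame, proximity becomes an ordinary geometric query rather than a special input mode.

```python
import math

# Everything lives in one room frame, measured in millimeters.
widgets = {
    "timeline": (0.0, 3000.0, 1400.0),
    "map":      (2500.0, 3000.0, 1400.0),
}
user = (300.0, 1200.0, 1700.0)  # tracked head position

# Adapt each widget's level of detail to how close the user is standing.
for name, pos in widgets.items():
    d = math.dist(user, pos)  # straight-line distance in mm
    detail = "full detail" if d < 2500 else "overview"
    print(f"{name}: {d:.0f} mm away -> render at {detail}")
```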

This award-nominated collaboration with IBM is prototyped and developed at scale at Oblong's headquarters in Los Angeles as well as IBM's Immersive AI Lab in Austin. While these spaces are typically invite-only, IBM is increasingly open to sharing the content and the unique design ideas that drive its success with the public. This November, during Austin Design Week, IBM will host a tour of its Watson Immersive AI Lab, including live demonstrations of the work and a Q&A session with leaders from the creative team.

Can't make it to Austin? Contact our Solutions team for a glimpse of our vision of the future at our headquarters in the Arts District in Los Angeles.

Interested in Learning More?

Start a conversation with our team to deliver the best collaboration experience for your teaming spaces.