Writer Tom Ward filed a Long Read with the UK edition of WIRED Magazine. The topic? Our CEO, John Underkoffler, and the tools we build at Oblong Industries.
From rural Philadelphia to the MIT Media Lab to Hollywood to Oblong HQ, the career of John Underkoffler is fascinating. It isn’t often that your ideas for a better human-machine interface enjoy a worldwide test audience in the hundreds of millions. Yet this is exactly the trajectory of our CEO. He’s parlayed that history, and an abiding interest in transforming the computational world around us, into a multi-million dollar going concern. With a list of patents and a longer list of global customers hailing from government, education, business, and industry, we just may get our wish: to bring more power, utility, precision, and productivity to collaborators solving our most pressing problems.
Tom Ward takes a deep dive on Mezzanine, our collaboration platform that helps teams visualize and analyze information together in the most effective fashion. He sits down with Padraig Scully in our London office and connects with John in Los Angeles for a tour of the multi-surface, multi-screen, multi-location, multi-stream solution, along with the Optical Wand that provides gesture control of the computing environment. They talk about a time when the ubiquity of this kind of capability – where the computing power of the individual is not bounded by the edge of one’s own device – will unleash an exhilarating revolution in the way work gets done.
Later, Tom catches up with John in Los Angeles and visits our warehouse lab where Pete Hawkes demonstrates the immersive and pixel-rich environments for stunning interaction with large-scale data visualizations that source from multiple machines and spread across dozens of screens. (The BBC’s Spencer Kelly made a similar visit a couple years back for BBC Click.) It’s reflective of the prototyping we do for IBM Watson.
“People ask about developing a portable VR version [of the software], but that wouldn’t be a shared experience,” explains B Cavello, from Watson. “When you’re making strategic decisions, and checking people’s facial expression to check everyone is on the same page, that level of disconnect doesn’t really work for us. Having a space where you can have a conversation and navigate the content immersively is really valuable.”
Read up in WIRED and be sure to share the article with colleagues and friends. Then, get in touch when you’re ready to bring multi-share to your teams and advanced technology solutions to your business.
Working with Watson
The goal of each Watson Experience Center—located in New York, San Francisco, and Cambridge—is to demystify AI and challenge visitors’ expectations through more tangible demonstrations of Watson technology. Visitors are guided through a series of narratives and data interfaces, each grounded in IBM’s current capabilities in machine learning and AI. These sit alongside a host of Mezzanine rooms where participants further collaborate to build solutions together.
The process for creating each experience begins with dynamic, collaborative research. Subject matter experts take members of the design and engineering teams through real-world scenarios—disaster response, financial crimes investigation, oil and gas management, product research, world news analysis—where we identify and test applicable data sets. From there, we move our ideas quickly to scale.
Giving everyone involved access to the immersive pixel canvas is key to the process. Designers must be able to see their ideas outside the confines of 15″ laptops and prescriptive software. Utilizing tools tuned for rapid iteration at scale, our capable team of designers, data artists, and engineers works side by side to envision and define each experience. The result is more than a polished marketing narrative; it’s an active interface that allows the exploration of data with accurate demonstrations of Watson’s capabilities—one that customers can see themselves in.
Under the Hood
Underlying the digital canvas is a robust spatial operating environment, g‑speak, which allows our team to position real data in a true spatial context. Every data point within the system, and even the UI itself, is defined in real-world coordinates (measured in millimeters, not pixels). Gestures, directional pointing, and proximity to screens help us create interfaces that better infer user intent and feel more human.
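To make the idea concrete, here is a minimal sketch—in Python, and emphatically not the g‑speak API, whose names and types are not shown in this article—of what it means for a UI to live in room coordinates: a wall display is a rectangle measured in millimeters, and a user's pointing gesture is a ray that can be intersected with that rectangle to recover intent. All names (`Vec3`, `Screen`, `point_on_screen`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    """A point or direction in room coordinates, in millimeters."""
    x: float
    y: float
    z: float

@dataclass
class Screen:
    """A wall display modeled as an axis-aligned rectangle in the z = 0 plane."""
    left_mm: float
    bottom_mm: float
    width_mm: float
    height_mm: float

def point_on_screen(origin: Vec3, direction: Vec3, screen: Screen):
    """Intersect a pointing ray with the screen plane (z = 0).

    Returns the (x, y) hit point in millimeters if the ray crosses the
    plane within the screen bounds in front of the user, else None.
    """
    if direction.z == 0:
        return None  # ray is parallel to the screen plane
    t = -origin.z / direction.z
    if t <= 0:
        return None  # screen is behind the pointing direction
    hx = origin.x + t * direction.x
    hy = origin.y + t * direction.y
    if (screen.left_mm <= hx <= screen.left_mm + screen.width_mm and
            screen.bottom_mm <= hy <= screen.bottom_mm + screen.height_mm):
        return (hx, hy)
    return None

# A user standing 3 m back from a 4 m-wide, 2 m-tall wall display,
# pointing straight ahead from 0.5 m right of center at 1.5 m height:
wall = Screen(left_mm=-2000, bottom_mm=0, width_mm=4000, height_mm=2000)
hit = point_on_screen(Vec3(500, 1500, 3000), Vec3(0, 0, -1), wall)
# hit == (500.0, 1500.0)
```

Because everything is expressed in millimeters rather than pixels, the same hit-test works unchanged whether the target is one screen or a canvas spread across dozens, which is the property the paragraph above is describing.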
This award-nominated collaboration with IBM is prototyped and developed at scale at Oblong’s headquarters in Los Angeles as well as IBM’s Immersive AI Lab in Austin. While these spaces are typically invite-only, IBM is increasingly open to sharing the content and the unique design ideas that drive its success with the public. This November, during Austin Design Week, IBM will host a tour of their Watson Immersive AI Lab, including live demonstrations of the work and a Q&A session with leaders from the creative team.
Can't make it to Austin? Contact our Solutions team for a glimpse of our vision of the future at our headquarters in the Arts District in Los Angeles.