What Will the Future Look Like?
With an elite customer base, roots at the MIT Media Lab, and a history in Hollywood forecasting the future of human-computer interfaces, we are sometimes sought out for expert opinion. While we can only accept a handful of invitations (we're pretty busy at HQ), we are always happy when we have time to join other futurists in looking ahead at what's next.
Oblong chief scientist John Underkoffler joined a number of recent gatherings to talk about all things future tech:
From the Science of Fiction to LA 2020, USC associate professor and 5D co-founder Alex McDowell led a lively and inspired summit we're still talking about, which earned mentions everywhere from Popular Mechanics to the New Yorker.
During a week-long blitz of Hollywood premieres, the Science & Entertainment Exchange hosted a panel discussion with several leading experts, including folks from Microsoft Research, about the science behind Iron Man 3. The goal of the Exchange is to connect entertainment industry professionals with top scientists and engineers, and to inspire the next generation of computer scientists. It seems to be working; our friend Jeremy Morris, who works with the film's star, Robert Downey Jr., attended the event and shared this anecdote: "Behind me in the audience were three teenage girls who, suffice to say, totally lost their minds when they saw your [Oblong] video presentation. 'OMG, Carmen sooo has to get herself one of those.' Honestly, if you could have signed them up for a Masters in Comp Sci right then and there, they would have done it on the dotted line."
CNN International's prime time news show, Connect the World, tapped us recently to talk about the path from science fiction to contemporary reality and how the gadgetry we see in film and TV is already making its way out of the lab and into the commercial world. The a/v connection from CNN's LA studio to the show's host in London was a little labored, but the "live via satellite" experience was fun nonetheless.
What will the future look like? Environments that are multi-screen, habits that are multi-device, rooms and objects that are spatially aware, plus pixels, pixels everywhere.
Working with Watson
The goal of each Watson Experience Center—located in New York, San Francisco, and Cambridge—is to demystify AI and challenge visitors' expectations through more tangible demonstrations of Watson technology. Visitors are guided through a series of narratives and data interfaces, each grounded in IBM's current capabilities in machine learning and AI. These sit alongside a host of Mezzanine rooms where participants collaborate further to build solutions together.
The process for creating each experience begins with dynamic, collaborative research. Subject matter experts take members of the design and engineering teams through real-world scenarios—disaster response, financial crimes investigation, oil and gas management, product research, world news analysis—where we identify and test applicable data sets. From there, we move our ideas quickly to scale.
Access to the immersive pixel canvas for everyone involved is key to the process. Designers must be able to see their ideas outside the confines of 15″ laptops and prescriptive software. Using tools tuned for rapid iteration at scale, our capable team of designers, data artists, and engineers works side by side to envision and define each experience. The result is more than a polished marketing narrative; it's an active interface that allows the exploration of data with accurate demonstrations of Watson's capabilities—one that customers can see themselves in.
Under the Hood
Underlying the digital canvas is a robust spatial operating environment, g‑speak, which allows our team to position real data in a true spatial context. Every data point within the system, and even the UI itself, is defined in real-world coordinates (measured in millimeters, not pixels). Gestures, directional pointing, and proximity to screens help us build interfaces that better infer user intent and more effectively humanize the UI.
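The idea of describing a display in millimeters rather than pixels can be sketched roughly as follows. This is a hypothetical illustration, not g‑speak itself: the class and function names here are invented for clarity, and a real spatial operating environment handles far more (orientation, multiple screens, gesture input).

```python
from dataclasses import dataclass

@dataclass
class Screen:
    """A display placed in real-world space, described in millimeters."""
    origin_mm: tuple[float, float, float]  # world position of the top-left pixel
    px_pitch_mm: float                     # physical width/height of one pixel

def pixel_to_world(screen: Screen, px: int, py: int) -> tuple[float, float, float]:
    """Map a pixel coordinate on a screen to a point in world space (mm)."""
    x0, y0, z0 = screen.origin_mm
    return (
        x0 + px * screen.px_pitch_mm,   # world x grows with pixel column
        y0 - py * screen.px_pitch_mm,   # pixel rows grow downward; world y grows upward
        z0,                             # the screen is a plane at fixed depth
    )

# A wall display whose top-left corner sits 2 meters up, with 0.5 mm pixels.
wall = Screen(origin_mm=(0.0, 2000.0, 0.0), px_pitch_mm=0.5)
print(pixel_to_world(wall, 100, 50))  # (50.0, 1975.0, 0.0)
```

Once UI elements live in a shared world coordinate frame like this, a pointing gesture can be resolved against any screen in the room by geometry alone, rather than per-display pixel math.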
This award-nominated collaboration with IBM is prototyped and developed at scale at Oblong’s headquarters in Los Angeles as well as IBM’s Immersive AI Lab in Austin. While these spaces are typically invite-only, IBM is increasingly open to sharing the content and the unique design ideas that drive its success with the public. This November, during Austin Design Week, IBM will host a tour of their Watson Immersive AI Lab, including live demonstrations of the work and a Q&A session with leaders from the creative team.
Can't make it to Austin? Contact our Solutions team for a glimpse of our vision of the future at our headquarters in the Arts District in Los Angeles.