Thought Leadership

Talking Cyberspace and Real Space at Creative Mornings LA

I recently had the pleasure of giving a talk at Creative Mornings Los Angeles. Creative Mornings is a breakfast lecture series for the creative community, held once a month in cities around the world.


Oblong's ambitions embrace both technology and design, so we were excited for this chance to be in conversation with LA's vibrant community of designers, artists, animators, hackers, and makers. 

What follows is a gloss on the content of the talk. I began with a little background on myself, someone who as a child was more of a reader than a hacker. I loved imaginary worlds. 

But I also ended up being drawn into the imaginary worlds of pixels which were springing up around me as a kid of the 80s, in the form of video games and home computers. I especially loved Neuromancer (1984), a great novel but also an unexpectedly fun video game, and, what's more, the source of a new word of power, a new idea for me: cyberspace.

This word conjured up a vision of a three-dimensional, immersive universe of data, a mental palace which would envelop its users, indeed would envelop and transform all of society.  All this seemed urgent, seemed like it might be just around the corner.

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts... A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding...

Computer interfaces at that time were only just emerging from being one-dimensional scrolls—scrolls of blocky text—into their second phase as vivid, reactive 2D surfaces.  This was when the GUI and the mouse became the norm.   

From there, it was just one more step into the third dimension. Through the 1990s, new kinds of imaginary worlds appeared in the glass, worlds which had a genuine third dimension: baby cyberspaces. The first ones were crude and pixelated, and then a kind of race began to make them more real, more colorful, more vivid.

But even as the realism increased, there remained something insular about the inward-pointing worlds of 3D computer graphics, worlds behind glass. Do you remember the old screensaver from the 1990s, where realistic fish appeared to swim around inside the monitor? It was amazing at first; then it was fun; and then it was boring. We sensed the limits quickly—it's more or less a diorama inside a shoebox. And isn't this true on some level of Halo or Grand Theft Auto as well? They are dioramas, circumscribed worlds.

So it's worth stopping to ask the question: is that where the future of technology is taking us—deeper into the diorama? Further and further down a rabbit hole of imaginary 3D worlds, worlds of blossoming complexity, but limited scope?

I'm told that putting on virtual reality goggles is a conversion experience for some people: here, at last, is cyberspace. 

Well . . . maybe? 

Or maybe it's more of the same: more of the fish tank. We have to put on this special equipment, this VR scuba mask, so we can dunk our head right into the aquarium.

Do people actually want their head in the aquarium? Won't they run out of air at some point? 

At Oblong, we think there are other paths to the future. These paths don’t only lead our minds deeper into the diorama in the screen—they actually point right out of it, back into the 3D space we already had: the world we live and breathe in. 

In this world, there are real bodies, arranging themselves in complex social and spatial configurations, based on a host of subtle cues, needs, and demands. And to meet these demands, people outfit their environment with technology.

Indeed, the room itself is a technology. Offices, warehouses, cubicles, kitchens, living rooms, operating rooms, workshops, even the inside of a train or a car — these are all useful dioramas we’ve evolved around ourselves, over the course of many years.  

Oblong's hunch is that computing technology can illuminate all these rooms and make them better. To do that, we have to start with a respectful understanding of the myriad ways that people already know how to situate their work in three-dimensional—non-cyber—space. 

This isn't to discount the power and the vision of imaginary worlds and palaces of data. Imaginary worlds are an essential part of us; cyberspace has its uses. 

But the real world is the ultimate interface.  

Working with Watson

The goal of each Watson Experience Center—located in New York, San Francisco, and Cambridge—is to demystify AI and challenge visitors' expectations through more tangible demonstrations of Watson technology. Visitors are guided through a series of narratives and data interfaces, each grounded in IBM's current capabilities in machine learning and AI. These sit alongside a host of Mezzanine rooms where participants collaborate to build solutions together.

The process for creating each experience begins with dynamic, collaborative research. Subject matter experts take members of the design and engineering teams through real-world scenarios—disaster response, financial crimes investigation, oil and gas management, product research, world news analysis—where we identify and test applicable data sets. From there, we move our ideas quickly to scale.

Access to the immersive pixel canvas for everyone involved is key to the process. Designers must be able to see their ideas outside of the confines of 15″ laptops and prescriptive software. Utilizing tools tuned for rapid iteration at scale, our team of designers, data artists, and engineers works side by side to envision and define each experience. The result is more than a polished marketing narrative; it's an active interface that allows the exploration of data with accurate demonstrations of Watson's capabilities—one that customers can see themselves in.

Under the Hood

Underlying the digital canvas is a robust spatial operating environment, g‑speak, which allows our team to position real data in a true spatial context. Every data point within the system, and even the UI itself, is defined in real-world coordinates (measured in millimeters, not pixels). Gestures, directional pointing, and proximity to screens help us create interfaces that better understand user intent and more effectively humanize the UI.
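To make the spatial framing concrete, here is a minimal, hypothetical sketch in plain C++ (not the actual g‑speak API) of what it can mean to describe a display by its physical placement in millimeters and to resolve a pointing gesture as a ray against that surface. All of the names and numbers below are illustrative assumptions.

```cpp
// Illustrative sketch only: UI surfaces described in room coordinates (mm),
// with a pointing gesture resolved as a ray cast against the screen's plane.
#include <cstdio>
#include <cmath>

struct Vec3 { double x, y, z; };  // millimeters, origin at a fixed room corner

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A display described by where it physically hangs in the room,
// not by its pixel resolution.
struct Screen {
  Vec3 origin;                 // lower-left corner, in mm
  Vec3 normal;                 // facing direction (unit vector)
  double width_mm, height_mm;  // physical extent
};

// Intersect a pointing ray (hand position + pointing direction) with the
// screen's plane; writes the hit point in room coordinates if it exists.
bool point_at(const Screen& s, Vec3 hand, Vec3 dir, Vec3* hit) {
  double denom = dot(dir, s.normal);
  if (std::fabs(denom) < 1e-6) return false;  // pointing parallel to the screen
  double t = dot(sub(s.origin, hand), s.normal) / denom;
  if (t < 0) return false;                    // pointing away from the screen
  *hit = {hand.x + t * dir.x, hand.y + t * dir.y, hand.z + t * dir.z};
  return true;
}

int main() {
  // A 5 m wide wall display mounted 800 mm off the floor, facing the room.
  Screen wall{{0, 0, 800}, {0, 1, 0}, 5000, 2800};
  // A participant standing 3 m back, pointing slightly up and to the right.
  Vec3 hand{1200, 3000, 1500}, dir{0.2, -1.0, 0.1};
  Vec3 hit;
  if (point_at(wall, hand, dir, &hit))
    std::printf("pointing at (%.0f, %.0f, %.0f) mm on the wall\n", hit.x, hit.y, hit.z);
  return 0;
}
```

Because everything lives in the same room-scale coordinate frame, the same hit point can drive a cursor on that wall, hand content to a neighboring display, or respond to how close the person is standing.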

This award-nominated collaboration with IBM is prototyped and developed at scale at Oblong’s headquarters in Los Angeles as well as IBM’s Immersive AI Lab in Austin. While these spaces are typically invite-only, IBM is increasingly open to sharing the content and the unique design ideas that drive its success with the public. This November, during Austin Design Week, IBM will host a tour of their Watson Immersive AI Lab, including live demonstrations of the work and a Q&A session with leaders from the creative team.

Can't make it to Austin? Contact our Solutions team for a glimpse of our vision of the future at our headquarters in the Arts District in Los Angeles.

Interested in Learning More?

Start a conversation with our team to deliver the best collaboration experience for your teaming spaces.