It's all about the CONTENT
In the age of data-driven decisions, why do conference room collaboration tools still revolve around voice and video alone? Paul Sprague, Oblong's Director of Sales Engineering, takes a look at the rise of content conferencing.
Voice and video conferencing have come a long way since the introduction of VoIP, and in a business context it can be important to see facial expressions and body language. So it makes sense that video has become a key component of workplace communication. But is that the best we can do?
Data is the driving force for decision-making in today’s corporate landscape, so for many meetings content is now the most important part of the call.
Most video conferencing platforms still limit content sharing to one person’s connected device. Software platforms like Zoom and Skype follow this model, as do standards-based room endpoints from Cisco and Polycom. Even the latest interactive whiteboards like the Surface Hub still allow only a single device to share content.
According to HBR, the number of data sources that typical organizations use for making decisions can range from five to more than fifteen. In a typical meeting, most participants will bring at least one piece of data to the table, so connecting only a single stream of content leaves the majority of that material unshared. This can lead to uninformed decisions and create problems for teams down the road.
At Oblong, we have a simple proposition: conferencing tools should support content sharing from any number of participants so that all data can be shared in real time. Think of it as content conferencing.
Content conferencing is the next evolution of audio, video and web conferencing. In content-centric meetings, the focus is on the documents, applications, and websites that lead to decisions driven by data. Rather than relying on verbal communication, participants can spread out ideas like spreading out papers on a table.
Oblong’s Mezzanine allows up to ten laptops to share content into a meeting. Video conferencing is an important and integrated component of Mezzanine, but the majority of screen real estate is available for content: live streams, documents, and images. Users control material through a point-and-click interface, and all content can be shared to other rooms and remote participants.
Content conferencing isn’t unique to Oblong: Polycom Pano and Mersive Solstice allow multiple streams of content sharing, and larger systems like Prysm and Bluescape incorporate similar ideas. But Oblong’s Mezzanine offers the most flexible and complete solution, a full product family that scales to an organization's needs, and integration with popular enterprise persistent chat clients like Webex Teams. Forrester found that Mezzanine typically delivers a 226% ROI over a three-year period, with over $1.6 million in savings from improved business processes. From day one, Mezzanine was built to support multiple streams of content simultaneously to help teams solve critical business challenges.
For key business decisions, and for teams driven by data, audio and video alone are no longer enough. In the information age, conferencing tools should support data and our meetings should revolve around content. With Mezzanine, teams can make more informed decisions by letting all the content and data do the talking.
Watch the video below to see how distributed teams can connect with content, or contact us today to find out more.
Working with Watson
The goal of each Watson Experience Center—located in New York, San Francisco, and Cambridge—is to demystify AI and challenge visitors’ expectations through more tangible demonstrations of Watson technology. Visitors are guided through a series of narratives and data interfaces, each grounded in IBM’s current capabilities in machine learning and AI. These sit alongside a host of Mezzanine rooms where participants further collaborate to build solutions together.
The process for creating each experience begins with dynamic, collaborative research. Subject matter experts take members of the design and engineering teams through real-world scenarios—disaster response, financial crimes investigation, oil and gas management, product research, world news analysis—where we identify and test applicable data sets. From there, we move our ideas quickly to scale.
Access to the immersive pixel canvas for everyone involved is key to the process. Designers must be able to see their ideas outside the confines of 15″ laptops and prescriptive software. Using tools tuned for rapid iteration at scale, our team of designers, data artists, and engineers works side by side to envision and define each experience. The result is more than a polished marketing narrative; it's an active interface that allows the exploration of data with accurate demonstrations of Watson’s capabilities, one that customers can see themselves in.
Under the Hood
Underlying the digital canvas is a robust spatial operating environment, g‑speak, which allows our team to position real data in a true spatial context. Every data point within the system, and even the UI itself, is defined in real-world coordinates (measured in millimeters, not pixels). Gestures, directional pointing, and proximity to screens help us create interfaces that better infer user intent and more effectively humanize the UI.
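To make the millimeters-versus-pixels idea concrete, here is a minimal sketch (not g‑speak's actual API; the `Display` type and `to_pixels` helper are invented for illustration) of why real-world coordinates matter: a position defined once in millimeters maps to different pixel counts on different screens, but always refers to the same physical location in the room.

```python
from dataclasses import dataclass

MM_PER_INCH = 25.4

@dataclass
class Display:
    """A hypothetical screen participating in a shared spatial canvas."""
    dpi: float         # pixel density of this particular display
    origin_mm: float   # where the display's left edge sits in room coordinates

    def to_pixels(self, x_mm: float) -> int:
        """Map a room-coordinate x position (in mm) onto this display."""
        return round((x_mm - self.origin_mm) * self.dpi / MM_PER_INCH)

# The same room coordinate lands at different pixel offsets on each display,
# yet denotes one physical spot on the wall.
laptop = Display(dpi=220, origin_mm=0)   # small high-DPI screen
wall = Display(dpi=30, origin_mm=0)      # large low-DPI wall display

point_mm = 254.0  # 25.4 cm from the left edge
print(laptop.to_pixels(point_mm))  # 2200 pixels on the laptop
print(wall.to_pixels(point_mm))    # 300 pixels on the wall display
```

Defining UI and data positions in physical units is what lets a pointing gesture or a user's proximity to a screen be resolved against the same coordinate system as the content itself.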
This award-nominated collaboration with IBM is prototyped and developed at scale at Oblong’s headquarters in Los Angeles as well as IBM’s Immersive AI Lab in Austin. While these spaces are typically invite-only, IBM is increasingly open to sharing the content and the unique design ideas that drive its success with the public. This November, during Austin Design Week, IBM will host a tour of their Watson Immersive AI Lab, including live demonstrations of the work and a Q&A session with leaders from the creative team.
Can't make it to Austin? Contact our Solutions team for a glimpse of our vision of the future at our headquarters in the Arts District in Los Angeles.