Next Generation Video Conferencing

Serial entrepreneur and tech investor Brad Feld shares his thoughts on the shortcomings of modern video conferencing, and why Rumpus will fill that gap.

I’ve been a remote worker for many years. As a result, I think I’ve used every flavor of video conferencing and screen sharing going back to Carbon Copy and pcAnywhere.

Today’s belle of the ball is Zoom, which offers an outstanding video and audio conferencing experience. But, like most video conferencing services, it has significant limitations when you are working with content in a video conference. Existing video conferencing is adequate when one person shares content with a group. Sometimes you can pull it off when two people share content within a single video call. But once you increase the number of users trying to share content, or get into a real collaborative situation where multiple users are trying to comment on and interact with multiple pieces of shared content, everything breaks down very quickly.

We’ve been investors in Oblong for many years. They invented the idea of multi-stream collaboration and have been implementing multi-stream sharing in high-end video conferencing rooms with their Mezzanine product. In addition, they provide a spatial operating system so you can control the interaction simply by pointing at the screen. And, with their g-speak platform, you can integrate this capability into any technology environment.

But to do this, you needed a Mezzanine room system. Until recently. Now, you can use Oblong’s cloud-based collaboration system, called Rumpus, to bring all the multi-stream sharing and concurrent interaction features of Mezzanine to any video conferencing system, including Zoom, BlueJeans, Webex, and Google Meet.

Show is better than tell for this, so I’ll walk you through several examples. Let’s use Zoom and launch Rumpus using a Zoom conference ID. Rumpus is the window on the left and Zoom is the window on the right. All of the users automatically end up in the Rumpus app based on their Zoom ID.

Next, each person in the conference can share their screen at the same time (in the Rumpus window). You can see the different screen shares at the bottom, and any of the users can switch between them. In this case, three screen shares are happening at the same time, with the current focus on the one in the middle.

Now a fourth video user has joined, and we have a slightly different view (partial-screen side by side instead of a full-screen view). In the Rumpus window, you can see different colored annotations from the different users. All of the annotations are live and persist on whichever screen is in focus.
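To make that model concrete, here’s a minimal sketch of the session state this walkthrough implies. It is purely illustrative – the types below (Session, ScreenShare, Annotation) are invented for this post and are not Rumpus’s actual implementation or API – but it captures the three ideas at play: several concurrent shares, a focus any user can switch, and per-user annotations that stay with each shared screen.

```cpp
// Hypothetical sketch only -- none of these types come from Rumpus itself.
#include <map>
#include <string>
#include <vector>

struct Annotation {
    std::string user_id;  // each user's annotations render in their own color
    double x, y;          // position within the shared screen
};

struct ScreenShare {
    std::string owner_id;                 // every participant may share one
    std::vector<Annotation> annotations;  // live, and persists with the stream
};

struct Session {
    std::map<std::string, ScreenShare> shares;  // keyed by the sharing user
    std::string focused_share;                  // any user may switch the focus
};

int main() {
    Session s;
    s.shares["alice"] = {"alice", {}};
    s.shares["bob"] = {"bob", {}};
    s.shares["carol"] = {"carol", {}};
    s.focused_share = "bob";  // the share "in the middle" has focus
    // A fourth participant annotates the focused share without owning one.
    s.shares[s.focused_share].annotations.push_back({"dana", 0.4, 0.6});
    return 0;
}
```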

With Rumpus, the conversation just flows. There’s an always-on opportunity to access content – any of the material anyone in the conference needs to talk about is always accessible. You don’t have to ask permission to share, nor do you have to override someone else’s presentation, since everyone can share a different screen simultaneously. Each user has a personal cursor, so annotations happen live rather than someone verbally trying to explain what they are virtually pointing at. Oblong’s years of experience with multi-share in Mezzanine suggest endless extensions to this kind of collaboration, each of which is quickly being rolled out in Rumpus.

The way we communicate and collaborate online is rapidly evolving. I think video conferencing has entered a new era where it is infrastructure that fades nicely into the background. However, the collaboration layer is still nascent and wide open for innovation. Oblong’s experience over the last decade at the high end makes it a natural fit for bringing this collaboration capability to the masses. And this is another step on the path toward Oblong CEO John Underkoffler’s vision of a new UI for always-on collaboration.

Rumpus is in public beta right now on the Mac. Download it for free at rumpus.co and invite your team to try it out alongside their favorite video conferencing system. If you are interested, the Oblong team will work with you to help you get set up and using Rumpus, as they are iterating rapidly on the beta. Drop me an email and I’ll connect you.

Brad Feld
Managing Director at Foundry Group

Working with Watson

The goal of each Watson Experience Center—located in New York, San Francisco, and Cambridge—is to demystify AI and challenge visitors’ expectations through more tangible demonstrations of Watson technology. Visitors are guided through a series of narratives and data interfaces, each grounded in IBM’s current capabilities in machine learning and AI. These sit alongside a host of Mezzanine rooms where participants collaborate to build solutions together.

The process for creating each experience begins with dynamic, collaborative research. Subject matter experts take members of the design and engineering teams through real-world scenarios—disaster response, financial crimes investigation, oil and gas management, product research, world news analysis—where we identify and test applicable data sets. From there, we move our ideas quickly to scale.

Access to the immersive pixel canvas for everyone involved is key to the process. Designers must be able to see their ideas outside the confines of 15″ laptops and prescriptive software. Using tools tuned for rapid iteration at scale, our capable team of designers, data artists, and engineers works side by side to envision and define each experience. The result is more than a polished marketing narrative; it’s an active interface that allows the exploration of data with accurate demonstrations of Watson’s capabilities—one that customers can see themselves in.

Under the Hood

Underlying the digital canvas is a robust spatial operating environment, g‑speak, which allows our team to position real data in a true spatial context. Every data point within the system, and even the UI itself, is defined in real-world coordinates (measured in millimeters, not pixels). Gestures, directional pointing, and proximity to screens help us create interfaces that better understand user intent and more effectively humanize the UI.
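To illustrate what millimeter-based authoring buys you, here’s a small hypothetical sketch – not g‑speak’s actual API; the Display and Vec3 types are invented for this example. The point is that an element placed in room coordinates resolves to a different number of pixels on each physical panel, so layout stays true to the room rather than to any one screen.

```cpp
// Hypothetical sketch -- not the real g-speak API. It illustrates authoring
// UI and data positions in room coordinates (millimeters) and resolving
// pixels per display, instead of designing in pixels directly.
#include <cstdio>

struct Vec3 { double x, y, z; };  // millimeters, measured from a room origin

struct Display {
    Vec3 center;      // physical center of the panel, in mm
    double width_mm;  // physical width of the panel
    int width_px;     // native horizontal resolution
    // Convert a horizontal world-space offset (mm) to pixels on this panel.
    double MmToPx(double mm) const { return mm * width_px / width_mm; }
};

int main() {
    Display wall{{0, 1500, 0}, 1430.0, 3840};   // a ~65" wall panel
    Display laptop{{0, 1500, 0}, 344.0, 2880};  // a ~15" laptop panel
    Vec3 point{-200.0, 1500.0, 0.0};            // 200 mm left of both centers

    double offset = point.x - wall.center.x;
    std::printf("%.0f mm -> %.1f px on the wall, %.1f px on the laptop\n",
                offset, wall.MmToPx(offset), laptop.MmToPx(offset));
    // Same physical offset, different pixel counts: each display resolves
    // its own pixels while the layout stays physically consistent.
    return 0;
}
```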

This award-nominated collaboration with IBM is prototyped and developed at scale at Oblong’s headquarters in Los Angeles as well as at IBM’s Immersive AI Lab in Austin. While these spaces are typically invite-only, IBM is increasingly open to sharing the content and the unique design ideas that drive its success with the public. This November, during Austin Design Week, IBM will host a tour of its Immersive AI Lab, including live demonstrations of the work and a Q&A session with leaders from the creative team.

Can't make it to Austin? Contact our Solutions team for a glimpse of our vision of the future at our headquarters in the Arts District in Los Angeles.

Interested in Learning More?

Start a conversation with our team about delivering the best collaboration experience for your teaming spaces.

Contact Us