g-speak


 

g-speak™ is Oblong's core technology platform. g-speak is used today to address high-value, real-time, big-data, and big-workflow challenges in applications such as military simulation, logistics and supply chain management, and energy grid management.

The g-speak platform enables the development of multi-user, multi-screen, multi-device, spatial, networked applications.

g-speak is available to licensed customers as an SDK with runtime libraries for standard computers and mobile devices.

Oblong also offers full-scale g-speak development environments that include high-end hardware for multi-user gestural input and object tracking.

Oblong's Client Solutions team works with Global Fortune 500 companies, government agencies, and other clients to deliver or jointly develop custom g-speak applications. For a glimpse at how our Client Solutions team has worked with customers previously, check out CIO Magazine’s interview with Boeing executives on their relationship with Oblong.


 


g-speak Features

g-speak is deployed to solve real-world problems, including:

  • Integration of large screens and multiple computers into room- and building-scale work environments
  • Analytics workflows integrating multiple data sets, large data sets, and multiple applications running in multiple locations
  • Operation of three-dimensional interfaces
  • Scalable multi-user collaborative environments
  • Large-scale interactive application sessions that run across enterprise networks


g-speak Architecture

The g-speak platform provides three core functional components: multi-device spatial input and output; Plasma networking and multi-application support; and a geometry engine that renders pixels across multiple screens with real-world spatial registration.



Multi-Device, Spatial I/O

g-speak allows any number of devices and screens to be used seamlessly together.

The g-speak platform supports practically any kind of input: mouse and keyboard, touch interfaces on mobile devices, web browsers, large-screen touch displays, spatial pointing devices, bare-hand and glove-based gestural input systems, and custom input devices.

A g-speak application can easily output to multiple screens. g-speak supports projectors, LCDs, and 3D displays. Different types of displays can be used together in a single environment.



Plasma Networking and Multi-Application Support

The g-speak Plasma networking framework makes interactive, multi-device development simple and scalable. Every g-speak application relies on Plasma to coordinate event streams, application synchronization over the network, and media transport.
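
As a minimal sketch of this coordination model (the Event and Pool types below are hypothetical stand-ins, not the actual Plasma API), Plasma-style coordination can be pictured as named pools of structured events that any number of producers deposit into and any number of consumers await:

    // Illustrative sketch only: a minimal, in-process stand-in for the kind of
    // event coordination Plasma provides. The names (Event, Pool, deposit,
    // await_next) are hypothetical and do not reflect the real Plasma API.
    #include <condition_variable>
    #include <deque>
    #include <map>
    #include <mutex>
    #include <string>
    #include <vector>

    // A structured event: tags describing what it is, plus a key/value payload.
    struct Event {
      std::vector<std::string> tags;               // e.g. {"pointing", "wand-3"}
      std::map<std::string, std::string> payload;  // e.g. {"screen", "wall-left"}
    };

    // A named, shared queue that many producers and consumers can use.
    // The real framework additionally carries this coordination across the
    // network and across machines.
    class Pool {
     public:
      void deposit(Event e) {
        std::lock_guard<std::mutex> lock(mu_);
        events_.push_back(std::move(e));
        cv_.notify_all();
      }
      Event await_next() {
        std::unique_lock<std::mutex> lock(mu_);
        cv_.wait(lock, [this] { return !events_.empty(); });
        Event e = events_.front();
        events_.pop_front();
        return e;
      }
     private:
      std::mutex mu_;
      std::condition_variable cv_;
      std::deque<Event> events_;
    };

Because the framework carries this coordination over the network, an input driver on one machine can deposit pointing events that applications on other machines pick up, which is what lets a single interactive session span many devices and screens.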

Plasma allows simultaneous interactivity from multiple users, software clients, and input devices.

Plasma allows large data sets to be processed and selectively displayed in real time across many screens.

Plasma allows application video and video streams to be moved between cooperating devices and screens with a standard API that supports buffering, synchronization, and flexible rendering.

Plasma allows the integration of unmodified legacy applications into a g-speak environment. Existing applications automatically benefit from network transparency and spatial input. A standard extension architecture makes it possible to build support for full gestural/spatial I/O and spatial semantics for any existing application, regardless of the underlying technology stack or operating system.
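
To make the extension idea concrete, here is a hedged sketch (the interface and names are hypothetical, not the real g-speak extension architecture): a legacy application is wrapped in an adapter that translates incoming spatial events into actions the unmodified application already understands.

    // Hypothetical sketch of a legacy-application adapter; the interface and
    // names are illustrative, not the actual g-speak extension architecture.
    #include <string>

    // A spatial pointing event delivered by the environment.
    struct PointingEvent {
      std::string screen;  // which display the user is pointing at
      double x, y;         // pixel coordinates on that display
      bool harden;         // true when the user commits, e.g. closes a fist
    };

    // A legacy application participates through an adapter that translates
    // spatial events into actions the unmodified application understands.
    class LegacyAppAdapter {
     public:
      virtual ~LegacyAppAdapter() = default;
      virtual void on_pointing(const PointingEvent& ev) = 0;
    };

    // Example adapter: turn a spatial "click" into an ordinary mouse click
    // injected into the existing application.
    class MapViewerAdapter : public LegacyAppAdapter {
     public:
      void on_pointing(const PointingEvent& ev) override {
        if (ev.harden) {
          inject_mouse_click(ev.x, ev.y);  // platform-specific event injection
        }
      }
     private:
      void inject_mouse_click(double /*x*/, double /*y*/) {
        // Left as a stub: the real injection mechanism depends on the legacy
        // application's operating system and windowing toolkit.
      }
    };

A basic adapter that only injects mouse events is enough to give an unmodified application network-transparent spatial input; a fuller adapter maps gestures onto application-specific commands.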



Real-World Pixels

Every graphical and input object in a g-speak environment has real-world spatial identity and position.

This spatial architecture makes it possible for any number of users to move both 2D and 3D data between any number of screens.

The spatial architecture also allows natively spatial data, such as maps, aerial imagery, geographic metadata, aircraft flight paths, and geolocated entities of all kinds, to be displayed accurately on any 2D or 3D display.

The g-speak platform is display agnostic. Wall-sized projection screens can co-exist with desktop monitors, table-top screens, large touch screens, and hand-held devices. Every display can be used simultaneously, and data can be moved to the displays that are most appropriate. Three-dimensional displays can be used, too, without modification to g-speak application code.
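
A minimal sketch of the registration idea behind this, under assumed names (the Screen type and its fields are illustrative, not the g-speak geometry API): if each display's physical placement and pixel pitch are known, any pixel coordinate can be converted into a point in room coordinates, and the same conversion works for every kind of display.

    // Illustrative sketch of real-world pixel registration; the types and
    // names here are hypothetical, not the actual g-speak geometry API.
    #include <array>

    using Vec3 = std::array<double, 3>;

    static Vec3 add(Vec3 a, Vec3 b)     { return {a[0] + b[0], a[1] + b[1], a[2] + b[2]}; }
    static Vec3 scale(Vec3 v, double s) { return {v[0] * s, v[1] * s, v[2] * s}; }

    // A display described by its physical placement in the room: the position
    // of its top-left pixel, unit vectors along its rows and columns, and its
    // physical pixel pitch.
    struct Screen {
      Vec3 origin;        // room coordinates of pixel (0, 0), in meters
      Vec3 across;        // unit vector along increasing x (columns)
      Vec3 down;          // unit vector along increasing y (rows)
      double pixel_size;  // meters per pixel
    };

    // Convert a pixel coordinate on a given screen into a point in room space.
    // Because every screen carries its own placement, the same function serves
    // a wall projector, a desktop monitor, a table-top screen, or a 3D display.
    Vec3 pixel_to_room(const Screen& s, double px, double py) {
      return add(s.origin,
                 add(scale(s.across, px * s.pixel_size),
                     scale(s.down,   py * s.pixel_size)));
    }

With every screen registered this way, moving data to "the display on the left" becomes a geometric query over room coordinates rather than a per-display special case.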



g-speak Form Factors

Oblong offers g-speak as a software SDK and runtime libraries. The complete g-speak graphical application development stack is available on Linux and OS X. The g-speak Plasma networking components are available on Linux, OS X, Microsoft Windows, Java, iOS, and Android platforms.

Oblong also offers hardware configurations that are designed for g-speak development and deployment.

Oblong's multi-user, glove-based gestural systems set the standard for next-generation interactive computing. These are the gestural interfaces made famous by the film Minority Report.

Oblong also works with customers to specify custom hardware configurations, for example walk-up gestural kiosks that support mobile device interaction plus gesture recognition using the PrimeSense and Microsoft Kinect depth sensors.



Working with Us

Oblong's Client Solutions team can specify custom hardware and software configurations, provide developer training and certification, and work on site alongside customer engineering teams. Contact us to learn more about working with our Client Solutions team.




Learn More

For more information on g-speak or licensing our SDK, contact our business development team.