Mezzanine 3—which expands the Mezzanine workspace onto all walls of the room with flexible display configurations—rethinks the way you organize your visual content with a brand-new snapping algorithm. Giving careful attention to the unique aspects of the system, such as the spatial wand and multi-user control, our designers and engineers crafted a system that helps you command the workspace with precision. We believe we've found a solution that aligns with the user's intent, empowering users to manipulate items freely while maintaining precision and purpose, and ultimately making it faster and easier to get work done.
The wand's high-precision tracking and six degrees of spatial freedom enable natural gestures for simultaneously moving and resizing items around the room. However, the freedom to manipulate items with such precision can also inhibit accurate placement, such as when attempting to neatly position two items side-by-side with no wasted pixels. We hate wasting pixels. When we set out to design Mezzanine 3, we sought to retain the freedoms that the wand affords while simultaneously making it easier to arrange content in an organized way.
Paper, Pencil, and Prototypes
We began sketching. An abundance of ideas spilled onto the pages of our sketchbooks. We could tessellate the workspace into small units, like graph paper, and allow placement of items with granular precision. Or snap individual items into large tiles. Or allow content to snap to the edges of the screens and move as if on rails. Or consider the placement of items with respect to each other, snapping to their edges or negative spaces.
A small sampling of our early sketches.
Choosing any one idea seemed like a daunting task, so we built prototypes to see how they felt. Some died quickly while others showed promise, but no solution stood out. We did some research. Similar problems have already been solved by others, but we couldn't find anything that fit the specific needs of Mezzanine, with the right balance between flexibility and constraint, and appropriate for multi-user, wand-driven input.
We resharpened our pencils, and eventually came to a realization. All of our designs shared a key element: at any given moment, there are a finite number of areas on the screen that a particular item might snap to. By devising a language to describe those areas and a method for choosing the best match, we could build not just one solution but a system for exploring many of the ideas we'd sketched. That may sound complicated, but the ideas at play are fairly simple, and the math (yes, there's math!) isn't so bad either. If math makes you cry, feel free to skip ahead.
Putting Things in Place
So, what does it mean to snap an item into place? And when should it snap? Intuitively, we want an item to snap when it's "close enough" to the center of the target. Each snap target has a center point (`x_t`, `y_t`) (here we're using the subscript `t` to indicate a property of the target) and a radius (`r`) within which an item may be close enough to snap.
Each snap target has a center location and radius of effect.
From this we compute a match percentage for its position, `M_(pos)` (a value between 0 and 1), that measures the fit. Here's what it looks like:
`M_(pos) = 1 - sqrt((x_t - x)^2+(y_t - y)^2) / r`
The scariest bit on top is really just the Pythagorean theorem, used to calculate the distance between the item and the target. We divide that distance by the target's radius of influence, and subtract the result from 1. When the distance is zero, we get:
`1 - (0 / r) = 1`
A perfect match. When the distance is equal to the radius, we get:
`1 - (r / r) = 0`
No match. In practice, we constrain the result so that we never get a negative match value. Using this simple technique, we can compute a match value for the position of an item relative to any target.
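This position match can be sketched in a few lines of Python. This is our own minimal illustration of the formula above, not Mezzanine's implementation; the function name and signature are ours.

```python
import math

def position_match(x, y, target_x, target_y, radius):
    """Match percentage for position: 1.0 at the target's center,
    falling off linearly to 0.0 at the edge of its radius of effect."""
    distance = math.hypot(target_x - x, target_y - y)
    # Clamp so items beyond the radius never yield a negative match.
    return max(0.0, 1.0 - distance / radius)
```

An item sitting exactly on the target's center scores 1.0; an item at the edge of the radius (or beyond, thanks to the clamp) scores 0.0.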
Finding the Right Fit
So it's in the right spot. Is it the right size and shape?
Each snap target has a width (`w_t`) and a height (`h_t`). As above, we'd like to calculate a similarity between the item and target sizes to measure the fit. We could just compute the area of each and compare them. This isn't a bad idea. However, consider a target with width 5 and height 4, and an item of width 2 and height 10. Both have an area of 20, but they are far from a perfect match. We need to take into account the shape of the item—its aspect ratio—not merely its size.
Four ways to obtain a 25% match (targets in red, items in blue)
To account for differences in both size and aspect ratio, here's what we came up with:
`M_(size) = (min( w, w_t ) * min( h, h_t )) / (max( w, w_t ) * max( h, h_t ))`
In the numerator we compute an area based on the minimum width and height between the item and the target. If you envision the item (blue) overlaid on the target (red) with their upper left corners aligned, this is the area of overlap (purple). The denominator represents an area computed from their maximum width and height, or their bounding rectangle.
The item (blue) and target (red) are overlaid, and their widths and heights used to compute a match. Disparate aspect ratios incur an added penalty even when areas are similar.
When item and target have the same width and height, the numerator and denominator are also equal, and we get 1. A perfect match. When the item fits entirely within the target area (or vice versa), the result is the percentage of coverage. However, when one differs from the other in aspect, the match value trends toward zero.
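In code, the size match is just as compact. Again, this is a sketch in our own notation rather than Mezzanine's actual source:

```python
def size_match(w, h, target_w, target_h):
    """Match percentage for size and shape: the overlap area (with
    upper-left corners aligned) divided by the bounding-rectangle area."""
    overlap = min(w, target_w) * min(h, target_h)
    bounding = max(w, target_w) * max(h, target_h)
    return overlap / bounding
```

The 5-by-4 target and 2-by-10 item from earlier have equal areas of 20, but this measure scores them at only 8/50 = 0.16, penalizing the mismatched aspect ratios.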
Bringing the Pieces Together
So now we've got a match percentage for the position, and another for the size. From these values, we compute a combined match percentage (`M`). Each target defines a ratio (`R`)—again from 0 to 1—to weight the match for position vs. size, and a weight (`W`) which biases the final value with respect to other targets, letting us prioritize important targets over less important ones.
`M = (M_(pos) * R + M_(size) * (1 - R)) * W`
This works like weighted grading algorithms, biasing the result toward a position or a size match for each target. Through hands-on exploration, we discovered that size tends to need a much higher weight than position to feel aligned with the user's intent. Perhaps users can more easily visualize an item snapping into a spot of similar size, like a puzzle piece. On the other hand, large changes in scale—even in the same position—feel abrupt and unintentional.
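Combining the two matches is a single weighted blend. A minimal sketch, with names of our own choosing:

```python
def combined_match(m_pos, m_size, ratio, weight):
    """Blend the position and size matches using the target's ratio R,
    then scale by its weight W relative to other targets."""
    return (m_pos * ratio + m_size * (1.0 - ratio)) * weight
```

A low `ratio` value reflects the finding above: the size match dominates, so an item snaps most readily into a spot of similar size.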
Now that we've got a final match percentage, the only thing left to do is compute this value for each potential snap target, pick the best match, and apply it whenever it exceeds our global match threshold, `M_(thresh)`. We can adjust this threshold to bias the system toward free-form placement, or toward more aggressive snapping constraints.
We can also increase a target's weight to provide hysteresis—a technique that biases the current match over others. This keeps the logic from feeling fussy, and prevents Mezzanine from rapidly bouncing between snap targets with only subtle movements of the wand.
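Selecting the winner might look something like the sketch below. The multiplicative hysteresis boost is one plausible way to realize the weight increase described above; the exact mechanism and factor are our assumptions.

```python
def best_match(matches, threshold, current_index=None, hysteresis=1.25):
    """Pick the best snap target, if any exceeds the global threshold.
    `matches` holds one combined match value per target. The currently
    matched target (if any) gets a hysteresis boost so small wand
    movements don't bounce between nearby targets."""
    best = None
    best_value = threshold
    for i, m in enumerate(matches):
        if i == current_index:
            m *= hysteresis  # assumed boost factor, for illustration
        if m > best_value:
            best, best_value = i, m
    return best  # index of the winning target, or None for free placement
```

Returning `None` when nothing clears the threshold is what preserves free-form placement; raising or lowering the threshold tunes the system's aggressiveness.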
An Exercise in Flexibility
Okay, so all we've really said is that something which is "close enough" in both position and size should snap into place, with a bit of math to back it up. That seems obvious. And in some ways it is. The payoff comes from our ability to define a collection of snap targets in real time, as items are manipulated in the workspace.
For instance, we can define a target based on the size or aspect ratio of the object that was grabbed. We can define different snap targets based on the type of content being manipulated. We can define a target at the item's initial size and position, making it easy to abort an interaction without changing anything. We can adjust the snap targets dynamically based on the manipulations of other users in this multi-user system. And we can tweak the radius of effect, the bias toward size or position, the weight of snap targets with respect to each other, and the overall aggressiveness of the constraints.
Take the simplest behavior of snapping an item to be fully visible within the workspace. We define a single snap target at the center of the workspace, with an appropriate radius (say, half its height), and a width and height equal to the inscribed width and height of the item being dragged. The target is dependent on the width and height of the grabbed item, but we know that information.
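That single center target could be constructed each frame like this. The dictionary layout and function name are ours, purely for illustration:

```python
def center_target(item_w, item_h, ws_w, ws_h):
    """A single snap target at the workspace center, with a radius of
    half the workspace height, sized to the grabbed item's inscribed
    dimensions (largest size with the item's aspect ratio that fits)."""
    scale = min(ws_w / item_w, ws_h / item_h)
    return {
        "x": ws_w / 2, "y": ws_h / 2,
        "radius": ws_h / 2,
        "w": item_w * scale, "h": item_h * scale,
    }
```

For a 4:3 item in a 16:9 workspace, the inscribed size fills the full workspace height with pillarboxing on either side.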
A single snap target in the center of the screen.
In this example, an item will snap to fit in the center of the screen if it is close enough to the center and approximately the inscribed size. Now consider the desire to provide alignment capabilities, snapping not only to the center, but also to the left and right edges (for a vertical item—the same pattern applies for top and bottom alignment with a horizontal item). We can think of this as three potential final locations for the item, so we define three snap targets instead of one. Perhaps the radius, the weight, or both for the center target are greater, so that it's easiest to center the item on the screen.
Snap targets defined for the left and right edges of the screen. These targets may overlap depending on the aspect ratio of the item.
Now imagine wanting a way to snap an item in one axis, while allowing it to move freely in the other, as though on rails. Because we can reconfigure the snap targets each frame, we add a fourth snap target to the above algorithm, at the current `x` value but with a constrained `y` value. This target will move as the item moves, allowing the item to slide easily back and forth but remain affixed to the top and bottom edges.
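Because the targets are rebuilt every frame, the "rail" target can simply copy the item's current `x`. A sketch under the same illustrative conventions as before (the names are ours):

```python
def rail_target(item_x, item_w, item_h, ws_w, ws_h, rail_radius):
    """An axis-constrained snap target: it follows the item's current x
    each frame (free in x), but pins y and the height so the item spans
    the workspace's top and bottom edges."""
    scale = ws_h / item_h  # fill the vertical axis
    return {
        "x": item_x,       # recomputed every frame: moves with the item
        "y": ws_h / 2,     # constrained: vertically centered
        "radius": rail_radius,
        "w": item_w * scale, "h": ws_h,
    }
```

Since the target's `x` always equals the item's `x`, the position match along that axis is always perfect; only vertical drift and size govern whether the item stays snapped to the rail.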
A snap target which constrains to the horizontal axis. The dotted red line indicates the range of the target as the item moves.
Summing it Up
The new snapping system in Mezzanine 3 allows us to define as many snap targets as desired, at once, for a given object. Once we've defined them, the matching system does the rest, automatically choosing and applying the best match when appropriate. The end result proves to be a powerful system for sizing and arranging content in the workspace, and for extending the assisted placement capabilities of Mezzanine in the future.
The mathematical foundations of the new snapping system are basic, but the machinery offers a number of knobs, letting us tune the balance between freedom and constraint. All of this work helps Mezzanine to understand user intent, remaining an agile tool that can also work with you to optimize your workflow. Mezzanine 3 is already taking advantage of this new flexibility to make moving, sizing, and placing your content even more exhilarating, and we can't wait to push these ideas even further.
Working with Watson
The goal of each Watson Experience Center—located in New York, San Francisco, and Cambridge—is to demystify AI and challenge visitors' expectations through more tangible demonstrations of Watson technology. Visitors are guided through a series of narratives and data interfaces, each grounded in IBM's current capabilities in machine learning and AI. These sit alongside a host of Mezzanine rooms where participants further collaborate to build solutions together.
The process for creating each experience begins with dynamic, collaborative research. Subject matter experts take members of the design and engineering teams through real-world scenarios—disaster response, financial crimes investigation, oil and gas management, product research, world news analysis—where we identify and test applicable data sets. From there, we move our ideas quickly to scale.
Access to the immersive pixel canvas for everyone involved is key to the process. Designers must be able to see their ideas outside the confines of 15″ laptops and prescriptive software. Utilizing tools tuned for rapid iteration at scale, our capable team of designers, data artists, and engineers works side-by-side to envision and define each experience. The result is more than a polished marketing narrative; it's an active interface that allows the exploration of data with accurate demonstrations of Watson's capabilities—one that customers can see themselves in.
Under the Hood
Underlying the digital canvas is a robust spatial operating environment, g‑speak, which allows our team to position real data in a true spatial context. Every data point within the system, and even the UI itself, is defined in real world coordinates (measured in millimeters, not pixels). Gestures, directional pointing, and proximity to screens help us create interfaces that more closely understand user intent and more effectively humanize the UI.
This award-nominated collaboration with IBM is prototyped and developed at scale at Oblong’s headquarters in Los Angeles as well as IBM’s Immersive AI Lab in Austin. While these spaces are typically invite-only, IBM is increasingly open to sharing the content and the unique design ideas that drive its success with the public. This November, during Austin Design Week, IBM will host a tour of their Watson Immersive AI Lab, including live demonstrations of the work and a Q&A session with leaders from the creative team.
Can't make it to Austin? Contact our Solutions team for a glimpse of our vision of the future at our headquarters in the Arts District in Los Angeles.