Hands-On: Magic Leap One First Hands-On Impressions for HoloLens Developers

In a surprising twist of expectation management, Magic Leap managed not only to ship but to deliver the Magic Leap One I ordered on Wednesday by 4 p.m. PT the same day.

After spending about an hour with the headset, running through setup and poking around its UI and a couple of the launch-day apps, I thought it would be helpful to share a quick list of my first impressions as someone who's spent a lot of time with a HoloLens over the past couple of years, and to start answering some of the burning questions I've had about the device.

Meshing

The Magic Leap One takes a different approach to meshing (a.k.a. spatial mapping) than the HoloLens. It maps space in cubic chunks that overlap slightly with one another to fill in any micro gaps. For many scenarios, the results are much cleaner than the HoloLens' triangular mesh, which builds what appears to be a single giant jagged wireframe from all the points it scans.
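
To make that block approach concrete, here's a minimal Python sketch of how this kind of chunked meshing can work: space is tiled with cubes, and each cube is padded slightly so its mesh overlaps its neighbors. This is purely illustrative; the block size, overlap value, and names are my assumptions, not the Lumin SDK's actual API.

```python
# Illustrative block-based meshing, NOT Magic Leap's actual implementation.
# Space is tiled with cubes; each cube is padded by a small overlap so the
# per-block meshes share a thin margin and micro gaps between chunks vanish.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class MeshBlock:
    origin: tuple  # padded min corner (x, y, z), in meters
    size: float    # padded edge length, in meters

def blocks_covering(bounds_min, bounds_max, block_size=1.0, overlap=0.05):
    """Yield slightly overlapping cubic regions tiling an axis-aligned volume."""
    counts = [int((hi - lo) // block_size) + 1
              for lo, hi in zip(bounds_min, bounds_max)]
    for ix, iy, iz in product(*(range(n) for n in counts)):
        corner = (bounds_min[0] + ix * block_size - overlap,
                  bounds_min[1] + iy * block_size - overlap,
                  bounds_min[2] + iz * block_size - overlap)
        yield MeshBlock(origin=corner, size=block_size + 2 * overlap)

# Each block is meshed independently, so a re-scan only regenerates the
# few blocks it touches instead of one giant room-sized wireframe.
for block in blocks_covering((0.0, 0.0, 0.0), (4.0, 2.5, 4.0)):
    pass  # request_mesh(block) would go here in a real pipeline
```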

The Magic Leap One's mesh is very accurate when it comes to anything flat or sharp. It hugs straight edges and corners in a grid of small squares, doing a very good job of detecting large flat surfaces, walls, and even corners. Sharp, protruding 90-degree corners occasionally get a 45-degree chamfered edge, but they still tend to reflect the geometry of sharp straight edges, and especially flat surfaces, more accurately than the HoloLens does with its more jagged mix-and-match triangular mesh. In areas it can't mesh (black non-reflective surfaces, mirrors, windows, etc.), the ML1 does a good job filling in the backs of the holes with an approximate extension of the walls and floors it knows about. And oh yeah... it's super fast. Eyeballing it, I'd say it's 3x to 5x faster than the HoloLens at filling in gaps in the mesh as you scan a room.

When you get to odd shapes like lamps and computer monitors, you end up with the same kind of close-but-not-quite-right angular boundaries as on the HoloLens. The Magic Leap mesh tries to chunk what it sees into shapes within each tiny square block, which leads to fewer sharp and pointy jut-outs than the HoloLens, but occasionally at the expense of closely hugging odd shapes.

While both the HoloLens and the Magic Leap have trouble mapping black surfaces, the Magic Leap did slightly worse in my office. It completely failed to see my black office chair and mini fridge, while the HoloLens at least tries: its chair mesh looks more like a nubby mushroom stub than a chair, but it was able to map my mini fridge after I spent a little time looking at it from a variety of angles. It would appear that the HoloLens' cameras have a slight edge at detecting surfaces that reflect hardly any light.

TL;DR: Better with flat surfaces and edges than the HoloLens, but worse with black non-reflective furniture, and less forgiving in direct sunlight or outdoors.

Tracking & World Position Locking

The HoloLens is known for how well it makes digital objects "stick" to the real world. It does this by tracking your position at high frequency and then up-scaling the 60fps imagery your app renders to 240fps (at one color per frame), adjusting the image ever so slightly four times for any tiny motions your head makes over the duration of each rendered frame. You can shake your head quickly, jump up and down, tilt your head at any angle, whatever... the windows and objects you've put in your room will almost always just stick. It's brilliant.
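
To illustrate what "four adjustments per rendered frame" means, here's a hedged Python sketch of late-stage reprojection: the freshest head pose is sampled just before each color field is displayed, and the already-rendered image is warped by the small rotation between the render pose and the display pose. The real HoloLens pipeline runs in hardware and also corrects for translation; this rotation-only version just captures the core idea.

```python
# Sketch of late-stage reprojection (illustrative, not the HoloLens pipeline).
import numpy as np

def reproject(render_pose, display_pose, pixel_dirs):
    """Warp a rendered frame to a newer head orientation.

    render_pose, display_pose: 3x3 world-from-head rotation matrices
    pixel_dirs: (N, 3) view-space ray directions for the output pixels
    Returns the directions to sample the already-rendered frame with.
    """
    # Rotation taking the newer head frame back to the frame the image
    # was rendered in; small head motions become a cheap image warp.
    correction = render_pose.T @ display_pose
    return pixel_dirs @ correction.T

def show_frame(render_pose, poll_head_pose):
    # One 60fps rendered frame feeds four sequential color fields; each
    # field is re-warped with the freshest pose right before it flashes.
    for _field in range(4):
        dirs = np.array([[0.0, 0.0, -1.0]])  # just the center pixel here
        _ = reproject(render_pose, poll_head_pose(), dirs)

show_frame(np.eye(3), lambda: np.eye(3))  # trivially runnable example
```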

If you've scrutinized the Magic Leap footage online, you'll have noticed that the Magic Leap One has a bit of drift. I can confirm the subtle drift you've seen in those clips is a fairly accurate representation of what you'll see in the device. It's subtle, but it's there. It's enough to notice if you're looking for it almost every time you move about. But once you start to engage with an app, you'll find you stop thinking about it and most everything is stable enough to not be bothersome.

It's much less shaky than the Meta 2, and about on par with ARKit and ARCore. That said, ARKit and ARCore can mask their flaws better because you see both the real and digital worlds at the same fixed frame rate. That's not possible on a true see-through AR display, where the real world "supports" your eyes' maximum "fps" at all times. Clever apps can introduce subtle character animations that float and shift about instead of standing still, so as to keep this from being noticeable. You won't really notice it on floating objects like the jellyfish, UFOs, or goldfish. But you'll notice it a little when moving around things that appear to be attached to a surface. If you really try to push it by, say, shaking your head rapidly from side to side or jumping up and down, the drift is obvious. I don't know how much of this is a frame rate cap thing (I'm still trying to work out whether the display itself is capped at 60fps or just the software), or how much of it can be improved over time with software updates, but I'm hoping at least some of the more noticeable drift during slower head rotations can be improved.
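
That floating-content trick is easy to picture in code: give an object a gentle procedural bob so a few millimeters of tracking drift read as part of the animation rather than as error. A tiny sketch, with arbitrary amplitudes and frequencies of my own choosing:

```python
# Sketch of drift-masking idle motion for "floating" content.
import math

def idle_offset(t, bob=0.02, sway=0.01):
    """Per-frame position offset in meters for a floating object at time t.

    Mismatched frequencies keep the loop from looking mechanical; a
    couple of centimeters of motion comfortably swallows subtle drift.
    """
    return (sway * math.sin(0.7 * t),
            bob * math.sin(1.3 * t),
            sway * math.sin(0.5 * t + 1.0))

print(idle_offset(2.0))  # e.g. apply to a jellyfish's anchor each frame
```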

TL;DR: Passable. On par with ARKit and ARCore. But not as solid as HoloLens. I hope this can be improved in a software update.

Setup

When you first turn it on, you'll hear the startup chime but won't see anything until it finishes booting. The integrated eye trackers also mean it can automatically measure your interpupillary distance (IPD) by having you focus on a series of points around the range of your vision (including at different depths).
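
For the curious, here's one plausible way a fixation-based IPD measurement could work, sketched in Python: each eye's center is triangulated as the point nearest all of its gaze rays through the known targets, and the IPD is the distance between the two centers. This is a guess at the approach, not Magic Leap's actual algorithm.

```python
# Hypothetical IPD estimation from eye-tracked fixation targets.
import numpy as np

def eye_center(targets, gaze_dirs):
    """Least-squares point nearest a bundle of gaze rays.

    targets: (N, 3) fixation target positions in headset space
    gaze_dirs: (N, 3) gaze directions measured while fixating each target
    Targets must span different directions (and ideally depths), or the
    linear system below is singular.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(np.asarray(targets), np.asarray(gaze_dirs)):
        d = d / np.linalg.norm(d)
        reject = np.eye(3) - np.outer(d, d)  # removes the along-ray component
        A += reject
        b += reject @ p
    return np.linalg.solve(A, b)

def estimate_ipd(targets, left_dirs, right_dirs):
    """Distance in meters between the two triangulated eye centers."""
    return np.linalg.norm(eye_center(targets, left_dirs)
                          - eye_center(targets, right_dirs))
```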

Optics

If you've been following the news, you've heard a variety of folks from Magic Leap say that a camera can't accurately capture what you see when you look through the device with your own eyes. This is true. I tried to snap some pics and videos with my camera through the lens, and the holograms always come out fuzzy and blown out with a glowing halo effect. When looking through it with your own eyes, the resolution is high enough to be crisp without any screen-door effect, almost exactly like the HoloLens. You don't see individual pixels, though distant, hairline-thin details may show the subtle anti-aliasing flicker around their curves that one sees in practically any 3D application on any platform. It also has far less of a neon rainbow effect across its waveguides than the HoloLens, even when filling the field of view (FoV) with large, flat-white web pages (though, to be fair, I have an early Wave 1 HoloLens, so the rainbow effect in mine might be more noticeable than in others).

The FoV is clearly bigger than the HoloLens'. It's not the full peripheral vision some have been hoping for, but it's a welcome step in the right direction that makes a noticeable difference. If you've seen the wildly inaccurate FoV comparison image (frames superimposed over a cat in a living room) that's been making the rounds online since we broke the news of Magic Leap's FoV, you're going to find this beats your low expectations. If you want complete coverage of your peripheral vision before you pull the trigger, you've likely got a few years to wait. It's a shame this was overhyped, as it is genuinely better than the HoloLens. But since everyone's expectations were so high, many are finding it disappointing.

TL;DR: Either matches or is better than the HoloLens in every way.

Depth of Field

While it's not something that immediately pops out at you, I wanted to test whether the headset does indeed render at multiple focal depths. When the device first starts up, you're presented with floating islands, a spaceman jumping between them, and hot air balloons off in the distance. I walked up close to one of the islands, closed one eye, and focused on a tree with a balloon off in the distance behind it. The balloon did seem to go slightly blurry, and it did not feel like it blended with the foreground that was clipping it.

Then I focused on the balloon and got the sensation that I was looking past the tree. I'll need to test this with a telephoto camera at some point to confirm it isn't just my mind processing these as two separate distances. But it did feel like they weren't all rendered at the same depth.
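
The ML1 is understood to have two discrete focal planes rather than a continuous range, which would fit what I saw. Here's a trivial sketch of how content might be routed to one plane or the other by distance; the switch distance below is a made-up value, not a documented one.

```python
# Toy focal-plane routing; the 1.0 m switch point is an assumption.
def pick_focal_plane(distance_m, switch_at_m=1.0):
    """Route content to the near or far focal plane based on its depth."""
    return "near" if distance_m < switch_at_m else "far"

# A close-up tree and a distant balloon would land on different planes,
# which matches the slight blur and "looking past" sensation described.
assert pick_focal_plane(0.5) == "near"
assert pick_focal_plane(30.0) == "far"
```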

TL;DR: Need to test more.

Eye Tracking

I'm excited to dive into eye tracking and multimodal inputs. The only thing I've noticed it used for so far is the automatic IPD adjustment, but I've only spent about an hour with the device. I did notice that dropping the gaze-based center cursor dot the HoloLens uses for input, in favor of the controller's mouse-like trackpad input, makes interacting with web browser windows far more intuitive, and it feels much faster than on the HoloLens, where air-tap-pinch-and-drag gestures tend to have just enough latency to feel sluggish. It's still tracking your gaze. Look at one browser window and your cursor will appear there, instantly ready to move about. Shift your head to look at another and your cursor is there, too. It just works without you thinking about it.
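
Conceptually, the interaction model is simple enough to sketch: gaze decides which window owns the cursor, and trackpad deltas move the cursor within that window only. Everything below (the window layout, field names, and functions) is invented for illustration, not Lumin OS code.

```python
# Toy model of gaze-driven window focus plus trackpad cursor movement.
from dataclasses import dataclass

@dataclass
class Window:
    name: str
    center: tuple        # (x, y) placement on a wall, in meters
    half_size: float     # half the window's edge length
    cursor: tuple = (0.0, 0.0)

def window_under_gaze(windows, gaze_hit):
    """Return the window whose bounds contain the 2D gaze hit point."""
    for w in windows:
        if (abs(gaze_hit[0] - w.center[0]) <= w.half_size
                and abs(gaze_hit[1] - w.center[1]) <= w.half_size):
            return w
    return None

def on_trackpad_move(focused, dx, dy):
    """Trackpad deltas only ever move the focused window's cursor, so
    shifting your gaze instantly 'teleports' input to the new window."""
    if focused is not None:
        focused.cursor = (focused.cursor[0] + dx, focused.cursor[1] + dy)

windows = [Window("browser", (0.0, 1.5), 0.5),
           Window("gallery", (1.5, 1.5), 0.5)]
on_trackpad_move(window_under_gaze(windows, (1.4, 1.6)), 0.01, 0.0)
print(windows[1].cursor)  # (0.01, 0.0): the gallery received the input
```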

I also tried looking from one window to the next with only my eyes, but wasn't able to get the cursor to switch windows without slightly moving my head, too. I'm not sure if this is part of the multimodal input approach, or if Lumin OS simply isn't using eye tracking for window focus, but I'll dig deeper later.

TL;DR: An exciting new feature all mixed reality headsets need.

The Controller

The controller is very responsive, and the digital selection beam emanating from it, curving up from its bottom, feels effortlessly tethered. In Lumin OS, app selection is based on your gaze, and the controller is then used to change selection within the apps. I ran into a little trouble with confusing selections when I had the main menu open in front of another app, but other than that, you really don't have to think about it.

You do have to keep the controller somewhat in front of you if you don't want it to lag slightly while it switches hemispheres. I kind of wish the hemispheres tilted down at an angle to avoid this, but it's not that big of a deal. I found it less tiring and more precise than air tapping.

TL;DR: The added precision is a godsend. The latency when moving between hemispheres is a little bothersome.

Gesture Control

The Lumin OS shell does not appear to let you use it without the controller. There is no gaze-and-air-tap functionality enabled, at least none that I could find. I found this surprising, as the gesture support and hand-point tracking detailed in Magic Leap's documentation are extensive, showing a much more flexible range of controller-free input options than the HoloLens' "ready," "air-tap," and "air-tap-and-drag." Support for these gestures appears to be app specific, and at least on day one, the controller appears to be required when interacting with Lumin OS and its Prisms. In fact, after initially booting up, you're prompted to pull the controller's trigger before the Lumin OS shell will even launch.

I may be wrong given my limited use so far, but I'd like to see a standard minimum set of gestures used across all apps and environments to give the option of controller-free use. For example, by default, the "C"-to-"OK" gesture (essentially the same as the HoloLens' air-tap) could perform the primary selection task the controller's trigger usually does. This would be especially critical in environments where controllers aren't really an option, like operating rooms or factory floors. While apps can be built to support rich gesture interaction, if you can't launch those apps without the controller, you still have to fumble about for it when starting up.
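
To make the suggestion concrete, here's a hypothetical sketch of that kind of OS-level fallback: a table mapping recognized hand poses to the same events the controller already emits. The "C" and "OK" poses come from Magic Leap's documentation; the other bindings, the event names, and the dispatch code are invented for illustration.

```python
# Hypothetical controller-free fallback bindings for a Lumin-like shell.
SELECT, OPEN_MENU, GO_BACK = "select", "open_menu", "go_back"

DEFAULT_BINDINGS = {
    ("C", "OK"): SELECT,             # closing a "C" into an "OK" = trigger pull
    ("open_hand_back",): OPEN_MENU,  # assumed binding, not documented
    ("fist",): GO_BACK,              # assumed binding, not documented
}

def dispatch(pose_history, bindings=DEFAULT_BINDINGS):
    """Emit a controller-equivalent event when the tail of the recognized
    pose sequence matches a bound gesture; longest match wins."""
    for gesture, event in sorted(bindings.items(), key=lambda kv: -len(kv[0])):
        if tuple(pose_history[-len(gesture):]) == gesture:
            return event
    return None

print(dispatch(["open_hand_back", "C", "OK"]))  # -> "select"
```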

That's all I have for now. We'll have more updates for you in the coming days and weeks so check back often or give us a follow on Twitter.

Cover image by Bryan Crow/Next Reality

2 Comments

Generally good information, particularly the part about the rainbow color across a white field. This is inevitable with the diffractive optics that both Magic Leap and HoloLens use. Hopefully, you have heard of my blog on displays and optics (kguttag.com).

You can take a picture with a camera but you have to get the camera into the correct position (roughly where the human pupil will be -- allowing for the difference in optics of the camera's complex lens). It is tricky and requires having a small camera to get it to fit. I use an Olympus Micro 4/3rds mirrorless interchangeable lens camera, turned 90 degrees ("portrait mode") to get the lens in the "right position" with some headsets (where the temples of the headset will block turning the camera to take a landscape picture). You can't get a full-size SLR in the correct position normally.

Additionally, by taking video and using the "beat frequency" you can learn a lot more about how the image is created. You can figure out the field sequential color rate and other information. If you are interested, I would be happy to explain.

Can you say if you are prohibited from posting photos? It should be noted that the camera is "objective" whereas the eye is "subjective" so it is true the image will not be exactly what the eye sees, but you can get reasonably close.

Hi Karl, yes, I'm always interested in more detailed explanations. I don't think I have the right equipment to attempt what you're describing, but I'm sure others would be interested to learn and try what you're suggesting.

We can and will be posting photos in the coming days and weeks.
