Here's a genuinely impressive piece of tech from Meta Reality Labs and Stanford: a new holographic display small enough to fit into a pair of glasses.
The work appears in a paper in Nature Photonics, where Stanford's Gordon Wetzstein and a team from Meta describe the prototype. It combines ultra-thin waveguide holography with AI-driven algorithms to produce strikingly lifelike 3D visuals.
One caveat about the optics: they aren't transparent like those on HoloLens 2, which is why the team calls this a mixed reality display rather than an augmented reality one.
The display stack itself is only 3 millimeters thick. It pairs a custom waveguide with a spatial light modulator (SLM), a component that shapes light pixel by pixel to deliver what the paper calls "full-resolution holographic light field rendering."
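To make that "pixel by pixel" idea concrete, here's a minimal sketch of how a phase pattern for an SLM can be computed with the classic Gerchberg-Saxton algorithm. This is generic computer-generated holography, not the AI-driven method from the paper, and every name and parameter below is an illustrative assumption.

```python
# Minimal sketch: compute a phase-only SLM pattern whose far-field
# diffraction pattern approximates a target image, using the classic
# Gerchberg-Saxton algorithm. Generic illustration only -- NOT the
# actual algorithm from the Nature Photonics paper.
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Iteratively find an SLM phase pattern whose far field
    approximates the target amplitude."""
    phase = np.random.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Propagate a unit-amplitude field with the current phase
        # to the far field (modeled here by a Fourier transform).
        far_field = np.fft.fft2(np.exp(1j * phase))
        # Keep the far-field phase, impose the target amplitude.
        far_field = target_amplitude * np.exp(1j * np.angle(far_field))
        # Propagate back and keep only the phase (the SLM is phase-only).
        phase = np.angle(np.fft.ifft2(far_field))
    return phase

# Example target: a simple bright square (purely illustrative).
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
slm_phase = gerchberg_saxton(target)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * slm_phase))) ** 2
```

The point of the sketch is just that each SLM pixel contributes a controllable phase delay, and the whole pattern, taken together, reconstructs an image out of diffracted light.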
(A photo of the prototype appears in the article; image credit: Nature Photonics.)
Unlike other XR headsets, which show a flat 2D image to each eye, this display creates true holograms: you're seeing reconstructed light, not a picture of it. Wetzstein is enthusiastic about how unique holography is in that respect, and it's easy to see why: a much smaller package, yet a richer visual experience.
The team is also aiming for imagery that stays sharp no matter where your eyes move, so you don't lose focus every time you glance around. Wetzstein calls that crucial.
So why wasn't this possible earlier? The team points to the limited space-bandwidth product of existing holographic displays: roughly, a fixed budget of optical information that forces a tradeoff between the field of view and the eyebox, the region your pupil can move through while still seeing the image. Their design pushes that budget toward a wider field of view while accommodating a range of eye positions and face shapes. Kind of inclusive, if you think about it.
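For a rough feel of what that budget means, here's a back-of-envelope sketch: the SLM's pixel pitch caps the diffraction angle (and hence the field of view), while the pixel count caps how much aperture, and therefore eyebox, you get on top of that. All numbers below are assumptions for illustration, not the prototype's specs.

```python
# Back-of-envelope look at the space-bandwidth product limit.
# Illustrative numbers only, not the specs of the actual prototype.
import math

wavelength = 532e-9   # green light, meters (assumed)
pitch      = 3.6e-6   # SLM pixel pitch, meters (typical for current SLMs)
n_pixels   = 4000     # pixels along one axis (assumed)

# Pixel pitch sets the maximum diffraction half-angle.
theta_max = math.asin(wavelength / (2 * pitch))
fov_deg = math.degrees(2 * theta_max)

# Pixel count times pitch sets the aperture available for the eyebox.
aperture_mm = n_pixels * pitch * 1e3

print(f"diffraction-limited FOV: {fov_deg:.1f} deg")      # ~8.5 deg
print(f"aperture / eyebox budget: {aperture_mm:.1f} mm")  # ~14.4 mm
# Field of view and eyebox trade off against each other: growing one
# shrinks the other unless you add pixels, i.e., raise the
# space-bandwidth product.
```

With numbers like these you can see why "limited space-bandwidth product" is the bottleneck: today's SLMs simply don't carry enough optical information to deliver a wide view and a generous eyebox at the same time.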
This is part two of a three-stage effort. Last year the team demonstrated the waveguide; now there's a working prototype. The final chapter, a product you can actually buy, is still some years off, though Wetzstein is hopeful. Sounds like a cliffhanger to me.
The ultimate goal is to pass what the researchers call a "Visual Turing Test": imagery so convincing that you can't tell a digital object from a real one. Suyeon Choi, the paper's lead author, frames the work in exactly those terms.
Separately, Meta's Reality Labs is also working on VR and MR headsets with a wide field of view, but those use entirely different optics.
And there you have it. Now, if only I could find my glasses...