MWC always has a way of making technology feel larger than life. Halls filled with screens, concepts that stretch the imagination, and enough buzzwords to last a year. This year, AI was omnipresent, shaping everything from spatial computing to generative UI. But beyond the AI-heavy noise, XR continued to push forward, with new prototypes and projects emerging in unexpected ways.
One shift stood out: more and more XR prototypes are considering design from the very beginning. It feels like we’re finally stepping through the door where the industry recognizes that UX is where adoption happens in XR. Raw specs and technical breakthroughs are no longer enough; what matters is how these technologies fit into everyday interactions. This year, rather than just evaluating hardware specs, I focused on how naturally these devices integrate with user behavior. Are we designing experiences that fit into how people already move and interact, or are we still forcing users to adapt to technology?
Below, I’ll break down each XR product I tested using an XR Heuristic Framework, assessing spatial integration, interaction directness, guidance, comfort, presence, feedback, and accessibility. Each product gets a concise review, highlighting what works, where friction exists, and how the design could push further.
Biel Smart Glasses
Biel Smart Glasses is a new project designed to assist people with low vision by leveraging computer vision and augmented reality (AR). It overlays real-time visual enhancements to help users detect obstacles, adjust contrast, and improve depth perception.
Their prototype aligns with the way we design for XR and how XR headsets render scenes, particularly through foveated rendering techniques used in VR and AR headsets. Human vision relies on central and peripheral processing. Our central vision (foveal vision) is sharp and detailed, allowing us to read, recognize faces, and focus on fine details. In contrast, peripheral vision becomes blurrier the farther it extends but is highly sensitive to motion and spatial awareness.
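To make the foveated-rendering idea concrete, here is a minimal sketch of how a renderer might taper detail with angular distance from the gaze point, mirroring the sharp-center, blurry-periphery structure of human vision. The function name, radius, and falloff constant are illustrative assumptions, not values from any specific headset.

```python
import math

def shading_rate(eccentricity_deg, foveal_radius_deg=5.0, falloff=0.06):
    """Relative render detail (1.0 = full resolution) as a function
    of angular distance from the gaze point.

    Inside the foveal region we render at full detail; beyond it,
    detail falls off smoothly, echoing human visual acuity.
    Constants are illustrative, not from a real headset profile.
    """
    if eccentricity_deg <= foveal_radius_deg:
        return 1.0
    # Exponential falloff outside the fovea, clamped to a coarse floor.
    return max(0.1, math.exp(-falloff * (eccentricity_deg - foveal_radius_deg)))

print(shading_rate(0.0))             # gaze center: full detail -> 1.0
print(round(shading_rate(30.0), 2))  # far periphery: coarse detail -> 0.22
```

The practical point for assistive AR is the inverse of this curve: enhancements placed in the periphery should rely on motion and high contrast rather than fine detail, because that is all the visual system (and the renderer) preserves there.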

The AR overlay appears over any detected obstacle, including objects and people, displaying an orange-colored visual cue with a rectangle and an arrow on top. While this provides basic awareness, the contrast remains blurry, requiring intentional focus to interpret the information. This could be improved by enhancing contrast sharpness dynamically based on lighting conditions or user preferences. Additionally, the digital overlays appear unnaturally “floating” in space, feeling more like an artificial digital layer rather than a seamlessly integrated spatial cue. A potential refinement would be adaptive occlusion and depth-based rendering, making the overlays feel anchored to real-world objects rather than just overlaid on top.
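The adaptive-contrast suggestion above can be sketched as a simple policy that picks overlay opacity from ambient light plus a user preference. The thresholds and values here are hypothetical assumptions for illustration, not Biel's actual implementation.

```python
def overlay_alpha(ambient_lux, user_boost=1.0):
    """Pick an overlay opacity level from ambient light.

    Bright scenes need a stronger overlay to stay legible; dim scenes
    need a softer one to avoid glare. Thresholds and base values are
    illustrative, not taken from the Biel product.
    """
    if ambient_lux < 50:        # dim indoor
        base = 0.55
    elif ambient_lux < 1000:    # typical indoor lighting
        base = 0.7
    else:                       # outdoor daylight
        base = 0.9
    # A user-preference multiplier, clamped to fully opaque.
    return min(1.0, base * user_boost)

print(overlay_alpha(10))         # dim room -> 0.55
print(overlay_alpha(2000, 1.5))  # bright outdoors, boosted -> 1.0
```

A real system would sample the headset's ambient light sensor continuously and smooth the transitions, but even this crude mapping shows how legibility could adapt instead of staying fixed.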
The interaction is fully passive, engaging vision and hearing, which aligns with the needs of low-vision users. Audio alerts indicate obstacles, but there is no differentiation in cues for object type or movement. Adding a subtle secondary visual indicator or adjustable audio tones could improve clarity without adding cognitive load for users.
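One way the differentiated audio cues could work is to let obstacle type pick the pitch and proximity pick the repetition rate, so a user can tell a person from a static object by ear alone. This is a hypothetical scheme of my own, not Biel's design; the names and values are assumptions.

```python
def obstacle_tone(kind, distance_m):
    """Map obstacle type and distance to an audio cue (pitch, beep rate).

    Hypothetical scheme: object type selects pitch, proximity selects
    repetition rate, so cues stay distinguishable without adding a
    second sensory channel. Not the actual Biel implementation.
    """
    # Distinct pitches per category; unknown kinds fall back to 440 Hz.
    pitch_hz = {"person": 880, "moving": 660, "object": 440}.get(kind, 440)
    # Closer obstacles beep faster, clamped between 1 and 8 beeps/second.
    rate_hz = max(1.0, min(8.0, 8.0 / max(distance_m, 0.5)))
    return pitch_hz, rate_hz

print(obstacle_tone("person", 1.0))   # nearby person -> (880, 8.0)
print(obstacle_tone("object", 10.0))  # distant object -> (440, 1.0)
```

Keeping the mapping to two dimensions (pitch and rate) matters: low-vision users already carry a high auditory load, so each added cue dimension should encode exactly one piece of information.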
Currently, the system dictates movement rather than assisting it, making users feel reactive instead of in control. The overlay representation could be refined to guide movement rather than simply alerting users to obstacles.
From my point of view, the product offers a strong foundation for assistive XR, effectively combining AR overlays and audio alerts to enhance obstacle awareness for low-vision users. The hands-free, passive interaction aligns well with accessibility needs, making navigation support seamless. However, the floating overlays feel disconnected, and the lack of customization limits control. Refining spatial anchoring, introducing adaptive contrast settings, and offering subtle user adjustments could enhance clarity and control. Moving forward, shifting from passive detection to active guidance would strengthen user confidence and long-term usability.
Muse Scene Lab
Muse Scene Lab is a training tool for conducting a virtual orchestra in real time using precise hand gestures. Designed for music education, the app helps users practice tempo, dynamics, and articulations in a responsive concert hall environment. The system uses hand tracking to interpret conducting gestures, giving users control over the orchestra’s responsiveness. Up until now, its primary use case has been education. MuseSceneLab is available on Meta Quest.

The orchestra layout matches real-world stage positioning, and the depth perception is strong, making it feel like I’m truly leading a virtual orchestra. Instruments are well-anchored in space in front of me, adding to the realism. One improvement I would personally suggest is the library UI, which feels basic and purely functional; since music itself is an art form, a more visually refined UI could enhance the experience.
The hand-tracking calibration works, but as someone without a music background, I wasn’t fully sure what I was expected to do beyond setup. There’s no real-time feedback on conducting accuracy or comparison to traditional interpretations, which could help guide creative decision-making. Onboarding guidance beyond calibration would reinforce the user’s sense of control over tempo and dynamics, and could also attract users without a musical background; I found this kind of guidance largely absent here.
The experience is optimized for both seated and standing use, but standing feels more immersive. However, I didn’t see options for adjustable session lengths or rest intervals, which could be useful for longer rehearsals. A dedicated UI placement for the conductor’s score could also provide more feedback for creativity.
Muse Scene Lab successfully creates a strong sense of spatial presence, making the user feel like they are leading a real orchestra. However, in my opinion, a UI design that better presents music as an art form, clearer onboarding, and real-time feedback on conducting accuracy would improve the experience, helping those without a musical background start enjoying the application and making it a more valuable training tool for both students and professionals.
Expacia Puzzle
Expacia is one of the products that I find closest to being truly design-driven, which is crucial in today’s experience economy. As a software company, they deliver various VR and AR experiences, all built with gamification as a core element. What stands out is their commitment to thematic consistency—whether it’s a 90s video game-inspired world or an immersive cultural experience, every detail, from UI and fonts to user interactions, is intentionally designed to reinforce the theme. Their approach goes beyond just visuals, integrating interaction design and storytelling to create cohesive, immersive experiences.
The placement of UI elements and objects feels well-positioned and comfortable, creating a natural and well-proportioned virtual space. The way puzzles are introduced works well but could be woven more smoothly into the environment. The interaction system is gesture-based, relying on one-handed interactions. The puzzle mechanic, rotating pieces to fit, is simple, but the lack of initial onboarding or clear guidance can leave users confused; I consider myself an experienced user, yet without external guidance I also felt a bit lost.
There is little distinction between interactive and decorative elements, making it hard to tell what can be engaged with. Using contrast and highlights as cues to differentiate interactive elements from decorative ones would help.
The other AR location-based game I tried at their stand was also designed with strong design principles in my opinion. Ultimately, experience is what keeps users coming back, not just initial acquisition. A well-crafted design that feels intuitive, engaging, and immersive plays a key role in retaining users long after their first interaction.
I'm grateful to have experienced each of these products with their founders and teams guiding me through the details. As XR continues to grow, design-first thinking will shape its success—not just in attracting users, but in making them return. The future of XR isn’t just about technology; it’s about well-crafted experiences that make people feel human again. Thanks for reading—hope this serves as design inspiration for everyone!







