Many museums today use digital collection systems like Museum+ to catalogue and display their collections online. Curators can link images, videos, audio, and descriptive texts to each artefact, creating rich digital narratives that remain accessible to both researchers and the public.
For this project, I worked on the Atopia Application, a Quest-based platform for Arts and Culture. My role as an XR UI/UX Designer was to turn one of Museum+'s 2D features into an interactive VR experience. I defined the product requirements, created the user flow, and designed the spatial interface that became the Multi-Media Mode (MMM), a feature that lets curators arrange and explore different media types in 3D space to create immersive, story-driven exhibitions.
My work also focused on micro-interactions, exploring how users' muscle memory and real-world familiarity with different media types can shape more natural and intuitive interactions in VR. The design was guided by spatial mapping from daily life, allowing users to rely on their existing sense of time, space, and movement to navigate and interact effortlessly.
One of the first applications of this multimedia feature was the Temple of Dendur exhibition at the Metropolitan Museum of Art (The Met), where curators added rich storytelling layers through different media.






The feature I worked on sits within the advanced interaction layers. It serves users who ask "How does this work?", "What more is there?", or "How can I engage with others around this piece?" These users should be able to access the deeper Multi-Media Mode (MMM) through manual interactions. Before moving into UX design, I defined a set of questions that guided how users perceive, locate, and interact with multimedia elements inside Atopia:
- How does the user find a POI?
- What does the user see when they come close to a POI?
- What does the user see when they hover over a POI?
- What does the user see when they click on a POI?
- How are multimedia elements displayed relative to the user's POV?
- How does the user interact with different media (image, audio, and video)?
- How does the user close or dismiss MMM elements?

In the Multi-Media Mode (MMM), the Point of Interest (POI) system defines how users notice, approach, and interact with hidden layers of media inside the exhibition space. Each POI functions as both a visual marker and an interactive gateway, with the goal of a discovery-driven experience that mirrors how people naturally move through physical galleries. Instead of relying on direct UI prompts, POIs communicate through light, motion, and proximity.
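The notice/approach/hover/click progression described above can be sketched as a small proximity-driven state machine. This is a hypothetical TypeScript sketch, not the Atopia implementation; the state names and distance thresholds are illustrative assumptions.

```typescript
// Hypothetical sketch of a proximity-driven POI state machine.
// Distances and state names are illustrative, not production values.
type PoiState = "idle" | "nearby" | "hovered" | "active";
type Vec3 = [number, number, number];

interface Poi {
  position: Vec3;
  state: PoiState;
}

const NEARBY_RADIUS = 3.0; // metres: POI begins to glow / animate
const HOVER_RADIUS = 1.0;  // metres: POI responds to gaze or pointer

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// Update a POI's visual state from the user's head position each frame.
// Once activated, the POI stays open until explicitly dismissed.
function updatePoi(poi: Poi, userPos: Vec3, clicked: boolean): PoiState {
  if (poi.state === "active") return "active";
  const d = distance(poi.position, userPos);
  if (clicked && d <= HOVER_RADIUS) poi.state = "active";
  else if (d <= HOVER_RADIUS) poi.state = "hovered";
  else if (d <= NEARBY_RADIUS) poi.state = "nearby";
  else poi.state = "idle";
  return poi.state;
}
```

The key design property this models is that the POI communicates through proximity alone: no UI prompt fires until the user has already moved close enough to show intent.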


Once a POI is activated, the Multi-Media Mode (MMM) becomes the main layer of interaction. It lets users explore different media types — images, videos, audio, and 3D objects. It includes two main components:

**Carousel Interface**
A horizontal carousel appears in front of the user, showing thumbnail previews of all media linked to the artwork. Users can browse through the carousel with their controllers, and each thumbnail opens its corresponding media instantly.

**Content Display Zone**
When a media item is selected, it appears directly within the user's line of sight. Images, videos, audio, and 3D models open in floating frames that sit naturally within the user's field of view. It took many rounds of prototyping to refine how these elements behave in the user's space, from proximity and interaction zones to the balance between the content and the main artwork.
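As a rough illustration, the carousel's browsing model can be reduced to a ring of media items that the controller steps through, with wrap-around at either end and instant opening on selection. This TypeScript sketch is my own assumption of the model, not the shipped code; the `Carousel` class and its method names are hypothetical.

```typescript
// Illustrative model of the MMM carousel: a wrap-around ring of media
// thumbnails stepped through with the controller. Names are assumptions.
type MediaKind = "image" | "video" | "audio" | "model3d";

interface MediaItem {
  id: string;
  kind: MediaKind;
}

class Carousel {
  private index = 0;
  constructor(private items: MediaItem[]) {}

  // Controller input moves the selection left/right; the ring wraps.
  step(delta: number): MediaItem {
    const n = this.items.length;
    this.index = ((this.index + delta) % n + n) % n;
    return this.items[this.index];
  }

  // Selecting a thumbnail opens its media immediately (no confirm step).
  open(): MediaItem {
    return this.items[this.index];
  }
}
```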





The interaction model for Multi-Media Mode (MMM) was shaped by combining users’ existing mental models from everyday digital behavior with natural VR gestures.


Designing Multi-Media Mode (MMM) required extensive iteration across different media types — image, video, audio, and 3D — to make them work both independently and simultaneously. I spent a significant amount of time prototyping how these media could coexist within the same user space, balancing proximity, focus, and comfort. The challenge was to let users interact with multiple elements at once without breaking immersion or overwhelming the scene.
Through this process, I learned that small details in interaction rhythm, like how audio fades when a user turns toward a video, or how light adjusts around an active 3D object, define whether a space feels calm or chaotic. Designing for XR isn't just about what users can do, but how those actions flow together.
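One of those rhythm rules, audio ducking as the user's gaze converges on a video, can be approximated with simple vector math. This is a hedged sketch assuming a linear falloff on gaze alignment; the actual curve Atopia uses is not specified here, and the function name is mine.

```typescript
// Hypothetical sketch: ambient audio volume falls as the user's gaze
// aligns with an active video frame. Linear falloff is an assumption.
type V3 = [number, number, number];

function normalize(v: V3): V3 {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

function dot(a: V3, b: V3): number {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Returns ambient volume in [0, 1]: full when looking away from the
// video, near zero when looking straight at it.
function ambientVolume(gazeDir: V3, userPos: V3, videoPos: V3): number {
  const toVideo = normalize([
    videoPos[0] - userPos[0],
    videoPos[1] - userPos[1],
    videoPos[2] - userPos[2],
  ]);
  const alignment = Math.max(0, dot(normalize(gazeDir), toVideo)); // 1 = head-on
  return 1 - alignment;
}
```

In practice a rule like this would also be smoothed over time so the fade feels like a gentle hand-off rather than a hard cut.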
This project deepened my understanding of:
- Interaction orchestration: coordinating multiple simultaneous interactions across modalities.
- Cross-modal consistency: keeping gestures and responses intuitive whether in VR or WebVR.
- Spatial hierarchy: ensuring that every new layer of media still supports the main artwork rather than competes with it.
Ultimately, these iterations helped shape a design language where users move through digital stories as naturally as they would walk through a real exhibition.
These interactions are now live on the Atopia Platform and can be experienced both in VR and WebVR. Visitors can explore the Multi-Media Mode in real time :)
Thank you for reading ❤️







