How Treeview Implements Spatial User Interfaces for Apple Vision Pro

Case Study: Treeview

Treeview faced front-end implementation challenges while working on projects for the Apple Vision Pro and Meta Quest 3. They adopted ShapesXR to streamline their workflow, reducing implementation cycles from 5-10 to just 2 iterations. ShapesXR bridged the gap between 2D design representations and actual spatial interfaces, improving precision and collaboration.

Background and Objectives

Treeview is a high-end boutique development studio that builds AR/VR products and solutions for enterprise clients. The team has worked in the AR/VR industry for over eight years and is currently ranked the #1 AR/VR development team in LATAM.

Treeview recently began developing software for the Apple Vision Pro and encountered unique challenges in the platform's front-end implementation process.

When developing Inviewer, a Spatial STEM Simulation library, the team set out to explore an improved workflow to minimize iterations and reduce time in the front-end implementation of spatial user interfaces for the latest AR/VR technologies, including the Apple Vision Pro and Meta Quest 3.

"ShapesXR has enabled our team to drastically reduce the time for front-end implementations of Spatial User Interfaces for the latest generation of XR devices, including the Apple Vision Pro and Meta Quest 3." - Horacio Torrendell, Founder & CEO of Treeview

Decision Process

We struggled with the number of iterations needed to get the sizing, positions, and rotations of the UI to feel correct on the Apple Vision Pro, largely due to the slow Unity→Xcode→device build cycle. We used ShapesXR so the design team could define the exact spatial layout, which then served as the source of truth for the engineering team's Unity implementation.

We use Figma as our UI design tool, but there is a large gap in communication between a 2D representation of a spatial interface and the actual spatial interface. We discovered that ShapesXR bridges this communication gap.

Implementation and Use

We used ShapesXR to design the exact spatial layout we wanted for the Mixed Reality app. We executed design reviews with all stakeholders in ShapesXR to reach agreement on the final design. Once agreed upon, this design was used as the source of truth for the front-end implementation on the Apple Vision Pro and Meta Quest 3.


Results and Feedback

ShapesXR allowed us to drastically reduce implementation time by cutting the number of front-end implementation cycles. Before ShapesXR, a front-end implementation could take around 5-10 iterations, with the design team collaborating closely with development to nail a high-quality interface. With ShapesXR, we've reduced that to around two iterations: a first implementation that exactly matches the ShapesXR source of truth, and a final polish round that completes the front-end implementation.

Previously, we used Figma as our source of truth for the user interface, which leaves significant gaps in information about the sizing, distance, and rotation of objects relative to the user's starting position. With ShapesXR, our team can define a source of truth that aligns exactly with the expected outcome of the front-end implementation.
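To make the "relative to the user's starting position" point concrete, here is a minimal, hypothetical sketch of the kind of placement math a spatial source of truth removes ambiguity about. The function name, the pose format, and the coordinate convention (y-up, forward along +z, yaw around the vertical axis) are all illustrative assumptions, not part of ShapesXR's or Treeview's actual pipeline.

```python
import math

def panel_world_pose(user_pos, user_yaw_deg, panel_offset, panel_yaw_deg):
    """Convert a UI panel pose defined relative to the user's starting
    position and heading into world-space coordinates.

    user_pos      -- (x, y, z) of the user's head at app start, in metres
    user_yaw_deg  -- user's heading around the vertical (y) axis, in degrees
    panel_offset  -- (right, up, forward) offset of the panel, in metres
    panel_yaw_deg -- panel rotation relative to the user's heading

    Assumed convention (illustrative only): y is up, forward is +z,
    and yaw rotates the (right, forward) offset around the y axis.
    """
    yaw = math.radians(user_yaw_deg)
    right, up, fwd = panel_offset
    ux, uy, uz = user_pos
    # Rotate the horizontal offset by the user's heading, keep height as-is.
    wx = ux + right * math.cos(yaw) + fwd * math.sin(yaw)
    wy = uy + up
    wz = uz - right * math.sin(yaw) + fwd * math.cos(yaw)
    return (wx, wy, wz), user_yaw_deg + panel_yaw_deg

# A panel 1.2 m in front of the user and 1.4 m off the floor:
pose, yaw = panel_world_pose((0.0, 0.0, 0.0), 0.0, (0.0, 1.4, 1.2), 0.0)
```

A flat 2D mock-up specifies none of these values, which is why each one otherwise becomes a guess to be tuned over repeated build cycles.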

Overall Experience

We started with a self-onboarding process for ShapesXR, through which we discovered the power of this tool. We then had a team onboarding session with one of the ShapesXR team members, which was invaluable for providing an in-depth presentation and offering our team a chance to ask questions and explore how we could fully leverage the tool.

One feature we found particularly useful was the layer feature, which enables us to simulate UI logic flow and even define UI animations. This is advantageous because it allows us to add more layers of detail during the design phase, ensuring all details are defined before starting front-end implementation.

Future Plans

Our plan is to integrate ShapesXR into the design process of the AR/VR software we build at Treeview. As a development studio working across many clients and projects, we run various design processes for new AR/VR applications. We intend to standardize the workflow as Figma to ShapesXR, then to Unity, and finally to devices such as the Apple Vision Pro, Meta Quest, Magic Leap, and Microsoft HoloLens.
We feel that ShapesXR is becoming the primary design tool for XR development and should be used by all XR development teams. For us, the key benefit is that it lets the design team specify a far wider range of design details that are specific to spatial user interfaces, leaving fewer design decisions for the engineering team to fill in during front-end implementation.