
        What is a 3DoF vs 6DoF in AR Glasses?

Written by: wyx

Augmented Reality (AR) glasses have been receiving quite a lot of attention in the past few months. Championed in the recent past by developers, tech enthusiasts, and forward-thinkers, new groundbreaking developments in more consumer-friendly versions of the technology have been hitting the mainstream. However, people new to this emerging technology may find themselves lost in technical jargon like "Degrees of Freedom," or DoF. In this article, we will explore not only what DoF means, but also how it impacts AR glasses and why it matters!

What are Degrees of Freedom (DoF)?

        Before we jump into the nitty-gritty, it's important to understand what DoF means and what it stands for. In the simplest terms, DoF refers to the number of independent movements a device can track. It's the bread and butter of spatial awareness for any AR device. Without it, your AR experience would be akin to viewing a PowerPoint presentation – flat, unengaging, and quite frankly, a snooze fest.

        Two Types of DoF

        Now, let's break down DoF into its two primary components:

Rotational DoF: This is an object's ability to rotate around an axis. Picture yourself in a swivel chair – you can spin left and right (yaw), lean forward and back (pitch), and tilt from side to side (roll). AR glasses with rotational DoF track these head rotations, offering a more immersive experience.

Translational DoF: Translational DoF refers to movement in straight lines along the X, Y, and Z axes. Imagine walking through a room as if you were a ghost, gliding straight through walls and furniture. Translational DoF enables the AR glasses to track these linear movements, adding depth and realism to virtual interactions.
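The two kinds of DoF above can be captured in a small data structure. The following is a hypothetical sketch (the `Pose` class and its field conventions are illustrative, not any real SDK's API): a 3DoF device only ever fills in the three rotational fields, while a 6DoF device tracks all six.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Rotational DoF (degrees) – what a 3DoF device tracks
    roll: float = 0.0   # tilting the head side to side
    pitch: float = 0.0  # nodding up and down
    yaw: float = 0.0    # turning left and right
    # Translational DoF (meters) – the extra three a 6DoF device adds
    x: float = 0.0      # stepping left/right
    y: float = 0.0      # standing up / crouching
    z: float = 0.0      # walking forward/backward

# A 3DoF headset only updates rotation; translation stays at zero.
head_3dof = Pose(yaw=30.0, pitch=-10.0)
# A 6DoF headset also tracks where you are in the room.
head_6dof = Pose(yaw=30.0, pitch=-10.0, z=0.5)
```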


        Understanding 3DoF/6DoF through Film

If you've seen Ready Player One, you'll probably recall all the cool virtual screens used by the characters, popping up throughout the movie. Here's a simple, visual explanation using clips from the film:

        What can you expect from AR glasses with different DoF?

        0DoF Screen: Follows Your Head

Let's begin with the most fundamental form, 0DoF. It's akin to attaching a display screen directly to the glasses: the screen simply follows every head movement, with no spatial tracking at all.

3DoF Screen: Follows Your Body, But Stays Still

With 3DoF, the screen is fixed around the user at a certain position, centered on the person. As the user moves forward, backward, or side to side, the screen follows them – but when they move their head, the screen stays still.

Adapting to the user is crucial: ideally, the most comfortable range for eye movement in daily use is around ±15°. When viewing content on a large screen, users will unconsciously tilt their head and eyes to focus better, or shift their position to be more comfortable. With AR glasses, 3DoF capabilities can enhance this experience, providing more comfort by minimizing the need for extensive eye movements. For example, imagine you're gaming with your AR glasses. Your eyes rapidly move around the screen as you check your HUD, your health bar, the mini-map, and so on. Without 3DoF, you might turn your head to check the mini-map only to find that the screen drifts along with you, making it difficult to see all the important details.

        6DoF Screen: Fixed in Space

Lastly, we have 6DoF, the gold standard of AR technology. With the screen or displayed content fixed in a specific position in space, you're no longer just a passive observer – you're part of the action. It's like interacting with objects in the real world, offering an unparalleled sense of presence and immersion.
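The difference between the three modes boils down to which parts of the head pose affect where the screen is drawn. Here is a minimal, hypothetical 2D sketch (yaw-only; the function name and coordinate conventions are illustrative, not any vendor's actual rendering code):

```python
import math

def screen_in_view(mode, head_yaw_deg, head_pos, screen_dist=2.0):
    """Where the virtual screen appears relative to the wearer's view.
    2D, yaw-only sketch: x is sideways, z is forward. The screen's
    world-frame anchor sits straight ahead of the origin at screen_dist."""
    if mode == "0dof":
        # Screen is glued to the glasses: always dead ahead.
        return (0.0, screen_dist)
    if mode == "3dof":
        # Screen keeps a fixed bearing around the user: it follows body
        # translation, but head rotation leaves it in place.
        wx, wz = 0.0, screen_dist
    else:  # "6dof"
        # Screen is pinned to a world position, so walking matters too.
        wx, wz = 0.0 - head_pos[0], screen_dist - head_pos[1]
    # Rotate the world-frame offset into the head frame (inverse yaw).
    a = math.radians(-head_yaw_deg)
    return (math.cos(a) * wx - math.sin(a) * wz,
            math.sin(a) * wx + math.cos(a) * wz)

# 0DoF: turning the head changes nothing – the screen follows you.
# 3DoF: turn 90° and the screen slides to the edge of your view.
# 6DoF: walk 1 m forward and the screen also gets 1 m closer.
```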

Augmented Reality (AR) can only reach its full potential with controlled and stable display management (3DoF/6DoF); otherwise, it remains a head-mounted display. In fact, AR glasses lacking Degrees of Freedom (DoF) capability can't truly be considered AR glasses.

        How Do AR Glasses Achieve DoF?

        So what kind of technology is used to achieve DoF in AR glasses? 

Essentially, it's based on computing the rendered screen position using information from the inertial sensors (IMU) in the glasses and displaying the result in the glasses. A computing unit is required to complete this task. Currently, the industry has three solutions to make it happen: installing software, developing host devices, and developing chips.

        Software Installation

        In the industry, to reduce power consumption and the size of the glasses, the most common approach is to place the display unit in the glasses while externalizing the computing unit. Simply put, software is installed on various devices, or "hosts," relying on the computing power of the host to process image calculations.
        Principle: By transmitting the IMU data (head movement and posture data) collected from the glasses back to various terminals, the terminals compute and output a corresponding motion-rendered video, which is then transmitted to the glasses for display.
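That round trip can be sketched as a simple per-frame loop. Everything below is a placeholder (the `Glasses` and `Host` classes are hypothetical stand-ins, not a real SDK); the point is only the data pathway: IMU data out, rendered frame back.

```python
class Glasses:
    """Placeholder for the glasses' side: IMU sensors plus a display."""
    def read_imu(self):
        # In reality this is a stream of head movement and posture data.
        return {"roll": 0.0, "pitch": -5.0, "yaw": 15.0}

    def display(self, frame):
        self.last_frame = frame

class Host:
    """Placeholder for the external computing unit (phone, PC, ...)."""
    def render(self, imu):
        # A real host re-renders the screen against the new head pose;
        # here we just tag the frame with the pose it was rendered for.
        return {"rendered_for": imu}

def one_frame(glasses, host):
    imu = glasses.read_imu()   # 1. IMU data leaves the glasses
    frame = host.render(imu)   # 2. the host computes the motion-rendered video
    glasses.display(frame)     # 3. the frame is transmitted back for display
    return frame
```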

However, due to the inconsistent performance of these "hosts", spatial computing software struggles to adapt universally across every device, resulting in a less-than-ideal implementation of 3DoF screens through software installation.

        Host Devices

In recent years, AR glasses companies have introduced self-developed, smart host devices to address interface compatibility and adaptability issues. These devices can either work independently or be used as a "spatial adapter", enabling 3DoF on additional devices. For example, XREAL created the XREAL Beam and XREAL Beam Pro to enable 3DoF for their range of AR glasses.

        Chip

XREAL's One / One Pro series glasses pioneered a significant technological breakthrough in the industry by integrating a self-developed chip directly into the device itself. This innovation made XREAL the first to achieve built-in 3DoF capabilities within the glasses themselves. By incorporating the computing component directly into the glasses, XREAL has eliminated dependency on external hosts for processing, allowing for seamless and consistent performance. This trailblazing approach not only enhances the user experience but also positions XREAL as a leader in the AR market, offering a glimpse into the future of self-contained AR technology.

        How to evaluate 3DoF Screen Performance?

        1. Screen Stability

        Many AR glasses encounter "drift" when using 3DoF spatial screens, meaning the virtual screens gradually shift out of position. Stability is typically assessed by latency, with "Motion to Photon" (M2P) latency being a key industry metric. M2P latency measures the total delay from the moment a user moves to when the corresponding visual adjustment is displayed.
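M2P latency is simply the elapsed time between those two events. A trivial sketch with hypothetical sample values (the function and numbers are illustrative, not measurements):

```python
def m2p_latency_ms(motion_t, photon_t):
    """Motion-to-photon latency in milliseconds: the delay between a
    head movement (motion_t) and the first updated frame reaching the
    display (photon_t). Timestamps are in seconds."""
    return (photon_t - motion_t) * 1000.0

# Hypothetical samples: an external-host pipeline vs. an on-glasses chip.
software_path = m2p_latency_ms(0.000, 0.022)  # ~22 ms: drift becomes visible
on_chip_path = m2p_latency_ms(0.000, 0.003)   # ~3 ms: imperceptible correction
```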

        Software

Since the data pathway in most software solutions involves external devices like smartphones, M2P latency remains high. This results in slower image correction, leading to instability in the 3DoF spatial screen.

        Self-developed Host Devices

        With self-developed host devices, the data transmission process is similar to the software solution, so the latency issue persists and cannot be fully resolved.

        Self-Developed Chip: Ultimate M2P Latency

        Through extensive optimization of the data pathway, M2P latency has been dramatically reduced from over 20ms to an industry-leading 3ms. When users interact with the 3DoF spatial screen, the visual correction occurs at a speed undetectable by the human eye, offering the most stable experience possible. This improvement minimizes stutter, jitter, blurring, flickering, and tearing during head or body movement. Additionally, it reduces power consumption and heat generation in the host device.

        2. Compatibility

        Software

1. Host Interface: The interface may not support DisplayPort (DP) or might require power to output video signals (the Nintendo Switch is a notable example). There might be other compatibility issues as well, such as incompatible cabling and unstable signal transmission.
        2. System Adaptation: Manufacturers may implement "optimizations" in systems that limit our desired image output and rendering capabilities. Insufficient support for app permissions and capabilities results in an inability to calculate the display position according to our expectations.
        3. Chip Adaptability: Due to different chip solutions used by terminal devices (like phones, PCs), there is a significant variance in experience across different platforms. Chips with lower computing power struggle to support the computational demands of spatial screens (3DoF), leading to issues like lag, screen tearing, and overheating.
        4. Incompatibility with Certain Devices: Specific gaming consoles such as the Nintendo Switch, PS, and Xbox cannot install relevant apps, preventing users from experiencing spatial screens (3DoF) through software solutions.

        Self-developed Host Devices:

Since a self-developed host device is a computing unit specifically designed for pairing with the glasses, it exhibits perfect compatibility with the glasses in terms of interface, system, and chip adaptability. As for pairing with certain gaming consoles, users can experience spatial screens through adapters or wireless streaming methods.

        Self-Developed Chip:

        Achieving the Ultimate Spatial Screen Experience Across Different Devices.

        In the past, when glasses were paired with various devices, differences in terminal performance often led to issues. Some devices, particularly those with lower processing power, struggled to handle the computational demands of spatial screens (3DoF), resulting in stuttering, screen tearing, and overheating. Now, the realization of the spatial screen experience no longer relies on smartphones or other terminals but is entirely achieved on the glasses themselves. This innovation eradicates dependency on the performance of external devices, enabling the glasses' spatial screen capability to be used across a wider range of devices. Users no longer need to worry about terminal compatibility—as long as the terminal device can output DP video, users can experience consistently excellent spatial screen performance. 

        Supporting Devices That Couldn't Previously Experience the Spatial Screen

        With the spatial screen functionality now integrated into the X1 chip within the glasses, devices that previously couldn't utilize spatial screens through apps—such as gaming consoles like the Switch, PS, and Xbox—no longer require assistance from other devices (like connecting to Beam or BP for streaming) to gain spatial screen capabilities. Users only need a pair of X1 chip-equipped glasses to enjoy the spatial screen experience (3DoF).

        3. Portability

        Software:

Installing apps on devices users already carry to achieve this feature offers comparatively high portability.

        Self-Developed Host Devices:

        Users must purchase an additional self-developed terminal to enable the spatial screen.

        Self-Developed Chip:

        The ultimate in portability is achieved as there is no longer a need to carry the host device.

Which is the Best 3DoF Solution in AR Glasses?


| | Software Installation | Self-Developed Host Devices (XREAL A2P + BEAM) | Self-Developed Chip (XREAL ONE) |
|---|---|---|---|
| Screen Stability | May Drift, Not Stable | Very Stable | Very Stable |
| M2P Latency | High Latency | Same as Software Solution | Ultimate Low Latency |
| Compatibility | Bad Compatibility | Perfect Match with Glasses | Perfect Match with Glasses, Almost Compatible with All Devices |
| Portability | Very Portable | Need to Carry 1 Extra Device | Very Portable |
