Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3

Introduction – Sorry, But It’s True

I have taken thousands of pictures through dozens of different headsets, and I noticed that the Apple Vision Pro (AVP) image is a little blurry, so I decided to investigate. Following up on my Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions article, this article compares the AVP to the Meta Quest 3 by photographing the same image at the same size in both headsets, and the results will surprise many.

I know the “instant experts” are singing the praises of the Vision Pro as “having such high resolution that there is no screen door effect,” but they don’t seem to understand that the screen door effect is hiding in plain sight, or should I say “blurry sight.” As mentioned last time, the AVP covers for its lower-than-human angular resolution by making everything bigger and bolder (the defaults, even for the small window mode setting, are pretty large).

While I’m causing controversies by showing evidence, I might as well point out that the AVP’s contrast and color uniformity are also slightly worse than the Meta Quest 3’s on anything but a nearly black image. This is because the issues with the AVP’s pancake optics dominate over the advantages of the AVP’s OLED microdisplays. This should not be a surprise: many people have reported “glow” coming from the AVP, particularly when watching movies. That “glow” is caused by unwanted reflections in the pancake optics.

If you click on any image in this article, you can access it at full resolution, as cropped from a 45-megapixel original image. The source image is on this blog’s Test Pattern Page. As is the usual practice of this blog, I will show my work below. If you disagree, please show your evidence.

Hiding the Screen Door Effect in Plain Sight with Blur

The numbers don’t lie. As I reported last time in Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions, the AVP’s peak center resolution is about 44.4 pixels per degree (PPD), well below the 80 PPD that Apple calls “retinal resolution,” so the pixel jaggies and screen door should be visible — if the optics were sharp. So why are so many reporting that the AVP’s resolution must be high because they don’t see the screen door effect? Because they are ignoring the sharpness of the optics.

Two factors affect the effective resolution: the angular resolution in PPD, and the sharpness and contrast of the optics, commonly measured by the Modulation Transfer Function (MTF — see the Appendix on MTF).

People do not see the screen door effect with the AVP because the display is slightly out of focus/blurry. Low-pass filtering/blurring is the classic way to reduce aliasing and screen door effects. When playing with the AVP’s optics, I noticed that they have to be almost touching the display to be in focus. The AVP’s panel appears to be recessed by about 1 millimeter (roughly judging by eye) beyond the best focus distance. This is just enough that the thin gaps between pixels are out of focus while the pixels themselves are only slightly blurred. There are other potential explanations for the blur, including the microlenses over the OLED panel or possibly a softening film on top of the panel. Still, the focus seems to be the most likely cause of the blurring.
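To illustrate the mechanism, below is a minimal simulation (my own sketch, not Apple’s implementation): a lit pixel grid with dark inter-pixel gaps is blurred with a Gaussian as a stand-in for optical defocus. The pixel and gap sizes are illustrative assumptions, not AVP measurements.

```python
# Simulate how a slight defocus hides the screen door effect: the contrast
# of the narrow inter-pixel gaps collapses once the blur radius approaches
# the gap width, long before larger features lose much contrast.
import numpy as np
from scipy.ndimage import gaussian_filter

PIXEL = 8   # camera samples per display pixel (assumed)
GAP = 2     # samples of dark gap between pixels (assumed)
N = 16      # pixels per side

# One cell: a lit pixel aperture followed by a dark gap; tile into a panel.
cell = np.r_[np.ones(PIXEL - GAP), np.zeros(GAP)]
row = np.tile(cell, N)
grid = np.outer(row, row)  # fully lit 2-D panel with a screen door pattern

def contrast(img):
    # Contrast as defined in the Appendix: (Imax - Imin) / (Imax + Imin)
    return (img.max() - img.min()) / (img.max() + img.min())

for sigma in (0.0, 1.0, 2.0, 3.0):  # defocus blur radius in camera samples
    print(f"sigma={sigma}: screen door contrast = "
          f"{contrast(gaussian_filter(grid, sigma)):.2f}")
```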

Full Image Pictures from the Center 46 Degrees of the FOV

I’m going to start with high-resolution pictures through the optics. You won’t be able to see any detail without clicking on them to view them at full resolution, but you may discern that the MQ3 feels sharper by looking at the progressively smaller fonts. This is true even in the center of the optics (square “34” below), before the AVP’s foveated rendering results in a very large blur at the outside of the image (squares 11, 21, 31, 41, 51, and 61). Later, I will show a series of crops comparing the central regions side by side in more detail.

The pictures below were taken by a Canon R5 (45-megapixel) camera with a 16mm lens at f8. With a combination of window sizing and moving the headset, I created the same size image on the Apple Vision Pro and Meta Quest 3 to give a fair comparison (yes, it took a lot of time). A MacBook Pro M3 Pro was casting the AVP image, and the Meta Quest 3 was running the Immersed application (to get a flat image) mirroring a PC laptop. For reference, I added a picture of a 28″ LCD monitor taken from about 30″ to give approximately the same FOV as the image from a conventional 4K monitor (this monitor could resolve the single pixels of four of these 1080p images, although you would have to have very good vision to see them distinctly).

Medium Close-Up Comparison

Below are crops from near the center of the AVP image (left), the 28″ monitor (center), and the MQ3 image (right). The red circle on the AVP image over the number 34 is from the eye-tracking pointer being on (also used to help align and focus the camera). The blur of the AVP is more evident in the larger view.

Extreme Close-Up of AVP and MQ3

Below, I crop even closer to see the details (all the images above are at the same resolution), with the AVP on the top and the MQ3 on the bottom. Some things to note:

  1. Neither the AVP nor MQ3 can resolve the 1-pixel lines, even though a cheap 1080p monitor would show them distinctly.
  2. While the MQ3 has more jaggies and the screen door effect, it is noticeably sharper.
  3. Looking at the space between the circle and the 3-pixel-wide lines pointed at by the red arrow, notice that the AVP has less contrast (is less black) than the MQ3.
  4. Neither the AVP nor MQ3 can resolve the 1-pixel-wide lines correctly, but the MQ3’s 2- and 3-pixel-wide lines, along with all the text, are significantly sharper and have higher contrast than the AVP’s. Yes, the effective resolution of the MQ3 is objectively better than the AVP’s.
  5. Some color moiré can be seen in the MQ3 image, a color artifact due to the camera’s Bayer filter (not seen by the eye) and the relative sharpness of the MQ3 optics. The camera can “see” the MQ3’s LCD color filters through the optics.

Experiment with Slightly Blurring the Meta Quest 3

A natural question is whether Meta should have made the MQ3’s optics slightly out of focus to hide the screen door effect. As a quick experiment, I applied a slight (Gaussian) blur to the MQ3’s image (middle image below). There is room to blur it while still having a higher effective resolution than the AVP. The AVP still has more pixels, and the person/elf image looks softer on the slightly blurred MQ3. The lines test high-contrast resolution (and optical reflections), while the photograph shows what happens to a lower-contrast, more natural image with more pixel detail.

AVP’s Issues with High-Resolution Content

While Apple markets each display as having the same number of pixels as a 4K monitor (but differently shaped and not as wide), the resolution is reduced by multiple factors, including those listed below:

  1. The oval-shaped optics cut off about 25-30% of the pixels.
  2. The outer part of the optics has poor resolution (about 1/3rd the pixels per degree of the center) and poor color.
  3. A rectangular image must be inscribed inside the “good” part of the oval-shaped optics with a margin to support head movement. While the combined display might have a ~100-degree FOV, there is only about a 45- to 50-degree sweet spot.
  4. Any pixels in the source image must be scaled and mapped into the destination pixels. For any high-resolution content, this can cause more than a 2x (linear) loss in resolution, and much worse if it aliases (see the resampling sketch after this list). For more on the scaling issues, see my articles on Apple Vision Pro (Part 5A, 5B, & 5C).
  5. As part of #4 above or in a separate process, the image must be corrected for optical distortion and color as a function of eye tracking, causing further image degradation.
  6. Scintillation and wiggling of high-resolution content with any head movement.
  7. Blurring by the optics.
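To make item 4 concrete, here is a minimal sketch (illustrative only, not Apple’s actual pipeline) of why resampling is so destructive to 1-pixel detail: a 1-pixel line pattern survives a 1:1 mapping but collapses when remapped at a non-integer scale, as happens when a flat window is mapped into 3-D space. The scale factors are assumptions for illustration.

```python
# Resample a 1-pixel black/white line pattern (like the test pattern) at
# various destination scales and report how much modulation survives.
import numpy as np
from PIL import Image

src = np.tile(np.array([[0, 255]], dtype=np.uint8), (64, 32))  # 1-px lines
img = Image.fromarray(src, mode="L")

def modulation(a):
    a = np.asarray(a, dtype=float)
    return a.std() / a.mean()  # 1.0 for a perfect 0/255 line pattern

for scale in (1.0, 0.9, 0.7, 0.5):  # destination pixels per source pixel
    w = max(1, round(img.width * scale))
    out = img.resize((w, img.height), Image.BILINEAR)
    print(f"scale={scale}: surviving modulation = {modulation(out):.2f}")
# At 1.0 the lines are perfect; below 1.0 a 1-pixel pattern is under the
# Nyquist rate and the modulation collapses, consistent with the >2x loss
# described above.
```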

The net of the above, as demonstrated by the photographs through the optics shown earlier, is that the AVP can’t accurately display a detailed 1920×1080 (1080p) image.

AVP Lacks “Information Density”

Making everything bigger, including short messages and videos, can work for low-information-density applications. If anything, the AVP demonstrates that very high resolution is less important for movies than people think (watching movies is a notoriously bad way to judge resolution).

As discussed last time, the AVP makes up for its less-than-human angular resolution by making everything big to hide the issue. But making things bigger means that the “information density” goes down: the eyes and head have to move more to see the same amount of content, and less overall content can be seen simultaneously. Consider a spreadsheet: fewer rows and columns will be in the sweet spot of a person’s vision, and less of the spreadsheet will be visible without needing to turn your head.

This blog’s article FOV Obsession discusses the issue of eye movement and fatigue using information from Thad Starner’s 2019 Photonics West AR/VR/MR presentation. The key point is that the eye does not normally want to move more than 10 degrees for an extended period. The graph below left is for a monocular display where the text does not move with the head turning. Starner points out that a typical newspaper column is only about 6.6 degrees wide. It is also well known that when reading content more than ~30 degrees wide, even for a short period, people will turn their heads rather than move their eyes. Making text content bigger to make it legible will necessitate more eye and head movement to see/read the same amount of content, likely leading to fatigue (I would like to see a study of this issue).
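These angles are simple geometry for anyone who wants to check them. A small helper (the sizes and distances below are my own illustrative assumptions, not Starner’s figures):

```python
# Visual angle subtended by an object of a given width at a given distance.
import math

def subtended_deg(width: float, distance: float) -> float:
    """Angle in degrees; width and distance in the same units."""
    return math.degrees(2 * math.atan(width / (2 * distance)))

# A 10 cm wide text column viewed at 60 cm:
print(subtended_deg(10, 60))  # ~9.5 deg, near the ~10-degree comfort limit

# Double the text size (to compensate for low PPD) and the same content
# now spans ~18.9 deg, pushing reading toward head motion, not eye motion:
print(subtended_deg(20, 60))
```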

ANSI-Like Contrast

A standard way to measure contrast is using a black-and-white checkerboard pattern, often called ANSI Contrast. It turns out that with a large checkerboard pattern, the AVP and MQ3 have very similar contrast ratios. For the picture below, I made the checkerboard big enough to fill about 70 degrees horizontally of each device’s FOV. The optical reflections inside the AVP’s optics cancel out the inherently high contrast of the OLED displays inside the AVP.
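For anyone wanting to reproduce this, below is a minimal sketch of an ANSI-style measurement (my own illustration, not necessarily the exact procedure used for the photos here). It generates the checkerboard and computes the white-to-black ratio from the center of each square, assuming a linear (e.g., RAW) camera response:

```python
import numpy as np

def checkerboard(cells: int = 4, cell_px: int = 256) -> np.ndarray:
    """Full-white/full-black checkerboard test pattern."""
    layout = np.indices((cells, cells)).sum(axis=0) % 2  # 0/1 checker layout
    return np.kron(layout, np.ones((cell_px, cell_px))) * 255.0

def ansi_contrast(photo: np.ndarray, cells: int = 4) -> float:
    """photo: linear grayscale array cropped to the displayed checkerboard.
    Returns average-white / average-black, sampled at each cell's center
    to avoid the transitions at the edges of the squares."""
    h, w = photo.shape
    layout = np.indices((cells, cells)).sum(axis=0) % 2
    ys = ((np.arange(cells) + 0.5) * h / cells).astype(int)
    xs = ((np.arange(cells) + 0.5) * w / cells).astype(int)
    samples = photo[ys][:, xs]
    return samples[layout == 1].mean() / samples[layout == 0].mean()
```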

The AVP Has Worse Color Uniformity than the MQ3

You may be able to tell that the AVP has a slightly pink color in the center white squares. As I move my head around, I see the pink region move with it. Part of the AVP’s processing is used to correct color based on eye tracking. Most of the time, the AVP does an OK job, but it can’t perfectly correct for color issues with the optics, which becomes apparent in large white areas. The issues are most apparent with head and eye movement. Sometimes, by Apple’s admission, the correction can go terribly wrong if it has problems with eye tracking.

Using the same images above and increasing the color saturation in both by the same amount makes the color issues more apparent. The MQ3’s whites shift only slightly in color, but the AVP turns pink in the center and cyan toward the outside.

The AVP’s “aggressive” optical design has about 1.6x the magnification of the MQ3’s and, as discussed last time, has a curved quarter waveplate (QWP). Waveplates modify polarized light and are wavelength (color) and angle-of-light dependent. Having repeatedly switched between the AVP and MQ3, I find the MQ3 has better color uniformity, particularly noticeable when taking one off and quickly putting the other on.

Conclusion and Comments

As a complete product (more on this in future articles), the AVP is superior to the Meta Quest Pro, Quest 3, or any other passthrough mixed reality headset. Still, the AVP’s effective resolution is less than the pixel differences would suggest due to the softer/blurrier optics.

While the AVP’s pixel resolution is better than the Quest Pro’s and Quest 3’s, its effective resolution after the optics is worse on high-contrast images. Due to its somewhat higher PPD, the AVP looks better than the MQP and MQ3 on “natural” lower-contrast content. The AVP image is much worse than a cheap monitor displaying high-resolution, high-contrast content. Effectively, what the AVP supports is multiple low-angular-resolution monitors.

And before anyone makes me out to be a Meta fanboy, please read my series of articles on the Meta Quest Pro. I’m not saying the MQ3 is better than the AVP. I am saying that the MQ3 is objectively sharper and has better color uniformity. Apple and Meta don’t get different physics; they make different trade-offs, which I am pointing out.

The AVP and any VR/MR headset will fare much better with “movie” and video content with few high-contrast edges; most “natural” content is also low in detail and pixel-to-pixel contrast (which is why compression works so well with pictures and movies). I must also caution that we are still in the “wild enthusiasm stage,” where the everyday problems with a technology get overlooked.

In the best case, the AVP in the center of the display gives the user a ~20/30 vision view of its direct (non-passthrough) content and worse when using passthrough (20/35 to 20/50). Certainly, some people will find the AVP useful. But it is still a technogeek toy. It will impress people the way 3-D movies did over a decade ago. As a reminder, 3-D TV peaked at 41.45 million units in 2012 before disappearing a few years later.
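The ~20/30 figure follows from simple arithmetic if you accept the common rule of thumb (an assumption, not an Apple spec) that 20/20 acuity corresponds to resolving about one arcminute, i.e., ~60 pixels per degree:

```python
# Back-of-envelope Snellen estimate from pixels per degree (PPD).
def snellen_from_ppd(ppd: float) -> str:
    return f"20/{round(20 * 60 / ppd)}"  # 60 PPD taken as the 20/20 baseline

print(snellen_from_ppd(44.4))  # ~20/27 from pixel pitch alone
# Optical blur pushes the delivered acuity below this geometric ceiling,
# consistent with ~20/30 for direct content and 20/35-20/50 for passthrough.
```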

Making a headset display is like n-dimensional chess; more than 20 major factors must be improved, and improving one typically worsens other factors. These factors include higher resolution, wider FOV, peripheral vision and safety issues, lower power, smaller, less weight, better optics, better cameras, more cameras and sensors, and so on. And people want all these improvements while drastically reducing the cost. I think too much is being made about the cost, as the AVP is about right regarding the cost for a new technology when adjusted for inflation; I’m worried about the other 20 problems that must be fixed to have a mass-market product.

Appendix – Modulation Transfer Function (MTF)

MTF is measured by displaying a series of lines of equal width and spacing and measuring the difference between the white and black levels as the size and spacing of the lines change. By convention, the 50% contrast point is typically used to specify the MTF. But note that contrast is defined as (Imax-Imin)/(Imax+Imin), so to achieve 50% contrast, the black level must be 1/3rd of the white level. The figure (below) shows how the response changes with the line spacing.
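Restating the 50% criterion symbolically (the same definition as above):

```latex
C = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},
\qquad
C = \tfrac{1}{2}
\;\Longrightarrow\;
2\,(I_{\max} - I_{\min}) = I_{\max} + I_{\min}
\;\Longrightarrow\;
I_{\min} = \tfrac{1}{3}\,I_{\max}.
```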

The MTF of the optics is reduced both by the lack of sharpness of the optics and by any internal reflections, which in turn reduce contrast.


Comments

  1. Fantastic technical post as usual. Thank you! Have you seen the meme that shows a “well designed” 4-way intersection? The sidewalks are all uniform, and each corner has a grassy area. The image is designed to show how engineers see the intersection in its perfect state. But you’ll notice in the image that the grass has dirt paths that cut off the corners while traveling on foot. I feel like there’s an analogy to be had here. I love the technical details, but it’s the lived experience that matters more. Will it do the things that Apple (or anyone) claims? I’m still in this headset for an average of 7 hours per day since launch day. I can’t imagine that changing any time soon. This minute, I’m writing this reply in Safari on a MacBook Pro using a Magic Keyboard and Trackpad paired to the headset. There are several native apps (and iPad apps) surrounding me. The extra dimensions of work are now normal, and missed when not present. This device will improve, but there will always be something technically insufficient.

  2. Numbers don’t lie. The extensive coverage Karl has been doing (for free!) for many years doesn’t leave much room to question the results. What is interesting here is that so far, every other review that compared the clarity and ability to read text on the AVP vs. the Quest 3 unequivocally favored the AVP, with the Quest rated unusable vs. the AVP (on the edge of) usable as a monitor replacement. Now, I don’t think there is a contradiction, just some aspect not discussed that favors the AVP by “experience.” In any case, looking forward to seeing more analysis from Karl. Thank you, Karl, for doing all these deep dives for so many years.

  3. Don’t know what the point is with all these tests when you can just put on both headsets, and the Vision Pro is clearly in a league of its own. I own both, and the graphical fidelity on the Vision Pro is far superior; it isn’t even close.

    • Define graphical fidelity; the article shows you how that is just not true for text. It also matches all the issues I had with my AVP in terms of text clarity. And that’s before we start talking about blur caused by head rotation to read things. Maybe you don’t notice these things; if not, bully for you! For the rest of us it’s important. For an article you think is pointless, you spent time replying; I guess it challenges the rightness of your decision to purchase or something. tl;dr I don’t know what the point of your post was; you can tell the AVP has image issues that need to be explored.

    • The point is to understand what the AVP is doing and how it performs. I’m trying to separate the variables that go into displaying an image. I understand that the AVP’s image will seem more pleasing to most people.

      Clearly the screen door effect of the MQ3 is distracting and less pleasing to the eye than the MQP, but it is also optically sharper. The AVP seems to have a bit warmer white point out of the box (there are adjustments which I have not tried) which may seem more pleasing to many people. But then the AVP’s white point varies across the image slightly and is affected by the eye tracking. The MQ3 has objectively better color uniformity.

      Can you please explain what exactly you mean by the “graphical fidelity” being better with the AVP? Can you give examples?

    • Ah and there is the fan boy response. Was looking for it.
      I also bet you don’t have both and are just jumping in to defend crapple. Why? Because you don’t want to admit you were scammed out of $3,500, if you even did buy it.

      • Phillip: “Ah and there is the fan boy response”

        Also Phillip: “crapple”

        (I don’t like Apple myself and I don’t think AVP is the second coming of Christ, but calling someone fanboy and using the term “crapple” in the same sentence is hypocritical – and funny)

  4. Thanks for the deep dive into the topic! Aside from the static blurring, do you have any comments related to motion blur on the AVP? There have been some critiques and comments related to that in some of the reviews.

  5. Thank you, this explains all the issues I saw for the week I had mine before I returned it.

    Would be great if you could look into why the AVP, when the head is rotating (turning left/right or looking up/down), is so bad compared to when shifting weight left to right, causing lateral movement. I found the Quest 3 blurred much less during movement for passthrough and a little less for rendered content. I also found that looking at the Apple Watch on my left arm was sharper on the Quest 3 than on my AVP.

  6. Bravo for these tests and the effort Karl puts into them – not necessarily to disparage any particular display tech but simply to describe and show experimental results to communicate their properties as the current state of the art. It is unfortunate that the market is fixated on “better than a previous product” rather than how to ultimately get a wearable display to be without compromise in image quality compared to a desktop monitor. I applaud the comparison to a 1080p monitor because if the image quality isn’t at least that good then there are a lot of reasons for a mass market to say it is just not worth it for the different visual experience a headset can offer. Success in this industry, especially when the business model is based on content and app development, requires a significant fraction of a billion units. We won’t get there if it’s easier (and better) to look at a desktop monitor.

    There is a lot of misinformation out there. It is amazing how many people think an OLED or uLED display is the holy grail for future headsets due to native contrast when, as Karl points out, the lens is the limiting factor for contrast and clarity. It is also a bit amazing from the development side to hear people complain about pixel structure visibility when that visibility is often the best testament to the quality of an optical system. Removal of pixel structures was a critical transition era in the home theater projection industry for many years. Nobody wanted blurry images. Spatial “filters” went from square-wave band-pass to Gaussian to ultimately twin delta functions (i.e., image splitting) to remove pixel visibility without sacrificing perceived clarity. There is no pancake lens design at this point that provides 4K MTF over the full field through a non-pupil-forming optical system. It’s kind of a shame that anyone would market that limitation as a positive in the context of not being able to see pixel structure. I almost think it would be better to try and show the highest MTF possible regardless of pixel structure and allow an aftermarket to evolve to solve that problem if people object. But first things first. Let’s get to a headset that can actually present the quality of a simple 1080p monitor. Wouldn’t that be something.

  7. Karl great article as usual but I need to ask why are you comparing sharpness by streaming images from computers to the devices? You have to know there is major compression taking place with streaming. Why didn’t you open the images directly on the devices?

    • A reasonable question, but I don’t think they are doing image compression when they mirror the PC/Mac.

      I have gone back and compared various methods of putting up an image with the headsets and don’t see a difference with bitmaps. In the case of the Meta Quest 3, it seems all the “native” modes like to make a highly curved virtual monitor, whereas by using the Immersed app, I can get a flat virtual monitor. I have found with the default Quest native browser that the image looks much worse if I make the window the same size as in the remote desktop in Immersed.

      • You’re likely aware of this already Karl, but worth pointing out: When using Immersed with a Mac, there is a “Retina Display Quality” setting (inside the “Advanced” menu, in the Mac settings, as opposed to the settings you change within the app running on the HMD).

        It’s easiest to understand if you imagine you’re running your display at exactly 2x pixel density, so for the sake of illustrating that, let’s say you’re running a 13.3″ MacBook Pro (which has a display panel with a resolution of 2560×1600) at 1280×800 (which is one of the “short listed” options, but the default setting is actually 1440×900, or 1.77̅x density). At the default “Retina Display Quality” setting of “0”, the virtual monitor will have a resolution of 1280×800. At a setting of “10”, the virtual monitor will have a resolution of 2560×1600, just like the physical display.

        I just asked on the Immersed Discord, and it sounds like it’s always “2x” the “scaled resolution”, so 1440×900 + “10” would be 2880×1800, which is greater than the physical display’s maximum resolution. Presumably “5” corresponds to “1.5x”, or 2160×1350.

        Obviously the “effective resolution” reaching your eyes will be limited by many factors, enumerated in this article and others on this website; I’m just talking about the resolution being sent from a Mac to the Immersed app running on an HMD.

      • Thanks, I didn’t realize that the Immersed App worked on the Apple Vision Pro (I have been using it on the Quest products). I will give it a try at some point. I’m inundated with things to do at this point.

        I have tried various resolution settings when mirroring the MacBook Pro 14″ to the AVP, and have not seen a difference in what I see with the test images (mostly 1920×1080) I have tried.

        One of the simplest ways to transform a flat image into 3-D is to upscale it to a higher resolution and then transform that upscaled image into 3-D space. If you don’t first upscale an image that is close to 1-to-1 with the output resolution, transforming it into 3-D space will cause worse degradation from the resampling.

  8. There also seems to be some significant variability in AVP optics. I’m on my second unit, and the first one was sharper overall, however it had a large blurry spot on the right side of left lens.

    The replacement unit is a bit less sharp subjectively, but the sweet spot is larger. Some extra blur (not confined to the very edges) is still present, but instead of being all on the right side of the left lens, it is spread between the side and the top.

    The second unit also has a small spot in the center of vision (if looking straight), also only in the left lens, that has a bit of blurriness, but also some kind of ripple distortion which is mostly noticeable with head movement.

    I’ll try one more unit to see if I get a better one, as I still like to use it as a portable monitor.

    MQ3 is sharper indeed, but I still prefer AVP when working with text – MQ3 has this CRT vibe about it (and I can see flickering on wide white areas too)!!!

    • Thanks for the information on the optical issues you experienced.

      I have talked to other optics people and they agree that the AVP optics are a little “soft.”

      I would definitely agree the AVP is a much better overall product. I think the screen door effect which is prominent with the MQ3 detracts from the overall image quality impression even though it is a bit sharper.

      My eyes are not very sensitive to flicker. I lived in the UK for many months back in the days of CRT TV running at 50Hz and it “cured” me of seeing flicker.

      • I’ve just got the third unit, and while it still has worse edge-to-edge clarity than my Quest 3, it’s better than the other two. The image sharpness is more uniform all around, the blurry areas are present, but tolerable, and it looks sharper overall, even in pass through.

        I had chromatic aberrations in the second one, and they are now gone too. I will do more tests while I still have both, but it’s now obvious that the lenses vary greatly from unit to unit.

      • I’ve done a bit more back to back comparisons between both AVPs (subjective, as I don’t have a camera setup that can do through-the-optics pictures), but I did it using a Settings window positioned at the same height and distance from me (as indicated by a fake shadow).

        The new unit is noticeably sharper overall, and the left and right lenses seem to be identical (or have a very minor difference). The first two units had a big difference between left and right. Right was always OK; left was off.

        I mocked up blurry areas in my post here: https://forums.macrumors.com/threads/lets-talk-about-blur-and-distortions.2420684/

        The passthrough also seems to be sharper – I can see both strokes in the double quote (“) inside my watch complication on the new unit, and the previous one blends them together.

        I wonder if Apple finally got their manufacturing processes dialed in. Both previous units were probably from the first production batch – I pre-ordered the first one, and bought the last remaining one in the area as a replacement two weeks later. They were out of stock for a few days, then a new batch arrived, and my third unit is from that new batch.

        Actually, I almost never buy any Apple product at launch and wait for a month or two to avoid early production issues. This time I was too impatient, I guess. Maybe it would make sense to wait even longer until replacing the second unit, but this one is good enough to keep.

      • Thanks,

        I have been trying a bunch of things to see if there are some software factors. Based on my latest eye observations, I can report that the AVP output image is different based on whether one opens the same PNG file via folders on the headset, the Safari browser on the headset, the native file on the MacBook being mirrored to the AVP, or the file being viewed in the Safari browser on the MacBook and then mirrored to the AVP. The file opened natively on the headset seems to have lines that alias more and are thinner (using the same white text and lines on a black background). The case where the MacBook displays the native file and then mirrors it appears to have slightly thicker white lines and less aliasing, and using the Safari browser, either natively or on the MacBook with mirroring, seems to have the thickest white lines.

        The biggest difference is between the native file on the headset and any of the other ways of viewing it. All the white lines appear much thinner when the file is opened natively on the AVP from a folder. The differences between the others are much more subtle (and some may not be different, as it is hard to get them all exactly the same size).

      • Another interesting observation. Again, that could be subjective, but since the new unit arrived with 1.0.3 (I had 1.1b4 on another one), I could compare things before and after the upgrade. Looks like Apple improved rendering in 1.1, at least for Mac virtual screen. It became sharper after upgrading to 1.1.

        I also noticed the same effect when using your test pattern from this article when viewing it via Mac virtual screen in Safari. I could resolve more lines after the upgrade with the same window positioning. So, the sharpness issue seems to be a factor of multiple things, one of which is software.

  9. Karl, any reason why you didn’t display the image directly in both headsets? You can just use flat window mode in Quest and use the same size window for AVP.

    I’m thinking some loss of sharpness comes from casting the Mac screen – I noticed that it is noticeably less sharp than when displayed directly.

    Also, using Environment vs passthrough seems to make everything even less sharp on AVP.

    • I have found that the Quest 3 native browser does a very poor job of displaying images. I got much better results with the Immersed mirror of my PC. If by the “flat screen” mode you mean the near mode (which is flat), that image is terrible (looks like a poorly scaled image) compared to what I see in Immersed.

      As far as the AVP, I have not seen any difference in bitmap images between using the Safari on my blog, the MacBook mirror, or running the file locally. I do see a significant difference between running say an Excel Spreadsheet on the Mac versus natively in the headset.

      I have not checked the “environment vs passthrough” difference. I typically run a dark environment when taking pictures to keep from adding light that would reduce contrast.

      • Environments definitely make text look “softer”. I use light environments to reduce glare, but I’ve just tried looking at my Mac virtual display in the dark Moon environment, and I can definitely see it getting slightly out of focus when the environment is active and going back to a crisp(er) image when the environment is turned off.

        As for the variability between units, it’s not just that; the left and right lenses have different quality issues too! What’s puzzling is that the right side was always much better than the left, at least for me and for one other poster on the MacRumors forum. I thought it could be some weird software bug, but then different units manifest different distortion/blur patterns…

        I wonder if there is some hidden diagnostic mode that would show max quality static render of a test grid without foveated rendering or any dynamic distortion/color correction…

      • Thanks for the information. I will try to investigate. It is more awkward to shoot via the right eye, as the camera has to be upside down to fit, but I have an adapter and will give it a try.

      • If you just want to display images sharply on Quest 3, please give my app immerGallery a try. There should be no need to stream it over network. You can email me and I can give you an internal flag for it that may be better if you use it for photographing through the lens.

  10. I’m pretty sure that your test is showing the unfoveated rendering. I looked at it in an Apple Vision Pro, with the image filling about 45 degrees FOV in safari, and the results were far clearer than your results.

    • Thanks for the feedback and checking my results. There could be some variability from unit to unit in focus, as Eugene has reported in the comments above. I’m not the only person finding that the optics in the AVP are not as sharp as the MQ3’s and that there are more reflections/loss of contrast. The blur I’m seeing is equivalent to a Gaussian blur of about 1/4 to 1/2 of an AVP pixel. Also, the camera, being more “objective,” can “see” things the eye does not see with all the visual cortex processing.

      Foveated rendering is clearly going on, and you can even see the foveation boundary in the image. If you look in Cell 31, for example, of the full AVP image at full resolution, you can see it. I have just uploaded a crop of Cell 31 and have indicated the foveation boundary with red lines. If you look where the red arrow is pointing, you can see the boundary where the single-pixel-wide line crosses it. Below is a link:
      https://kguttag.com/wp-content/uploads/2024/03/Cell-31-crop-showing-foveate-boundary-copy.jpg

      • Man, this thing sucks at typing. So when you go to this website on the AVP and make the Safari window roughly 45°, which I accomplished by folding a piece of paper along its 90° angle and tracing the sight lines from the corner, the source image and the results image look equivalent to you?

        For me, I put your results and your source test pattern in different tabs, and switching back and forth between them, they look very, very different.

        So I would be very curious if doing a test with your eyes on your output versus your source seems to confirm your results because for me it does not.

      • I’m sorry, but I can’t follow what you are doing. I have put up windows from the MacBook and a safari browser window side by side at the same size and they look to be the same.

      • Sorry if I was unclear. My question is, if you wear the AVP, visit this website, and look at the test pattern while the browser window is sized to occupy roughly 45 degrees, does what you see match what your camera recorded?

        I can look at both the original test pattern you provided and the result your camera captured, and they are drastically different. This difference is so pronounced that I speculated it might be foveated rendering, but it’s extreme. The “Arial 6 point” is a striped blob in your test, but I can vaguely make out the dot of the eye, even though it does look a bit like it says ”anal 8 point”. Haha. Still, the blur you have described does not look at all like what I see in the HMD.

        So, to explain the difference, if it’s not foveated rendering, a quick diagnostic would be to see if, when you visit this page in the AVP and look at the same test pattern, the results match what your camera collected. Then we could rule out whether it is a manufacturing problem or, potentially, your Canon camera’s focus.

        But I can confirm that what I see in the AVP is not similar to the results your camera captured.

      • I believe you are being earnest. It is a close call whether the 2-pixel-wide lines are clearly visible with a 45-degree FOV. I can see them both “live” and in the picture if I put it up on my 4K monitor, but the Modulation Transfer Function is less than 20%. I’m going to try to take pictures through the right eye (it involves mounting my camera upside down; I have an adapter), so I haven’t closed the book on this subject.

        There are a lot of variables. It is also possible that you don’t appreciate how much “image enhancement” your human visual system is performing. We may also be measuring the 45 degrees differently: I’m working off the camera image, as I know the lens has about a 97-degree FOV (BEFORE cropping). There may also be differences in lenses in the various AVPs and right and left eyes.

        Regardless, I have corroboration from optical experts that the AVP’s optics are less sharp than the MQ3 and others. So I think we are debating the degree to which it is an issue.

      • I can’t believe I spelled “i” as “eye”. One thing is clear, this is not an ideal device for debating strangers on the internet 😄

      • Is just wearing it not an option?

        There are, of course, many opportunities for error when extrapolating insights through simulation.

        To get precise measurements requires a sophisticated rig, but to determine whether the rig is capturing roughly representative media should be as simple as putting it on for a few minutes.

        Is what you see in HMD consistent with what your camera captures?

      • No, he is not willing to saw out half his head and glue an SLR camera with the required resolution for this kind of test in there.

        Yes, he says multiple times in the post that the objective optical results he gets from the camera match his subjective experience in the HMD.

      • On the contrary, I have seen no mention that he ever attempted to position the test pattern in a roughly 45-degree area of the FOV while wearing the device.

        It would be a useful diagnostic, because what his results show is nothing like what I see running that simple experiment.

        I also notice that he wears glasses. Did he have the presence of mind to get un-corrected inserts before placing his camera in the HMD?

        Anyways, I can clearly see that the test-image, while spanning 45 degrees of FOV, is not nearly as blurry as what he captured.

        The only remaining question is why.

  11. Karl – I’m not sure why the system identifies me as hifipix but can you change my account name to just Shawn Kelly?

  12. Hello.

    You write: “AVP is superior to the Meta Quest Pro, Quest 3, or any other passthrough mixed reality headset.”

    Have you tested Varjo xr4 focal edition?

    • Due to a mix-up at CES, I didn’t get to try the Varjo xr4, so I should have made the caveat that I hadn’t compared the Varjo xr4.

  13. Karl, did you adjust the EV of your camera when taking pictures between the AVP and MQ3? Decrease ISO or aperture or anything?

    Given the difference in brightness between the displays, could the difference in blurriness be explained by over/under exposure?

    • Actually, and I should have mentioned this, I shot both with the exact same camera settings: ISO 800, aperture f8, and the same 1/60th shutter speed. The AVP is only about 5-10% brighter than the MQ3 when both are at full brightness (not the insane/wrong 20X brighter that multiple sources have reported), and I did adjust the MQ3 in post to get it to the same white level as the AVP.

      • Can you measure max brightness for the AVP (say, an HDR 1000-nit MP4, minimum AVP brightness setting, and environment set to Dark)? These settings should give max brightness.

      • First, I don’t have the type of measurement equipment that is perfectly accurate when measuring into a headset. I am working with other companies that have such equipment to get better measurements. That said, from what I can see, the AVP has less than 100 nits and is about 10% brighter than the Quest 3, with both at full brightness and a mostly white display. It could be that with the right test condition, a mostly dark background, the AVP can possibly get brighter over a small area (this is true of most OLED displays).

        I have seen bogus reports of the AVP having 5,000 nits, which is pure fantasy. First, you have to realize that the AVP’s pancake optics only let through about 10% of the display’s light. I also don’t think the large ~4K OLED microdisplays are going to be driven so hard that they would burn out quickly; thus, the display is likely putting out about 1,000 nits, which at 10% gives about 100 nits.
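        Spelling out the arithmetic behind that estimate (both numbers are the rough figures above, not measurements):

```python
# Rough nits-to-the-eye estimate; both inputs are the estimates above.
display_nits = 1000         # plausible sustained OLED microdisplay output
pancake_throughput = 0.10   # ~10% of the light survives the pancake optics
print(display_nits * pancake_throughput)  # ~100 nits, nowhere near 5,000
```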

  14. Hi, may I ask if you happened to be capturing the test image via a fully immersed application on the AVP? If you did, you should know a few things, because it may impact your conclusions:
    – Immersed Drawables are limited to 1920×1840, currently–I’m told by Apple this is a bandwidth limit, but I’m skeptical.
    – Immersed applications have no foveated rendering, because wakeboardd (the daemon which handles layer creation) checks if the app has a private Apple eye tracking entitlement. If the check fails, the resolution is halved and a fixed variable rasterization map is applied with (allegedly) 26PPD
    – If foveation is disabled by the app itself, it seems to use a 20PPD fixed foveation rasterization instead.
    – There is at least one drawable render mode that doubles only the vertical resolution–I haven’t figured out why. Maybe it’s nice for text somehow.

    • I’m using the MacBook cast onto the AVP, the Safari browser cast onto the AVP, and the image downloaded onto the AVP, and when they are sized the same, I get very similar results.

      I’m clearly seeing foveated rendering. When I move my head, I can see the foveated boundaries change, and the still images show the foveated boundaries if you inspect them closely, most evident in rectangle 31 (https://kguttag.com/wp-content/uploads/2024/03/Cell-31-crop-showing-foveate-boundary-copy.jpg)

      I don’t follow everything you described, as I am not an Apple developer, but if true, it sounds like a bizarrely bad way to render. If they really limit rendering to 1920×1840, the resampling after 3-D rendering is going to kill resolution. Also, PPD should be post-optics; I don’t know what a fixed PPD even means here, as the AVP’s optics yield variable PPD from about 44.4 in the center to about 15 at the outsides (depending on what you consider the full FOV), with an average PPD in the 35 range.

      Could you point me to a reference for how Apple says they are rendering?

  15. Hi!

    I’m wondering, have you tried the developer video capture feature in Xcode/Reality Composer Pro as a way to get rid of foveated rendering for testing that does not require it?

    It claims to disable foveated rendering and some other unspecified optimizations for capturing high-quality 2D videos. Unfortunately, I don’t have a headset to test with, so I have no idea if it actually renders this image to the headset or if it is only visible in the saved video…

  16. Great read and detail. Do you have any posts about how you capture through the lens? I’ve been tinkering with a few ideas, but I would love to learn from someone who has successfully accomplished it, particularly with see-through optics.

    • Thanks,

      Getting the optics to stay on is the first trick which is described below:

      • Cover one eye (it helps to have the light shield removed) and then turn the headset on.
      • Look through the uncovered eyepiece. The unit will turn on, but it will ask for a manual IPD adjustment. Double-click on the Crown Dial through the two screens to manually align the IPD (you don’t need to do anything other than double-click each time).
      • If, at any point, the cover is removed and both eyes are “exposed,” the headset will shut off. But you can “switch eyes” if you can keep at least one closed at all times.
      • I have found that it is better to cover with a white microfiber than a black light-absorbing material (it does not always work with black). Usually, I tape the cloth in place to keep it from accidentally uncovering the lens. I understand there are some other ways that I will be exploring.

      I use two tripods. One tripod has a small clamp mounted on a ball head, where I can grab one side of the headband. I get the image up and running, have the tripod clamp the headset, and then adjust its position. The second tripod has a 3-axis geared head mounted on a geared two-axis “micro rail,” on a tripod with a geared center column, to net 6 degrees of freedom for positioning the camera. I then position the camera roughly into the headset and use the geared adjustments to position the camera lens. I use a mirrorless R5 camera with relatively “thin” (front to back) prime lenses (RF16 f2.8, RF28 f2.8, and RF50 f1.8). I sometimes also use a Micro Four Thirds Olympus camera with a 17mm lens, which is closer to the eye optically but does not have the resolution of the R5.

  17. I’d like to see the same comparison done with native text rendering in the browser instead of just images of text. Apparently, Apple has a way to take optical distortion and pixel structure into account when rendering text that wouldn’t be possible when displaying a bitmap image of text. Obviously, that won’t eliminate the optical blur, but it may remove a layer or two of digital blur.

    And a small nitpick: In the “Experiment with Slightly Blurring the Meta Quest 3” image, I’m disappointed that the 1.5 pixel blur isn’t gamma correct 🙂
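    To make the nitpick concrete: a “gamma correct” blur filters in linear light rather than on sRGB-encoded values. A minimal sketch (a simple 2.2 power curve is assumed here in place of the exact piecewise sRGB transfer function):

```python
# Blur in linear light: decode, filter, re-encode. Blurring the encoded
# values directly darkens black/white transitions; doing it in linear
# light preserves the average brightness the display actually emits.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_gamma_correct(srgb: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """srgb: grayscale float array in [0, 1]."""
    linear = srgb ** 2.2                    # approximate decode to linear
    return gaussian_filter(linear, sigma) ** (1 / 2.2)  # re-encode
```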

  18. this is some of the best writing I’ve seen on the Vision Pro on the entire dang internet. Thank you!

    (also your blog on a whole is great, vibes like early 10’s, I miss that internet so much, so double thanks!)

    • You are perhaps too kind. I’m an engineer and don’t have time for a “fancy web site.”

  19. Good article – the one thing to perhaps consider is the ‘counter rotated’ panels of the Quest 3. I like these as they remove the screen door effect, but it may mean that your screenshots are monoscopic, vs stereoscopic ‘merged reality’. By this I mean that the rasterized patterns and outlines will deliberately not match in the left and right eye panels, due to the counter rotation which compensates for the physical rotation. So – we cannot really say that either the left or right screen capture matches what the user will see, since your brain will see both, with their rasterized differences, and combine the result. It is good at doing this, as it happens all the time in the real world, but it may mean that monoscopic screen captures don’t portray the image that our brains will see when looking at both left and right rasterized images simultaneously?

    I hope I described this adequately. 🙂

  20. “Yes, the effective resolution of the MQ3 is objectively better than the AVP”

    This is a really bold statement. In the end, it means the Apple Vision Pro cannot make good use of its ~2.5x resolution advantage over the Quest 3 for text clarity.

    I hope it will be confirmed or disproved by additional testers (personally, I hope it can be disproved because it is a bit of a disaster otherwise :-).

    • It may be bold, but I have put up my proof in the form of pictures. I’m waiting for someone to show otherwise. I have had several other optics people comment that the AVP optics are a bit “soft.” The AVP, after all, has to magnify its smaller display by 1.6x more in a shorter distance than the Quest 3 (or Quest Pro).

      I have hopefully been clear that “resolution” can be tough to quantify for all uses. Having more pixels can help with a “natural” relatively smooth image even if single pixel high contrast lines (as in the test pattern) can’t be resolved. Having more pixels helps with jaggies and other artifacts.

      I think most people will and are finding the AVP image more pleasant with less jaggies. But the AVP is not higher in measurable resolution.

      Compared to a physical monitor, you have a bunch of losses with a VR 3-D simulation of a 2-D monitor image. First, you have the issue of “fitting” the rectangular display inside the oval sweet spot of the optics. Next, you have the Nyquist-rate resampling, which is at least a 2x resolution hit. And then you have any optical issues. What the AVP or any VR headset gives you when displaying content like text is relatively low-density, low-resolution content.

      • It would be interesting to repeat your test now that VisionOS 1.1 is officially out. As I mentioned previously, it visibly improves the sharpness across the board, especially when casting the Mac Virtual Display. Also, the immersive environments make everything a bit less sharp. I can’t find a reference anymore, but it looks like VisionOS renders everything at a lower resolution in VR mode (that seems to include their own environments too).

        So, using 1.1 and pass through mode should result in the sharpest image currently possible.

        Overall, it seems likely that they use slightly out of focus optics too, however there is a big software component to it as well.

        I can‘t shoot pictures through the lens, but when I had two AVPs and could compare 1.0.3 with 1.1, the difference in sharpness was very noticeable, especially in Mac Virtual Display. Ditto for virtual environment vs pass through.

      • Thanks,

        I downloaded and installed VisionOS 1.1. I didn’t notice a big difference, if any, visually. I compared the “Folder View” to the MacBook mirroring, and they both looked more or less the same, with the folder view being dimmer and thinner and the MacBook mirroring being thicker and bolder, with fine objects (text and small lines) being brighter.

        I also don’t see a difference between having the passthrough or environment dialed in.

        I will try to get some pictures later today, but it is extremely difficult to get images to be the same size between setups in 3-D space.

      • I wonder if you got one of the “bad” units. I had to go through 3 different AVPs to find the one that didn’t have significant issues with clarity and distortions. That may explain why you don’t see any difference between environments and passthrough or OS versions.

        BTW, I used this trick to get the same window size consistently: I would pull the window as close to me as possible, then expand it to maximum size and center it in my view. At least for the MacBook Pro screen, it gave consistent size and placement.

        Another trick was using my 34″ monitor and resizing the window to match its size, while making sure the “shadow” was showing under the monitor.

      • Thanks for the suggestions. I use my 28″ monitor as a reference for sizing the first window. The trickiest thing is the virtual distance of the MacBook window. Every time you grab it, it changes distance, and it is very hard to tell the distance (versus size) of the window. A small amount of hand movement in and out moves it considerably. I want the two windows to be at the same virtual distance so that they don’t resize relative to each other when moving the headset/camera for alignment and size.

        I did reshoot the same image (white lines and text on black) with the 1.1 software and the same camera settings. It does appear that the MacBook window is doing less to make the lines and text “bold/bright,” which results in a little better effective resolution. But I should emphasize that this is far from definitive. It is not night-and-day different, but the text does look a little better in the photographs.

  21. Karl, there is definitely a difference in image quality between having environments dialed in and passthrough. If you can’t see that, you need to get your eyes examined. It is very noticeable for the environments themselves: try opening up Safari in Yosemite or Joshua Tree in daytime mode, for example, and you will immediately see the environment get significantly blurrier. Once you close Safari, the environment gets much clearer again. It is also noticeable for text, in a way that’s more subtle but still very real: if you dial in an environment and pull up a Safari webpage with text, then go back to passthrough mode with the same webpage loaded in front of you, you can see the text get slightly clearer. Not by a significant amount, but noticeable enough that if I were planning to work in the AVP, I would do it in passthrough mode.

    • I do see a quality difference with a bright background that I would associate with internal reflections in the optics. These reflections cause a flood of light that reduces contrast and causes lack of sharpness (sharpness is defined as “contrast at an edge”). When going back to passthrough, the background of a typically lit room is pretty dark via the cameras. Normally, when I use environments for taking pictures, I use the dark mode where the backgrounds are dark.

      Something “fun” I just noticed when looking at the current article via the AVP is that the brightness of an area changes based on eye tracking. If you scroll down to the middle of the article and size the window somewhat narrow horizontally and tall vertically, you will see multiple images with dark backgrounds at the same time. If you change where you look, an image will look brighter when you are looking at it and then get darker when you look at a different image.

      • The “fun” brightening I noticed was the AVP’s way of indicating that the image had a link if you clicked on it. So I guess it is a feature and not a bug.

  22. I wonder if the images in the AVP are foveated… That is what makes the difference in resolution, since only the foveated area is rendered with all details… What is that area when you put a camera in front of the AVP´s display? The rest of the image could be blurry and low contrast, but it is irrelevant because the fovea is the only area that could take advantage of a high resolution, high contrast image.

  23. Great work! I have a question though, which method did you use to evaluate if the blurriness is caused by the lenses or the software?

    • Thanks,

      In terms of blurriness, it was a combination of the available evidence. First, there is the judgment from my experience having shot through many headsets, plus looking in Photoshop at tens of thousands of normal photos over the years. In the longer-focal-length shots (28mm versus the more full-frame 16mm), the screen door between pixels, which is clearly visible, looks slightly out of focus. I had input from some other optics people that the AVP was a little soft. And then I had the AVP lens from the iFixit teardown, and the best focus point of the lens did not seem to match the location of the display panel.

  24. […] essay is by Hugo Barra. They were also not at all interested in entertaining the suggestion that the screens are slightly blurred, seeming to chalk it up to FUD without even reading any of the linked […]

  25. Great review. Lots of interesting info. I had an idea which probably won’t work, but I’ll put it out there anyway. Your images show that the output from a slightly blurred MQ3 looks better than the perfectly focused MQ3 on real-life photographs (while being about the same for text). The MQ3 supports prescription inserts. Would it be possible to put a lens insert in the MQ3 to achieve a similar result to your algorithmic blurring?

    • I would assume it would be possible to defocus the image slightly with inserts. I’m not sure of the overall advantage other than reducing the screen door effect somewhat.

      • The photograph you displayed looked much more natural on the slightly defocused MQ3 than on a precisely focused MQ3. I would expect it to be true that video on a slightly defocused MQ3 would also be much more pleasant to watch (vs a precisely focused MQ3).

        Hugo Barra (https://hugo.blog/2024/03/11/vision-pro/) cites your review and says this: “Intentionally making the Vision Pro optics blurry is a clever move by Apple because it results in way smoother graphics across the board by hiding the screen door effect (which in practice means that you won’t see pixelation artifacts). This is also where Apple’s “taste” comes in, essentially resulting in the Vision Pro display being tuned to have a unique, softer, and more refined aesthetic than Quest 3 (or any other VR headsets). This is certainly a refreshing approach to designing VR hardware.” and “This is the kind of thing that our hardcore VR engineers at Oculus would have fought against to the end of the world, and I doubt we could have ever shipped a “blurred headset”, LOL!”

        Well, if an insert could slightly blur the image, you could have your cake and eat it too. Those who wanted a precisely focused headset could have it, and those who wanted a slightly blurred headset could have that too.

  26. Karl, thanks for a very interesting post. I had my doubts too – when I first put the Vision Pro on, I was surprised at how little screen door effect I noticed. I thought, “Funny, the Vision Pro should have a much lower PPD in the center than the Varjo XR-3. But why am I not seeing the screen door effect as much as I should?” I think you’ve answered that question. Thanks again.

    As a side note, this is just like how old games used to make pixel art based on the assumption of cathode ray tube blurring. Of course, the resolution is VERY different!

    I look forward to more of your expert analysis. Thank you!

    *I am a Japanese speaker and I can read English, but I am not good at writing, so I am sending you a translated text using DeepL Pro. Sorry if there are any incorrect or rude wording.

  27. Your blog is a breath of fresh air in the often stagnant world of online content. Your thoughtful analysis and insightful commentary never fail to leave a lasting impression. Thank you for sharing your wisdom with us.

  28. Thank you for the work you’ve done. I really wanted to love the AVP, but experienced disappointment within the first hour of use. I just couldn’t convince myself that using the headset as a monitor provided a better VR monitor experience than what I was already getting with the MQ3+Immersed. It wasn’t even marginally better, let alone $3.5k better. Your article confirms this.
