Wow, is Apple’s Vision Pro loaded with pixels (ieee.org)
105 points by mfiguiere on June 9, 2023 | 137 comments


The math in this article doesn't seem to add up correctly, as far as I can tell. Its calculation of the Vision Pro's PPD (pixels per degree of vision) seems way too high.

The Vision Pro has 23 million pixels total, or 11.5 million per eye, or 3,391 x 3,391 if square.

The Quest 2, by comparison, is 1,920 x 1,832 per eye, or 7 million pixels total across both.

So the Vision Pro has about 3.3 times the total pixels of the Quest 2 (23/7), but just 1.81 times the number of pixels along a single dimension (sqrt(23)/sqrt(7)). And the article quotes people saying that the field of view is comparable to other headsets.

The Quest 2 has a PPD of 18.88 horizontal and 18.69 vertical. The article also mentions the Quest Pro as having 22 PPD.

But this article is claiming that by their calculations, the Vision Pro has "50 to 70 PPD", which is supposedly in the ballpark of the PPD of the fovea which it says is around 60.

If the Quest 2 is 18.8, and the Vision Pro has 1.81 as many pixels in a dimension, then the PPD of the Vision Pro should be around 34. Nowhere near "50 to 70".

Did the authors forget that PPD is calculated along one dimension only, rather than two?

I mean, the resolution is a great improvement, but it's not "retina-level" yet. It's halfway there.
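That scaling argument as a quick sketch (numbers from the comments above; the comparable-field-of-view assumption is doing the work):

```python
import math

# Pixel counts across both eyes, per the comments above.
quest2_total_px = 7_000_000
vision_pro_total_px = 23_000_000
quest2_ppd = 18.8  # Quest 2 horizontal pixels per degree

# PPD is measured along one axis, so with a comparable field of view it
# should scale with the square root of the total pixel-count ratio.
linear_scale = math.sqrt(vision_pro_total_px / quest2_total_px)
print(round(linear_scale, 2))               # 1.81
print(round(quest2_ppd * linear_scale, 1))  # ~34 PPD, nowhere near 50-70
```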


A simpler argument: the Vision Pro OLEDs are approximately the same resolution as a 4K display but field of view is much higher. Hence the angular resolution must be lower.


Display size and distance from the eye are also variables when calculating PPD


In general yes, but the article already says that the field of view is comparable to other headsets. And display size and distance are merely used to determine field of view, so those variables are already accounted for.
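A sketch of that relationship for a flat panel (headset optics complicate this; the example numbers are hypothetical):

```python
import math

def fov_degrees(width_in: float, distance_in: float) -> float:
    # Horizontal angle subtended by a flat display of the given width,
    # viewed head-on from the given distance.
    return math.degrees(2 * math.atan(width_in / (2 * distance_in)))

def ppd(horizontal_pixels: int, width_in: float, distance_in: float) -> float:
    # Size and distance only enter PPD through the field of view.
    return horizontal_pixels / fov_degrees(width_in, distance_in)

# Hypothetical example: a 24-inch-wide 4K panel viewed from two feet away.
print(round(fov_degrees(24, 24)))  # ~53 degrees
print(round(ppd(3840, 24, 24)))    # ~72 PPD
```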


Passthrough with a virtual computer screen was the most compelling use case we found when we tested the Meta Quest 2 as a team in a virtual office environment. Having your screen projected in front of you in a virtual environment felt like the real game changer. So it really makes sense to me that Apple is going for AR with this crazy high-res display, because it gives you the ability to basically throw away all your screens and project as many as you want anywhere you like. Very exciting.


I think the real value proposition of VR in general is "like a big monitor, but it fits in your backpack and you can use it everywhere" and not the successor to smartphones.

Direct social interactions seem really... creepy. It somehow appears rude, like looking at your phone while speaking or pointing a smartphone in someone's direction while potentially filming them.

But if this device enables WFH in the garden/nature where it's too bright for laptops or installing a desktop would be impractical, now that sounds great!


Direct social interaction while wearing things on one’s face will become second nature as quickly as interactions while wearing AirPods (or a Covid mask).

This is better than a Covid mask in that even Version 1 lets people see “behind the mask”, kind of. By version 14 it’ll be so completely natural that it won’t occur to anybody on HN to even comment on this aspect of it.


This is why I find all the comments I see on here about needing a killer app baffling. We already value both screen real estate and portability. Devices already compete on both.


Let's just hope they don't half-ass it like pretty much every AR product that ever was. Maybe my standards are way too high, but I've yet to use a VR/AR that wasn't obviously just designed to make C-level execs and investors shout "gee whiz!" and little else.

I would like an AR device that acts as a monitor for my Macbook or phone, isn't so heavy that it pinches my nose or strains my neck, isn't pixely, has a wide field of view with low-to-no distortion, and overall doesn't feel like a McDonald's Happy Meal toy.


I think you’re in their target market, because by all appearances they went super high-end and engineered the crap out of this thing!


Yes except it’s built out of metal and glass, so it’s definitely not light


I am surprised they went for aluminium and not magnesium alloy. Quick google shows metal parts could be about 30% lighter.


I wonder how much it will cost to replace a cracked eyeball screen.


I'm just imagining the equivalent of "screensharing" with this setup. Remote workers could get the "over the shoulder" pairing vibe we used to have in office, by opening a "portal" over your shoulder, and both looking at the displays and turning and talking to each other. Both people could even point at the display.


Do these passthrough displays give you a variable focal distance like the world does? As in, without the goggles, to focus on my dog, I have to focus about 10 feet away, but to focus on my hands I have to focus about 2 feet away. With passthrough, is it all effectively the same distance from me as far as focus is concerned?


That’s possible though even better would be to have independent control of focal lengths for different parts of each screen. https://www.optica.org/en-us/about/newsroom/news_releases/20...

What’s hard to fathom is just how good eye tracking can be on these systems and how many hacks such tracking enables. They can, for example, spend extra rendering time on the pixels in the center of your vision and update this fast enough that you don’t notice the difference. Similarly, using adaptive focal control, the screen can always be in focus no matter what depth you focus on.


Is there much point in having different parts of the screen at different focal distances? Couldn't that be simulated by just changing the focal depth at the fixation point and applying blur for other z distances?

I guess that's easier for vr than ar and may tend to exaggerate any issues in the depth mapping.


For AR it depends on whether you can see through the screen, like Google Glass, or if it’s a VR headset with external cameras. With passthrough you can’t change the focal length of the outside world, but you want to change it on your overlays.

In terms of VR it depends on how well you can pull off the blur based on eye tracking vs how much control you have over focal lengths. A perfect version of either is probably equivalent, but doing both reasonably well is probably easier than taking just one close to perfection.


That's what I was wondering: what happens to your eyes' natural lenses? Do they need to contract to read the content, or does it account for divergence so the muscles in your eyes can relax as if the screen is 20 feet away?

Seems like that would be the case, otherwise it would cause insane eye strain. I guess Apple isn't dumb.


The Apple ones do not have a variable focal distance, though the technology exists in the lab.


The main issue isn't the resolution though - the limiting factor is the discomfort/fatigue of having this device/these screens strapped an inch from your face.


The main issue with the Quest 2 for me was the resolution. I didn't mind the headset nearly as much as the blurry text.


yeah.. a friend lent me his Pico 4, and while it is fine for games or other applications like Google Maps/Street View, I really do mind the optics. Blurry text disqualifies it for many use cases.


I'd agree. I tried one of those virtual desktop environments, and the closed environment, the dense foam, and the buildup of heat and moisture just made it uncomfortable quickly. The battery life was an issue too, with the common recommendation being to get an external battery pack on your belt with a long cable and wire it into the headset.

I think it would have to be something like immersive AR glasses on the order of glasses for simple corrective lenses to have the level of comfort necessary to not only want to use it, but use it long enough to actually be seeing a productivity increase.


I'll be buying this the moment I can. I pretty much do my entire work lying down in my bed and if I can get all the screens I want with that, it's totally worth the 3500 for me.


You can already do this with the Quest + BigScreen app w/ Remote Desktop.

You'll be fatigued from having a screen on your face within an hour.


The fatigue in the case of the quest will be because there is no eye tracking, so you constantly have to turn your head to get good focus (Only the center of the screen is fully rendered), the displays are of poor quality in terms of color accuracy, screen door effect and resolution, and the lenses have a very small sweet spot. In addition your eyes have to have one of three specific IPD measurements, otherwise you are by default not in the sweet spot.


That's an interesting theory (re: no eye tracking).


I have a screen hanging above my bed for that purpose. doing things lying on my back is easier on the back/neck


Tangent, but I'm curious: why do you do your work lying down in bed? Do you mean this is how you prefer to work?


Yeah im pretty lazy lol. If I have the option to lie down I will. Wfh has not been easy with that.


Interesting! Not a use case I'd considered. But in that case, you must feel almost like they developed this device for you personally! ;-)


Resolution is a big issue; the Quest 2's resolution is so low for me that using it as a virtual monitor for productivity was a non-starter.


But if I do the math compared to my normal desktop monitor in pixels per FOV, the headset comes nowhere near close to a normal desktop setup.

You would have to make your virtual monitors pretty big in order to have a lot of comfortably readable content and that sounds like an ergonomics nightmare to be moving your neck around all day looking all around your virtual screens.


> That should place the headset’s pixels per degree around 50 to 70 PPD.

> “The resolution of the fovea, the highest resolution portion of the eye, is considered to be 60 pixels per degree. And if you have a display like 60 pixels per degree, probably like 99.9 percent of people wouldn’t perceive the pixels”

As a comparison, Quest 2 has a resolution of 20 PPD and sitting in front of a 27" 4K monitor on a desk leads to about 70 PPD.
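A quick sketch of the viewing distance that 70 PPD figure implies for a 27-inch 16:9 4K panel (small-angle approximation at the screen center):

```python
import math

# Panel geometry: a 27" diagonal at 16:9 gives the horizontal width.
diag_in = 27
aspect = 16 / 9
width_in = diag_in * aspect / math.hypot(aspect, 1)  # ~23.5 inches
ppi = 3840 / width_in                                # ~163 pixels per inch

# One degree at distance d covers roughly d * tan(1 deg) inches of screen,
# so PPD ~= ppi * d * tan(1 deg). Solve for d at the target PPD.
target_ppd = 70
d = target_ppd / (ppi * math.tan(math.radians(1)))
print(round(d, 1))  # ~24.6 inches, i.e. a typical desk distance
```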


What's great is, if in 2023 we can buy approximately 60 PPD for $3500, imagine what we'll be able to buy in 2030 for a similar or lower price.

Even if it turns out that 60 PPD is slightly too low, and some people perceive pixels, it doesn't take a very high rate of improvement to reach 90-100 PPD in under a decade, and at that point we as humanity will always be able to enjoy basically perfect no-perceived-pixels VR and AR, which is absolutely amazing.

I would certainly like to have a Double robot streaming 8K video to my eyes from other countries. If someone could add smell sensors and smell reproduction, it would almost be like being there. And something like Starlink might eventually make it possible to do the 8K streaming from absolutely anywhere, regardless of local infrastructure, although limiting it to certain wifi-enabled zones would work meanwhile.

Now that I think about it, I actually have a business idea to take that even further, but unfortunately it won't make sense for another 10-20 years until this technology becomes more widely used.


>it doesn't take a very high rate of improvement to reach 90-100 PPD

everyone that worked on these for however long it took just reached for their virtual pitchforks and is looking your way


One of the complaints about remote assistance is the missing olfactory cue needed by remote surgeons and the like. I am sure some company has it in the pipeline, but given how Apple will likely always have their "not meant for medical use" in their TOS, it is unlikely to be an Apple priority. Still, their accessibility crew may make some noise given the number of people who have lost their sense of smell.

With any luck, whatever 3D picture/video format emerges will be open and include roving geolocation, compass direction, inclination, smells, humidity, wind speed/direction, light sources, and more. Saving more metadata now should encourage technology to take advantage of it later.

The complete lack of cellular/satellite announcement is not too surprising as Apple computers and CarPlay also suffer the same lack. Apple pinches pennies in the weirdest places.

There is an opportunity for widespread Vision Pro adoption even at this price point so perhaps get started on the business idea. As a display alone there is high success potential as long as Apple does not feel the need to knee-cap the future of Vision Pro by abandoning the shared display market to others. It has nearly the resolution of a Studio display ($1599 at 14.5 million px vs $3500 at 11.5 million px/eye) and can emulate multiple monitors at once. Apple will want to control the experience, but personally I am hoping they quickly jump to WiFi 7 and become the central home hub for multiple PCs, consoles, and headless/clamshell Macs with multiple desktops each.

Desktop PCs and Phones are hardly social to start with so I think people complaining about isolation with AR are overstating their case. I expect most will just take this off like putting down the phone for conversation. Apple has an opportunity to create unique environments, increase personal privacy for all screen based devices, and champion compatibility if they cut back on ego a bit. Here is to hoping they open source some sort of easy pairing/streaming/VR protocol that limits frames/resolution based on attention/distance and the console makers adopt it with Find My in every controller and everyone makes a bundle doing it. One can dream.

Eventual Thunderbolt ports (for a cellular dongle, fast media backup, a head-mounted lamp, charging controllers while playing, etc.) would also be nice; if they don't add flashlights and cellular options to future models, that would be my main current request.

The first AirPods were great, yet the original Apple Watch and iPhone were bare-bones, so I will keep my expectations low. Here's hoping Apple grabs every opportunity as quickly as possible to increase adoption, as the competition will be fierce. As long as they are not too patient with the long game, I think even five years out there should be a good market for a lighter, plastic-lens, no-front-screen, non-Pro version for casual users.


Apple won’t put cellular in anything new until they think their own modems are good enough.

Apple had a years-long lawsuit with Qualcomm which, to be extremely reductive, had as its main goal “stop charging us a licensing fee that’s a percentage of the final sale price of the device, you should charge a fixed price per chip”.

Apple essentially gave up when they realized Intel was years away from making good enough modems, bought Intel’s modem group, which has been chugging away since then making still-not-good-enough modems.


I am not a vision expert, but simply matching the resolution of your display to the resolution of your sensor wouldn't necessarily produce a clear image, unless the pixels are aligned to the sensing elements.

By analogy, if you resize a 1025x1025 image to 1024x1024, it's usually going to look bad.


Interesting, at what viewing distance is a 4k monitor 70 PPD?

From what I've seen the common advice for monitors is that 5k is the ideal resolution for a 27" screen and 4k is a little bit less sharp if you're looking closely.


Apple's displays basically hit the mark exactly.


That's why I'm curious just how good the resolution can be on the Vision Pro. If it takes a 5k monitor at, say, 2ft away which covers maybe 30 degrees of your field of view vertically to be truly "retina", then surely a 4k display 3 inches away from your eye that covers ~120 degrees of your field of view is not quite there.

But you also have lenses that are stretching out the screen to cover your periphery where it can be much less sharp so it's not exactly comparable.


As another comparison the Vision Pro has 23 million pixels while a Virtual Boy has 384. Not the best comparison, but it's the only VR headset I've used.


Have to call you out on that one:

> The [Virtual Boy] display consists of two 2-bit (four shade) monochrome red screens of 384×224 pixels

That would be about 86K pixels per screen, or 172K total, for the Virtual Boy.


That’s the perceived resolution, which is different from the number of physical pixels, but you’re probably right that the physical pixel count is the better comparison.


Interesting, this made me research PPD. Apparently, 2 feet away from my MBP, my PPD is ~100 (per ChatGPT). So I suppose the Vision Pro will be roughly 2/3 the resolution of what I currently experience today (which is phenomenal)

  import math

  # One degree at viewing distance d spans 2 * d * tan(0.5 deg) inches
  # of screen; multiply by pixel density to get pixels per degree.
  distance_to_screen_inches = 24
  pixels_per_inch = 254  # MacBook Pro Retina density
  ppd = 2 * distance_to_screen_inches * math.tan(math.radians(0.5)) * pixels_per_inch
  print(ppd)  # ~106, i.e. roughly 100 PPD


> Vision Pro packs a pair of 1.41-inch micro-OLED displays that combine an OLED frontplane from Sony with a silicon backplane from chip foundry TSMC. Each pixel measures a mere 7.5 micrometers in size—similar to the diameter of a human red blood cell. “It is by far the highest resolution micro-OLED on the market,” says Young.

Crazy stuff.


You’ve heard of Retina, now comes Red Blood Cell


It's actually lower res than retina, given that you hold the retina-branded devices much much further away from your eyeballs.

It's a 1.0, and it's state of the art. We won't have retina-level AR/VR for a few more years yet, and a few more years after that for it to become affordable in consumer devices.


I know people are very accustomed to miracles these days but I still find the fact that these guys are comfortably doing foveated rendering kind of crazy.

Anyone here work on that stuff and know how it is done in practice? Is it just things like AA and post-processing that are cranked up in the foveated region, or is it also using simpler meshes and whatnot? The former I can easily believe, the latter sounds really hardcore


I assume the real challenge here is preventing some kind of "pop" when a pixel leaves or enters the eye's area of focus. Any kind of change in the periphery of your vision would be distracting.


Saccadic masking probably comes into play: https://en.wikipedia.org/wiki/Saccadic_masking

(brains "drops frames" while the eyes move, so "popping" during eye movement probably won't be perceptible)


It’s probably rendered at a lower resolution outside of the focused area, so you don’t have fine detail, text becomes blurry, etc.

The quest does this but without eye tracking, whatever you’re facing toward is higher detail than stuff toward the edges.

Rendering 23 million pixels at full resolution and high frame rates would be too much for an M2 GPU, but most of your retina can't see that fine detail, so you don't need to.
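A back-of-envelope sketch of why that helps (the 20-degree full-resolution region and the 4x peripheral downsampling are assumed numbers for illustration, not Apple's actual pipeline):

```python
import math

total_px_per_eye = 11_500_000
view_deg = 100   # assumed field of view per eye
fovea_deg = 20   # assumed full-resolution circle around the gaze point

# Fraction of the view covered by a 20-degree circle inside a 100-degree square.
fovea_fraction = (fovea_deg / view_deg) ** 2 * (math.pi / 4)

full_res_px = total_px_per_eye * fovea_fraction
periphery_px = total_px_per_eye * (1 - fovea_fraction) / 4  # quarter resolution
shaded = full_res_px + periphery_px
print(f"{shaded / total_px_per_eye:.0%} of the pixels actually shaded")  # 27%
```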


The question is how much of rendering actually happens in M2 gpu, and how much is delegated to R1.


From how they described it I don't think the R1 does any rendering, it's for handling input from a dozen cameras and other sensors within 12 ms.

The one rendering related thing that I'd guess can be offloaded to that is warping the passthrough camera frames into whatever form they need for correct optical alignment.


Came here to write the same thing. How have Apple pulled this off? I remember watching one of Meta's researchers describe the extreme challenges, such as the fovea wobbling as it moves. They made it out to be unachievable any time soon, and yet here it is.


And yet I'm left wondering why the investors had such a bad reaction to its launch? Early signs point to this being a technological marvel, and despite the price tag I think it'll have useful applications in businesses with deeper IT budgets. You won't give one to everyone on the floor, but there's likely a case to having a small set of these.

It reminds me of the "Neural SubNet" from Deus Ex.


If you are referring to the stock price, I'd guess there was a lot of run up anticipating an unknown great announcement. Once it was announced, the "fear of missing out premium" went away and the stock went down just a bit.

The other factor is probably that the high price made everyone realize Apple is really targeting devs and early adopters. They aren't ready to sell one of these to every person who owns an iPhone... at least not yet. So regardless of how promising the technology is, they probably won't start making money from it until some point in the future.


This is the “Vision Pro”. I suspect they’ll come out with a lower priced “Vision” in the future that is missing some of the features but allows people to gain access to the product ecosystem.

Apple might also simply release this model Vision Pro as the Vision in a few years when they release the next Vision Pro generation, something like they’ve done in other product lines.


The lightweight consumer version will be called Vision Air, a.k.a. Visionaire.

You know, for visionaries. ;)


Because the Oculus Rift was demoed in 2012, and in the more than 10 years since then, there has been not even an ounce of indication of mainstream adoption of anything VR- or AR-related. Actually, more than that: the last company that massively invested in it, with bottomless pockets, has nothing to show for it besides failure and ridicule.

It seems insane that CEOs of the world's biggest companies don't seem to understand the very simple fact that nobody wants to wear a fucking headset all day long.


Indeed. I bought a Valve Index and it's great. But other than Half-Life: Alyx there haven't been any other AAA titles that target VR; it's all Beat Saber-level niche games that are fun for a few goes then get old (for me at least).

As you allude to, there is no killer app for VR after a decade. Maybe the "Apple factor" might bring some more devs and some fresh ideas, but at the price point it's at, you're still at "the market is too small" for most companies to invest in.


It isn’t about wearing it all day long. It’ll have a set of applications, just like the phone and tablet and laptop have sets of applications, with some overlap.

The difference here from Meta is that Apple isn’t selling a dystopian future where we all spend our time in a meta verse with a “fucking headset all day long” - they are explicitly showing it as in use for certain applications as part of - rather than replacing - reality.


There is a really compelling use case for VR that is under-exploited due to low resolution and heat output of various non-apple headsets:

Driving video games.

Driving in VR is 1000% better than on a 2D screen; you can actually judge braking distances, and you feel like you're actually going through the corner.


Even taking this claim at face value, this is too niche of an audience to make VR compelling for investors.


I don't know about the absolute numbers, but my general sense is that the market for harder-core flying, driving, and other simulators has actually declined, at least as a proportion of the overall market, over the past 10 to 20 years.


Also flying. Flight simulators and space simulators are a blast in VR.


Adoption sometimes takes that long, we just always forget the predecessors to the devices that made it big (eg the worldwide success of mobile phones was not overnight, and was limited to business people and the rich at first).


There's nothing especially valuable about a technological marvel. Applications matter. Applications people pay for, or can be served ads alongside. That market is untested.


Everything about this seems an in-home thing. Certainly the announcement didn't suggest otherwise--including Disney's presence.

So more VR than AR? Not walking around with magic fashionable glasses that tell you things? I can imagine interesting aspects of AR but I'm not sure this gets to the starting line. (Other than maybe a developer platform)


I think the pitch is in-home and in-office (eventually this will be the same thing for more people). People might use it on a flight or commute but I don’t see the key pitch being anything like walking the street getting AR map directions etc.


From the flight/commuting perspective it seems very appealing because it lacks the ergonomic issues of other devices, but the battery life is a bit of an issue.


Power bricks are pretty cheap and easy to carry. And many flights have USB charging plugs. I reckon they'll have more or larger power bricks as an up-sell.

And I agree - laptop on a plane wrecks necks.


I can vouch for this.

On my first trip to most antique Corinth, someone noticed the fine Casio timepiece from my last (Reagan-era -- ca. AUC 2750s, I think -- or do I reckon now from the founding of Christ?) trip. When the watch was seen to "move", I explained the function of the band by analogy to a water clock (a clay vessel with a small aperture near the bottom to let water drain) and compared the water to the (non-visible) battery inside the Casio. I even went into elaborate explanation about the special symbols for numbers which mark the hour and how they are distinct from the symbols for making sounds for speech (e.g. "sure, '1' is rather like 'alpha'...").

Sadly, it was a very clever Greek to whom I explained all this. Slightly irritating were the questions about why it was on my wrist in the first place -- something something WWI. But my heart sank during the debrief when I came to really understand the time-travel problems which were exploited to near extinction by ancient sci-fi authors.

Happily, those (extremely antique) courts found it so much more cost-effective to cut off an advocate when the water drained from a clay vessel than to invest the wealth of the city-state into the production of a -- dare I say 'Byzantine' -- contraption wrought of too much brass and requiring too many servants to operate?

I guess the real lesson here is that while everyone has been conditioned to be distrustful of the Greeks, my personal experience has been that they are pretty solid folk.


> And yet I'm left wondering why the investors had such a bad reaction to its launch?

AAPL made a new all time high this week and yesterday it set a new all time high closing price.

Selling off 3% after running up to a new ATH prior to WWDC is not a bad reaction, lol.


Agreed that a 3% sell-the-news drop is minor.

But worth noting, in an era of higher inflation and stock buybacks, that probably the relevant metric for investor happiness is real enterprise value (that is, discount market cap by net cash, then adjust for inflation).

By that measure they’re pretty far from the pandemic peak.


Investors look at business potential and might not understand or want to bet on this market yet, it’s very much untested and unproven. If you want to know if it’s any good, likely better to look at tech publications.


> it’s very much untested and unproven

I feel like I'm reading cryptobros saying "we're still early".

It's not a new market, it has existed for more than 10 years, and it is a micro-niche market.

And I'm saying that as a guy that has a Quest 2.


> It's not a new market, it has existed for more than 10 years, and it is a micro-niche market.

So were smartphones before the iPhone. Sure, some people had a Blackberry. But now _everyone_ has a smartphone.


I would argue investors expected/hoped for some kind of application that is useful and would depend on wearing a headset.


I think the demo looked so simple that most viewers saw it as no big deal. The highlight of the demo for me was the hand tracking. The labor that went into the product is not reflected in the demo.


Human angular resolution is often quoted at ~1 MoA (the estimated angular resolution of this display), but I suspect this is one of those misleading underestimates. Certain types of visual acuity, like Vernier acuity, are much tighter, on the order of 0.1 MoA. I expect we won't "solve" HMD resolution until we can squeeze out another 2-3x the angular resolution.
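The arcminute-to-PPD arithmetic behind that, as a sketch (the 100-degree field of view is an assumption for illustration):

```python
# 1 degree = 60 arcminutes, so resolving X arcminutes needs 60/X pixels per degree.
arcmin_per_degree = 60
standard_ppd = arcmin_per_degree / 1.0  # ~1 MoA acuity    -> 60 PPD
vernier_ppd = arcmin_per_degree / 0.1   # ~0.1 MoA Vernier -> 600 PPD

fov_deg = 100  # assumed horizontal field of view
print(int(standard_ppd * fov_deg))  # 6000 pixels across at 1 MoA
print(int(vernier_ppd * fov_deg))   # 60000 pixels across at 0.1 MoA
```

Matching Vernier acuity outright would be a ~10x jump; the 2-3x figure above targets something well short of that ceiling.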


What does the Apple Vision Pro offer in terms of expanded vision? Can I zoom in on something while wearing them? Can I use them to see into the Infrared spectrum? What about decoding QR Codes in real time and displaying the metadata?

One use case for me would probably be to wear them around the house (or at the grocery) and expand the text on labels. Being over 50, my presbyopia forces me to constantly have my glasses available. If I used AVP I wouldn't need glasses, right? What about hooking it up to my Tesla so that I can drive with them on and not have to look down to see my current speed? Talk about a HUD!

Or what about expanding our natural capabilities? FLIR cameras are getting pretty reasonable; could there be an add-on for detecting heat patterns? Or what about visualizing ultrasonic waves; like in "The Dark Knight"?


I imagine the way these work is that there’s a fast hardware path from camera to display, with the OS alpha composited on top. Any kind of post-processing of the pass-through video will introduce latency. So my guess would be no zooming.


I doubt it - the cameras are not at eye location. I expect they employ a similar solution to the Quest, which has no front-facing cameras at all - it builds a 3d model of the scene on the fly using photogrammetry on the corner cameras (if Apple do this they'd avail themselves of the built in LIDAR) and folds it into the rest of the rendering stack. This has theoretically better latency for the important part (head movement) than piping cameras directly into the displays, because it's perfectly synchronized with the display refresh without having to wait for a camera to complete a frame. I expect Apple does something similar.


This isn't what Quest does. Photogrammetry isn't that good (nor that fast). They just warp the images from the down/outward-facing cameras.


Well sort of. It's a depth-accurate warp, fed from the SLAM system. This is functionally equivalent to texturing a depth map with the cameras. It has to be this way because there's no other way to simulate having a camera in a different spot than it really is. If they were simple 2d warps, parallax, occlusion, and stereopsis would be all wrong, and you'd feel violently ill any time you moved your head. You can see the system break down if you bring your hand too close to the headset. You're certainly not getting raw passthrough images.

I'm not intimately familiar with the precise technical details of the Quest system, but I do know that real time photogrammetry even more sophisticated than depth maps can be achieved on mobile chipsets - google LSD-SLAM.


This makes a lot more sense, thanks for elaborating.


We have to stop talking about resolution only in terms of being able to see pixels. Not being able to see pixels is not the final limit to human vision. If it was, 4K monitors and TVs would never have become a thing. There are other aspects, like seeing aliasing or the blurriness introduced by antialiasing, that are a particular challenge for small text and perceptible well beyond the point where you can spot individual pixels. It's incidentally also something Apple pointed out as the strength of the Vision Pro. So I wonder, how does the perceived sharpness compare to other Pro-branded products by Apple?


Do you have an authoritative source to back up that claim, with examples that separate the effect you're claiming?

When I can't see pixels, it sure seems to me that I've reached the limit. Antialiasing isn't blurriness once you can no longer distinguish pixels, it's just how vision works. The "blurriness" of the source matches the blurriness of the eye so any additional sharpness is a waste.


I can make out pixels on 4K monitors, let alone 4K televisions. If you believe pixels stop being visible at some pixel density below a 4K television's, you may have bad eyesight and not have realised, because it's not "need glasses to be able to read" bad.


I felt let down by my 5K/218 ppi screen when most of my messaging apps can't render my native language clearly at default text size. My next one will have mobile-like or higher ppi counts.


> Not being able to see pixels is not the final limit of human vision. If it were, 4K monitors and TVs would never have become a thing.

Can you expand this idea? I thought 4K became popular because it was easy to see pixels at lower resolutions.


You're quite correct, in a movie theater or on a large TV the pixels in 1080p content are very easy to see. 4K is precisely for these cases.


This is still based on Ross Young's earlier tweet of 4,000 PPI and 1.41 inches, which we now know may be wrong. It should be closer to 3,400 PPI.

It may be the highest-resolution micro-OLED on the market, but micro-OLED [1] can go quite a bit higher than that, well above 4,000 PPI.

And I know some people are pissed about the pricing. But I would not be surprised if these micro-OLEDs cost $300 per piece, or $600 in total. And I believe even $300 is a very conservative number; it could be much higher.

[1] God I hate this term, as if it was intentionally named to create confusion with microLED. It used to be called Si OLED or Silicon OLED.
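For what it's worth, the ~3,400 PPI figure is easy to back out from the reported 1.41-inch diagonal and the ~3,391-pixel side assumed elsewhere in this thread (treating the panel as square, which is itself an assumption):

```python
import math

diag_in = 1.41            # reported panel diagonal, inches
px_per_side = 3391        # assumed square pixel count (sqrt of ~11.5M)

side_in = diag_in / math.sqrt(2)     # square-panel assumption
ppi = px_per_side / side_in          # ~3,400 PPI
pitch_um = 25400.0 / ppi             # pixel pitch ~7.5 micrometers
```

A ~7.5 µm pitch is also consistent with the "size of a red blood cell" comparisons being thrown around.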


Just wanna say the first thought I had when I read this is that these tiny pixels will soon end up in our bodies, just like forever chemicals, when these devices get thrown away.

They're claimed to be the size of blood cells.


As someone who has had a couple bad back and neck injuries, the idea of being able to position hi-res displays anywhere in space is very attractive. $3500 would be totally worth it if I can tolerate it for the day.


> Meta’s Quest Pro delivers 22 PPD

Wow, the resolution + lens solution on the Quest Pro was already really close to workable for streaming your computer screen. I imagine it’s a solved problem at 60 PPD.


Nreal Air does 49 ppd.

It's passable, but not all day usable. It feels closer to working on a 720p projector than a virtual screen.

I suspect the turning point will be around 100 ppd.
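For anyone wanting to sanity-check these PPD figures: along one dimension, PPD is just pixels divided by degrees of field of view (ignoring lens distortion). The panel widths and FOVs below are rough assumptions pulled from this thread, not official specs:

```python
# One-dimensional PPD: pixels across divided by degrees of FOV.
def ppd(pixels_across, fov_deg):
    return pixels_across / fov_deg

quest2 = ppd(1920, 102)       # ~18.8, matching published figures
quest_pro = ppd(1920, 88)     # ~21.8, near the quoted 22 PPD
vision_pro = ppd(3391, 100)   # ~34 if the FOV is really ~100 degrees
```

Note that ~34 PPD for the Vision Pro only holds if the FOV assumption does; a narrower FOV would push the number up toward the article's claims.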


I know I've probably missed it, but does the Vision Pro have a video input, or is it limited to programs it runs on its onboard computer and streaming?


Apple's speakers don't have any audio input or standard Bluetooth support, so it's safe to say that the Vision Pro will show only Apple-blessed content, the way Apple wants it.


I agree, and not just for this reason. Apple isn’t the kind to add more ports unless absolutely necessary. A video input would limit the user’s ability to move around, and is not something Apple would do.

I’m sure there are people at Apple who are terribly unhappy with the Vision Pro’s external battery pack, connected by a cable and port, as well as with the ability to connect to external power (I recall there was some mention of external power connectivity for usage longer than the two hours the battery pack provides). This is one of the first things Apple will get rid of by the time the third generation is released.


It has a feature called Mac Virtual Display, which I guess creates a large screen in space that mirrors your MacBook. But I’m not sure about a more general-purpose input than that.


An easy guess is that it's similar to AirPlay with an Apple TV, but that's just a SWAG on my part. It works well enough on my 65" 4K TV. I don't watch features or anything (that's the point of the Apple TV), but the shared/extended monitor looks fine from my couch.


You can project your Mac’s entire screen in virtual space.


But what protocol is used?


Probably something similar to AirPlay, considering that it is also limited to sharing just one screen.


I didn't see any mention of it in all the material I've seen so far. Seems like a missed opportunity for applications which require external hardware but lower latency than streaming across wifi.


It doesn't seem all that good to me.

Assume about a 100-degree FOV, and that you're using the headset to watch a big virtual movie screen spanning a 36-degree FOV (typical THX).

Then, if the display is 3,391 pixels wide, the virtual movie screen only gets a resolution of around 1,220 pixels across, or approximately 720p.
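Spelling that arithmetic out (all figures here are thread assumptions, not measured specs):

```python
display_px = 3391        # assumed horizontal pixels per eye
headset_fov_deg = 100.0  # assumed horizontal field of view
screen_fov_deg = 36.0    # THX-recommended movie-screen viewing angle

# The virtual screen only gets its angular share of the panel.
screen_px = display_px * screen_fov_deg / headset_fov_deg  # ~1,221 px
```

So a THX-sized virtual screen gets roughly a 720p-wide slice of the panel, which is the point being made above.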


The semiconductor industry has been making pixels much smaller for a very long time. Not for displays, but for camera sensors. The reason why they didn't make them for displays is simply that there wasn't an application that required these kinds of resolutions.

So from a technical point of view, this isn't really that groundbreaking.


CMOS sensor pixels and LED output pixels aren't the same type of circuitry at all.


Modern CMOS sensors and OLED displays are quite similar, actually. The high-end CMOS sensors that operate at the pitch used in these OLEDs use individually wired pixel 'cells' to capture and convert photons into electrical signals. Those cells/pixels in the sensor are wired up in rows, just like the pixels of an OLED, and each pixel transmits 3 or 4 signals along the row, just as OLED pixels receive them. And in fact, each cell of the CMOS sensor is built around a photodiode, which is a lot like a light-emitting diode, except that it works in the other direction, converting photons into electricity.

Then, just as an OLED has transistors along its rows and pixels for timing and signaling, so too does a CMOS sensor exhibit a 'rolling shutter' effect, caused by its pixels being sampled row by row, column by column, just like an OLED display draws its pixels.

Here are a few sources in case you'd like to learn more. But clearly, if a CMOS sensor of a given resolution is constructable, the only thing preventing its OLED counterpart is the organic chemistry driving the pixels. That's a large reason OLEDs are better than LCDs for fine-pitch displays like this: they can be made this small.

https://isl.stanford.edu/~abbas/ee392b/lect04.pdf https://isl.stanford.edu/~abbas/ee392b/lect05.pdf

I couldn't find perfectly similar cross sections of an OLED, and the wiring will of course vary a bit, but not enough to blow past the limitations of CMOS's precision lithography process.

https://www.researchgate.net/figure/Cross-section-of-the-pix...

And to give you an idea of how far that can go, Sony has a 61 MP sensor that's roughly the same physical size as these displays, but basically 5-6 times as dense, with photodiode-based pixels instead of OLED-based ones. Maybe we can't make a 61 MP display that size yet, but we can clearly make a 4K-class one (which at 3,391 x 3,391 is about 11.5 MP).


Fair point, AMOLED is basically just an inverted CMOS sensor. I was thinking of PMOLED, but I guess these days that's probably less common.


You have to look at it from a fabrication perspective. Both CMOS cells and LEDs aren't new, in fact LEDs are older. They can both be mass produced in an array using a very similar process. From a semiconductor fabrication perspective, the LEDs in your TV or smartphone have a laughably low resolution.


This is the most well executed vision of AR (spatial computing whatever) I've seen and yet, it's still a fundamentally flawed concept. I am simply not going to casually hang around my house with a pair of goggles strapped to my face. I am also not going to put in a full 8 hour work day like that.


I’m sure with all of the sensors it would be technologically possible, but I’d be interested to see if there is an augmented dark mode (dim the lights in a bright room) or night/low-light vision mode (use the IR depth sensors to add light into a completely dark space)


On an unrelated note, I am waiting for the horror game where you walk around your house in high-res AR and random creepy stuff starts materializing out of nowhere.


Wow! What are they going to do with so many pixels!?


Achieve photorealism.


Can the Vision Pro do "normal" VR too, or is it only augmented reality? Like, can I actually play games on it that aren't weird App Store games (i.e., Steam games)?


The question with games is how you're going to control them. Hand tracking isn't a good substitute for the standard physical VR controllers, especially if you're using something like Virtual Desktop to stream existing SteamVR games, which are specifically designed for those controllers.

In theory you could crudely rig it up with a set of Index controllers, if you can stomach adding another $580 of gear (controllers + two lighthouses) to your $3500 headset.


> The question with games is how you're going to control them

The lady in the presentation just used a regular game pad.

I don't know why people keep thinking that the only way to use this thing is with hands, when the presentation showed several different input devices, including Bluetooth keyboard and trackpad.


The game they were playing with a regular game pad was an iPad game, not a VR game. Nearly all VR games are built for VR controllers like you get with the Quest, and those are something that Apple themselves would have to facilitate for it to not be a massive hack.

The industry already went through this phase, the very first Oculus Rift came with a regular Xbox controller and that idea was quickly abandoned. The Quest has hand tracking but it's treated as secondary to the set of controllers it comes with.


A gamepad is a terrible way to control a VR game.

You need individual hand motion controllers at a minimum.

Look at all the other VR headsets out there. Look at the Valve Index controllers for example.


Apple has recently added support for Xbox controllers to macOS. Optimistically, that could have been a warm-up before adding support for Index controllers.


I can't see Apple officially adopting Index controllers; the need to semi-permanently install base stations in the play area is very un-Apple. Maybe they'll make something like the Quest controllers for the gen-2 product once they get the feedback that hand tracking is kind of terrible for gaming.


I'm going to predict a controller will come later, much like the first iPad didn't come with the notebook keyboard or Apple Pencil.


Apple's software can generate a virtual model of your hands. If they manage to track your hands well enough, that could be used for games.


It could be used for some games, particularly simple ones without much need for precision, but I don't see how you could generalize that to all existing VR games. Take the common case of aiming a gun: if you mime the action of a trigger pull with just hand tracking, there's zero tactile feedback as to where the actuation point is, and if your gun hand twists the wrong way or your other hand occludes it, it's just going to fail, because the headset literally can't see what you're doing. It's not magic.


It sounds like it will support both AR and VR use-cases, with smooth transitions in between by detecting people around you or through the digital crown.

They also announced official support in the SDK for CrossOver (Wine). [0] It could be part of a strategy to support existing VR apps for Windows.

[0]: https://news.ycombinator.com/item?id=36223927


It's not just augmented reality, it can theoretically do normal VR as well. But it's not a headset compatible with outside computers. It's a computer (and battery) and a headset all connected together. So unless Apple allows new/existing VR games onto their proprietary ecosystem, you won't be able to use the VP to play them.


Yes. "Digital Crown" dial is used to set how much "reality" is let in.


Yes I think so. I remember them showing someone playing a basketball game in the presentation


Who manufactures the display?


This was in the article:

> the Vision Pro packs a pair of 1.41-inch micro-OLED displays that combine an OLED frontplane from Sony with a silicon backplane from chip foundry TSMC


So basically this is great for watching movies. Or porn.



