Exploring Apple Vision Pro: A Personal Journey
After spending a week with the Apple Vision Pro, here are my first impressions of the device. While certain aspects may feel familiar to those accustomed to VR headsets, the blend of exceptional image clarity and superior sound quality elevates the overall experience to a new level of immersion.
My initial demo in June confirmed several expectations about what using the Apple Vision Pro would be like, while other aspects came as a surprise. Remarkably, there’s practically no learning curve. Mastering the essential gestures, like selecting an object or icon and pinching to execute an action, quickly becomes second nature. Watching my child use the Vision Pro with minimal instruction was fascinating. The most challenging part was teaching them to moderate their gestures, a habit carried over from other VR headsets and unnecessary with the Vision Pro, which registers the pinch even when your hand rests in your lap.
The user interface (UI) is straightforward, yet some nuances require a brief adjustment period. One example is accessing the Control Center, which remains hidden until you physically look up, not just raising your eyes but tilting your head as if peering over the top edge of the Vision Pro’s display. This gesture brings the Control Center into view, and you can use it within any app, although its features may vary slightly from one application to another.
Despite some minor inconsistencies in the UI, the overall ease of use is impressive. The device lets users dive into the experience with minimal fuss, highlighting a shallow learning curve that enhances enjoyment and engagement.
The Future of Computing Needs Users’ Investment
Using the Apple Vision Pro with a Mac presents what I consider the most challenging aspect, not because of any setup complexity (the process is quite simple) but because it requires rethinking your workflow to take full advantage of what each device has to offer. For instance, the Keynote app for Vision Pro lets you rehearse a presentation while standing on the stage of the Steve Jobs Theater. The Mac app also supports rehearsal, but in a more traditional way; although helpful, it may not help a nervous presenter gain confidence, because it lacks the emotions that standing on a virtual stage can evoke.
I shot a couple of videos demonstrating the seamless transition from complete immersion in a virtual environment to interacting with the physical world, showcasing an intriguing Vision Pro feature called EyeSight. Since my first demo in June, I’ve emphasized how the device automatically detects someone entering the wearer’s space, which promotes a sense of safety and comfort, especially in more public settings. The design also lets outsiders see the eyes of a Vision Pro user, maintaining a connection with reality and eliminating the need to remove the headset during conversations.
One of the most remarkable aspects of the Vision Pro’s immersive experiences is its effortless setup process. Unlike other VR systems where you need to define a play area, the Vision Pro eliminates this requirement, significantly flattening the learning curve. You simply put it on and start exploring. The headset’s cameras dynamically map your surroundings, allowing objects or people nearby to seamlessly pass through into your virtual environment. This feature enhances safety by preventing accidents and adds a fluid, natural feel to movement within the VR space. I’ve captured this in a video showcasing how objects in the real world can become part of the experience in a way that’s both safe and intuitive, a stark contrast to the constant boundary adjustments needed with devices like the Oculus Quest.
Amid recent discussions about Netflix’s skepticism toward the Vision Pro’s market viability, my experiences, particularly with Disney’s 3D movies and branded environments like Avengers Tower, point in the opposite direction. The potential for monetizing unique, immersive experiences, especially ones paired with exclusive content, is vast. Netflix could sell such experiences directly or include them in a premium subscription tier timed to new releases or series, offering a model for Netflix and other developers to follow.
The transition from traditional applications to those fully embracing the immersive qualities of the Vision Pro shows developers an opportunity to create highly engaging content that can command additional revenue. While using familiar apps and games with a standard controller or hand gestures is enjoyable, the depth of engagement when fully immersed in a virtual environment is incomparable, whether you’re navigating game quests or surrounded by interactive virtual objects.
It is worth pointing out that the compatibility of iPad apps with the Vision Pro presents a far superior experience than the early days of the iPad when iPhone apps were merely enlarged for a bigger screen.
I’m intrigued to see how people react to Personas, the digital avatars the Vision Pro generates from a scan of your face. To me, it’s the most realistic representation of myself I have used as an avatar. The visuals have a somewhat ghostly appearance, likely a mix of the color palette and a certain translucency in the rendering, yet it still presents a convincing depiction of a human. The real question is about practical applications: when will people choose to use it?
My own experience involved a few FaceTime calls where the caller appeared not as a small video tile but as a larger, floating screen in my space. This added a more personal touch compared to traditional computer or phone screens, though I’m not sure it’s something I’d use for every FaceTime call. The same goes for Zoom or WebEx meetings: using your avatar is an excellent alternative to keeping your camera off when you’re not in a setting, or a physical state, you deem professional. This aligns with Microsoft’s perspective on its Mesh avatars, which it positions as an alternative to turning the camera off. In professional settings, this feature can make interactions seem more polished and enhance connection. It’s important to remember, however, that the technology is still in beta; while the facial expression capture, including smiles and frowns, is impressively accurate and responsive without lag, there’s room for Apple to refine and enrich the experience further.
Using Siri with the Vision Pro was captivating because Siri’s icon floats in your surroundings, which adds a lively touch to the interaction. Requesting assistance and seeing tasks completed, then telling Siri to disappear and watching the icon fade away, created a unique sense of connection with the assistant.
As I mentioned, the onboarding process for the Vision Pro is impressively quick. Guest mode is designed for sharing the device with others without compromising the primary user’s personalization. On occasions when I neglected to use guest mode, I had to recalibrate the device for my eyes after another person used it, a sign of how sensitive the Vision Pro is to individual users. When I did use guest mode, however, the transition between users was seamless, with no reconfiguration needed.
Guest mode is more than a convenience; it’s instrumental in preserving privacy and ensuring security, given how personal the device is, much like a computer but more intimate. When sharing the Vision Pro, one must be mindful of the apps and information accessible to the guest. Guest mode lets the owner control which apps are accessible, ensuring private messages or sensitive data remain confidential. Apple also offers additional accessories for purchase to achieve a better fit, further personalizing the comfort of the Vision Pro experience.
Vision Pro And The Future Of Computing
Following Apple’s keynote in June, I shared my belief that Apple aims to dominate the future of computing, a belief that has only solidified after my time with the Vision Pro. Apple is undoubtedly aware that its current pricing will limit widespread adoption. Yet I’m increasingly convinced that the Vision Pro represents a bigger opportunity for Apple than the iPad ever did. Although adopting the Vision Pro requires users to explore and adapt to new workflows, that transition appears more straightforward than it was with the iPad. The frustration that often came from attempting to replicate Mac workflows on the iPad seems less likely with the Vision Pro, suggesting a smoother path to integration and adoption.
More Random Thoughts
Apple needs a way to unlock the iPhone with Face ID while you are wearing the Vision Pro, so you do not have to remove the headset, which creates friction.
The Vision Pro comes with two bands, and both are comfortable to wear. I found the Dual Loop Band provides more support than the Solo Knit Band, and it is the one I would choose for extended sessions.
Battery life wasn’t an issue for me. On occasions when I was connected for extended periods, I used a long USB-C cable to charge the device while using it. Honestly, for stationary activities like watching a movie, this setup was perfectly adequate. When I needed mobility, I conveniently carried the battery in my back pocket.