In this series of articles, I’m analyzing how AI is impacting my job as an XR developer and entrepreneur. In the first episode, I discussed how I’m using AI in my development activities. Today, I want to explore the broader implications of how AI & XR can work together. We often hear that these technologies are a perfect match, and at the latest AWE, Ori Inbar even went as far as saying, “There’s no AI without XR.” But are these two technologies really that good together? Here are my thoughts.
AI & XR are a great match
The short answer to the previous question is: yes. While many people (and, unfortunately, also investors) see AI and XR as two different technologies competing with each other, they are actually technologies that can truly empower each other. Saying there’s no AI without XR is a bit extreme (after all, we all currently use AI on a PC or a phone), but for sure, AI and XR are a match made in heaven (which was also the title of a session of the AWE Nite Florence I did with Cecilia Lascialfari). Let’s see why.
XR gives the context to AI
When you write a prompt to ChatGPT to explain what you want, you need to tell it all the context surrounding your question. The context may be the application you are working on if you are coding, or what machine you are trying to fix, and why, if you are sending a photo to get technical assistance. The context may even be something big: if you want ChatGPT to predict what your body will look like 10 years from now, you would have to supply it with all the data about what you eat, how you behave daily, who you hang out with, how you sleep, and so on.
The good thing about XR is that it comes with glasses or headsets that you wear on your face. These devices have cameras and microphones that can potentially record everything around you. This means that the context is often implicit. If your car breaks down, you can just open the hood and ask the AI, “How do I fix this?”. You don’t have to write down all the details of the car, what is not working, and so on: you just look around, and you’re done. Potentially, if we forget for a second all the battery and privacy problems, the glasses may constantly record what you are doing, so they may figure out by themselves that the car is broken, and may even use the recordings of the moments before the breakdown to check whether there were any cues about the damage (e.g., there were weird noises for 5 minutes before the engine broke, and from the type of noise it is possible to infer what the problem was).
You may argue that you could get similar context with a phone, and this is true. But first of all, you usually keep your phone in your pocket, or you use it with the camera facing down, which means you do not automatically have the context. There is also a matter of discretion: some people suggest that you could use smartglasses with AI to recognize people at business events. If you do that with your glasses, you just look around, and it is very discreet. If you take out your phone to scan someone’s face, it is cringeworthy as hell.
Then, to use your phone, you need your hands. Try fixing your broken car with the phone in one hand and a screwdriver in the other. With glasses, you can operate hands-free and keep AR suggestions about what to do always in front of your eyes.
This is all great, but of course, there are huge privacy concerns to keep in mind: if I wear glasses that constantly record my life, the AI assistant indeed has a lot of context, but the advertising services of the company owning the glasses also get a lot of context about my life… In this regard, I find Pico’s approach interesting: while its enterprise headset gives access to the raw camera images, on the consumer one the company is proposing a system where the developer of an application cannot access the camera frames directly; instead, the developer asks the OS to run an AI algorithm on the frames and gets access only to the results. This is a good way to achieve higher privacy while still empowering XR with AI.
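To make the idea more concrete, below is a minimal sketch of how such an OS-mediated perception API might look from the developer’s side. This is purely illustrative: Pico has not published an API like this, and every name here (OSVisionService, Detection, and so on) is invented for the example.

```python
# Hypothetical sketch of an OS-mediated perception API (all names invented).
# The key idea: the app never touches camera pixels; it asks the OS to run
# a model and receives only the inference results.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g. "car engine"
    confidence: float   # 0.0 to 1.0


class OSVisionService:
    """Stand-in for the privileged OS layer that owns the camera."""

    def run_object_detection(self) -> list[Detection]:
        # A real OS would capture a frame, run an on-device model,
        # discard the frame, and return only the results.
        # Here we return canned data so the sketch is runnable.
        return [Detection("car engine", 0.93), Detection("wrench", 0.81)]


def app_logic(vision: OSVisionService) -> None:
    # Application code sees labels and scores, never raw camera frames.
    for det in vision.run_object_detection():
        if det.confidence > 0.8:
            print(f"Detected: {det.label} ({det.confidence:.0%})")


app_logic(OSVisionService())
```

The privacy win of this design is that the trust boundary sits at the OS: only the platform holder ever handles the raw images, and third-party apps get structured results they can act on.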
The smartglasses opportunity

Since glasses are so good at giving context to AI, I think there is now a big opportunity for developing AI applications for smartglasses. Some smartglasses already offer an SDK, and it was great to see what people were able to build with Mentra glasses at a recent hackathon, or with the Brilliant Labs Frames (by the way, Brilliant Labs is launching a new device these days, so stay tuned…). The big dogs, Meta and Google, are for sure working on SDKs for glasses too, and Google has already announced that a preview of its SDK is coming before the end of this year.
I see this greatly impacting my work: if smartglasses continue to sell well, as is happening now, I will for sure get more requests to develop AI agents for glasses, both for enterprise and consumer use. Unless there is some drop in the sales of these devices, I’m pretty sure I will have something to do with them in 2026.
I see this happening with glasses, but not as much with MR headsets like the Quest. There are many reasons for that: headsets are pretty bulky and more suited to short sessions than long ones. Quest passthrough quality is good enough for MR applications, but not good enough that you would keep the Quest in passthrough mode all day. Plus, I would never operate something that has the potential to hurt me (e.g., a screwdriver or a big knife) relying only on the Quest passthrough view. Glasses are lightweight, you can keep them on for many hours, and you can trigger AI only when you need it. If you had to find your Quest and turn it on every time you needed AI, there would be too much friction, and you would be better off using your phone directly.
There are still cases where AI can be useful with MR headsets, too. For instance, some activities you do in mixed reality in your room may be powered by AI. With camera access, the Quest may recognize the objects in your room and, for instance, overlay on them their translation in a language you want to learn. Or the Quest may recognize your actions while you train on a physical machine in mixed reality and guide you through performing them. Various applications are possible.
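As a toy example of the language-learning idea, here is a rough sketch of the loop such an app could run. The headset APIs shown (detect_objects, spawn_label) are invented placeholders, not the actual Quest camera-access API, and the translation table is just a stub standing in for a real translation model.

```python
# Sketch of an "AI language-learning overlay" loop, assuming a hypothetical
# headset API that exposes detected objects with world-space positions.
# detect_objects() and spawn_label() are invented for illustration.

from dataclasses import dataclass


@dataclass
class DetectedObject:
    name: str                             # English label, e.g. "chair"
    position: tuple[float, float, float]  # world-space anchor point


# Tiny stand-in dictionary; a real app would query a translation model.
SPANISH = {"chair": "la silla", "table": "la mesa", "lamp": "la lámpara"}


def detect_objects() -> list[DetectedObject]:
    # Placeholder for the headset's (hypothetical) scene-understanding API.
    return [DetectedObject("chair", (0.5, 0.0, 1.2)),
            DetectedObject("lamp", (-0.3, 0.8, 0.9))]


def spawn_label(text: str, position: tuple[float, float, float]) -> None:
    # In a real MR app this would instantiate a floating text panel
    # anchored at the object's position in the room.
    print(f"Label '{text}' anchored at {position}")


for obj in detect_objects():
    translation = SPANISH.get(obj.name, obj.name)
    spawn_label(translation, obj.position)
```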
But still, I see more potential in smartglasses and AR glasses, because they are a product you can carry everywhere. Plus, they are a new category of products, so many things still have to be invented.
AI can tell you what XR experience to create
One thing generative AI is very useful for is brainstorming. I will talk about it in more detail in the next article of this series, but for now, I can say that a good use of AI for XR is finding ideas about things to do in XR. If a customer asks you how to use XR to improve their production processes, you can ask the AI for 20 ideas on how to do that (always being careful not to share sensitive data with commercial AI applications): among those 20, 30, or 50 ideas, there will for sure be some interesting ones you can use as is, or as a starting point to develop your own idea. So AI can help you understand what kind of XR experience to build.
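As a concrete example, here is a minimal sketch of such a brainstorming call using the OpenAI Python SDK. The model name and prompt wording are placeholders (any LLM provider works the same way), and, as said above, keep the customer’s sensitive details out of the prompt.

```python
# Minimal brainstorming helper using the OpenAI Python SDK.
# Requires: pip install openai, plus an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def brainstorm_xr_ideas(domain: str, n_ideas: int = 20) -> str:
    # Keep the prompt generic: never paste the customer's sensitive data.
    prompt = (
        f"Propose {n_ideas} ideas for using XR (VR/AR/MR) to improve "
        f"{domain}. For each idea, give a one-line title and a "
        "two-sentence description."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(brainstorm_xr_ideas("production processes in a furniture factory"))
```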
AI can solve XR’s content problem…

One of the other popular arguments for combining AI with XR is that AI can solve the “content problem” of XR. Making an MR/VR game takes a long time, costs a lot, and requires professional skills like 3D modeling. If you could produce all the content with AI, the creation of XR content would be democratized, and XR headsets would be flooded with so much useful and entertaining content that many use cases would be enabled, creating a new golden age for XR. At least in theory.
There are many efforts happening in this direction. Probably the company currently doing the most is Meta: in its desktop editor for Horizon Worlds, you can create everything with AI. You can create a basic environment with a prompt, then generate 3D models with prompts, then sounds with prompts… You can vibe-create your whole world. Thanks to this, even the kids who inhabit the Horizon Worlds spaces can easily create content for it.

If we talk about the creation of assets in general to be used in XR applications, there are now services like Suno that can create the music for your games, while Meshy, Tripo, or the Chinese HunYuan can generate 3D assets from a description or a photo. Then, of course, there is coding, for which you can vibe-code with Cursor or Claude Code.
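To illustrate how these asset pipelines typically plug into a workflow, here is a rough sketch of calling a text-to-3D service over REST and saving the resulting mesh. The endpoint, JSON fields, and polling flow are generic placeholders, not the actual API of Meshy, Tripo, or HunYuan; check each service’s documentation for the real contract.

```python
# Generic sketch of a text-to-3D REST workflow: submit a prompt, poll the
# task, download the resulting mesh. The URL and JSON fields are invented
# placeholders, NOT a real service's API; consult the provider's docs.
import time

import requests

API_BASE = "https://api.example-3d-service.com"  # placeholder URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}


def generate_asset(prompt: str, out_path: str = "asset.glb") -> None:
    # 1. Submit the generation task.
    task = requests.post(f"{API_BASE}/text-to-3d",
                         headers=HEADERS,
                         json={"prompt": prompt}).json()

    # 2. Poll until the service reports the model is ready.
    while True:
        status = requests.get(f"{API_BASE}/tasks/{task['id']}",
                              headers=HEADERS).json()
        if status["state"] == "done":
            break
        time.sleep(5)

    # 3. Download the generated mesh for import into your engine.
    model = requests.get(status["model_url"], headers=HEADERS)
    with open(out_path, "wb") as f:
        f.write(model.content)


generate_asset("a low-poly wooden medieval tavern table")
```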
All these AI systems are improving pretty fast: if the 3D models generated by AI were quite shitty two years ago, now they are quite decent. So we are really approaching an era of democratization for the creation of XR content, which could open the door to Mark’s dream of the metaverse.
… but I’m not sure it’s being solved the right way
I appreciate the fact that AI is democratizing content creation, but I’m not sure this is currently happening in the right way.
My first point is that the fact that content is easy to create does not guarantee that it is good content. Quite the opposite: it facilitates the creation of low-effort content that now goes by the unfortunate name of “AI slop”. If content is difficult and expensive to create, you do your best to release good content, since you invested a lot of money in it. If it costs just $2 to create an XR game, you just build whatever idea you have in mind and release it. With this, I’m not saying that all AI-created content is bad; I’m saying that such low-barrier democratization brings a lot of shovelware, in which it is more difficult to find the valuable content. And considering how bad today’s recommendation systems are, we risk that people try XR glasses, find a lot of shitty content, and don’t try them again. There is a need for strong curation.
The second problem is that current AI systems are pretty good, but they are not perfect. The generated 3D models do not have a clean mesh topology. The various assets may have slightly different styles and together may form a non-cohesive environment. Most of them also have a clear “AI-generated” look, which feels pretty bland. Of course, this is going to improve in the future, but for now, it is a problem.
The third problem is that creations become soulless. Creating a Unity application, or a world in VRChat, is a work of art. It’s many people putting their expertise together and crafting something. Sometimes it is also a work of love, with people putting their heart into what they are doing. And when you enter a place and see that it is magnificent, you appreciate the effort the creators put into it. A connection is created between the creator and the user. If the whole world is AI-generated, it feels like the songs on Suno: enjoyable, but flat and heartless. I have read the books explaining how John Carmack and the people at id Software created Wolfenstein 3D and Doom, fully revolutionizing the world of gaming. It’s very fascinating to see what those people did. Imagine if the book were only one line, “Oh, I wrote a good prompt”… it would be less interesting, I guess (but at least you could tell people you read a book by reading just one line!). Craft has a value of its own, and if you remove it, things look the same, but do not feel the same.
Of course, I’m talking about consumer-oriented creative projects… if we speak about enterprise settings, no one cares as long as things work.
In the end, I do not think that XR has a content problem anymore. There are lots of good games for Quest, lots of worlds on VRChat, and lots of 360 movies. Many other problems are blocking our ecosystem (comfort, price, low revenues for devs, slow market, etc.) even before we get to content. Also, I’m not so sure everyone needs to create 3D interactive content: we had a very successful smartphone market even without everyone being able to create websites and apps…
XR gives physicality to AI

AI assistants are cool, but they are very ethereal: they are just some text or a voice that speaks to you (no, I won’t mention THAT movie). Without XR, all of this feels a bit abstract. With AR glasses, you could see additional visual cues about what the voice is saying, making its support even more useful: if it is suggesting a restaurant, you could see arrows guiding you to the place; if it is suggesting how to fix something, you could see the 3D manual of the machine in front of you, and so on. The AI assistant itself may even be made physical: your AI fitness assistant may appear as a big, muscular man who cheers you on. We humans love to connect with humans, and if we can give a face to our AI assistant, the connection with it can become more personal.
XR is good at letting AI appear in the world around you, enhancing its potential.
Anyway, AI is everywhere in XR

I focused this article on the most recent uses of generative AI, but AI in general has been everywhere in XR since the beginning. Body tracking uses AI, upscaling uses AI… and if you were here 3000 years ago, you will remember that the amazing lenses of the Daydream View were designed with the help of AI. What is generally called “AI” (sometimes even improperly) has basically been used everywhere in tech for ages, and it is widely used in XR too. So AI already enables XR to do a lot of things.
BONUS: AI can help XR people get money
Are you looking for investments for your XR company? An easy trick to make investors more interested in you is to just slap the word “AI” somewhere in your presentation. It does not matter whether AI is truly useful for your product: just put in some big AI image and say some complicated tech words, so you will look very smart, even if you do not know what you are saying. It is like four years ago, when the hype was about NFTs and investors asked everyone to have a strategy about NFTs, so startup decks were all filled with monkey JPEGs and tokens. Now the word “token” has stuck around, so if you still have your old deck, you just have to say that the monkey was made with AI, and you’re done. Thank me later.
Final considerations
I agree with my friend Ori Inbar that AI and XR are technologies that work very well together and will shape our future. XR and AI are complementary in many ways, and we are already seeing how AI is getting more and more integrated with XR glasses and headsets: Ray-Ban Meta is getting a live AI assistant, and Android XR is ultra-focused on interactions with Gemini AI. We can happily say that XR provides a fantastic interface for AI. The next few years are going to be interesting and will also present new opportunities for us XR developers and content creators. We need to be smart enough to catch them.
I hope you have enjoyed this human-written article. If that is the case, please support me by sharing it on your social media feed for your other human friends to read. And if you are curious to read the next episode, which will be about AI and creativity, subscribe to my newsletter so you don’t miss it!