This is the biggest improvement in v66 NOBODY is talking about. Here is why I think it's a BIG deal. The ability to automatically identify objects in the user's environment after room scanning is HUGE. Developers can finally use this information to position digital content in meaningful ways: a #MixedReality portal out of a window, a digital screen in front of a couch, a #3D model on top of a desk, etc. I look forward to seeing how this new capability will be used. Do you know of any app that has already implemented this in some way?
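To make the idea concrete, here is a minimal sketch of what "position digital content by detected object type" could look like. It assumes the scan produces objects tagged with lowercase labels in the style of the WebXR semantic-labels registry ("window", "couch", "desk"); the exact strings and data shapes your runtime returns may differ, and all names here are illustrative.

```javascript
// Map a semantic label from the room scan to the content we want there.
// The labels and the placement examples come from the post above; the
// function and data shapes are hypothetical, not a real SDK API.
const CONTENT_BY_LABEL = {
  window: "mixed-reality portal",
  couch: "floating video screen",
  desk: "3D model",
};

function planContentPlacement(detectedObjects) {
  // detectedObjects: [{ label, position }] as a scan might report them.
  return detectedObjects
    .filter((obj) => obj.label in CONTENT_BY_LABEL)
    .map((obj) => ({
      anchor: obj.label,
      position: obj.position,
      content: CONTENT_BY_LABEL[obj.label],
    }));
}

// Example with made-up scan output:
const plan = planContentPlacement([
  { label: "window", position: [0, 1.5, -2] },
  { label: "couch", position: [1, 0.4, -1] },
  { label: "lamp", position: [2, 0.8, 0] }, // no mapping, so skipped
]);
console.log(plan.map((p) => `${p.content} @ ${p.anchor}`));
```

The point is that once objects carry labels, placement becomes a simple lookup instead of asking the user to tag every surface by hand.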
This is huge. We can finally design experiences relevant to different types of spaces like the living room, dining room, bedroom, etc. 3D content can be automatically placed on the furniture, and the experience can adapt accordingly.
Mike van Zandwijk Rainish Lalai this shows the real power! Cool stuff
I imagine https://immersive-web.github.io/webxr-samples/proposals/mesh-detection.html should work. Let me know if it does not.
Actually, this feature was already a part of the previous release
Great that you liked it. We’ve been working hard to make our developers’ lives easier. Looking forward to what the community builds on top. CC: Jay Goluguri Audrey Muller
Detecting and classifying the objects in your space is good and helpful, but it isn't quite enough to place digital content that is meant to closely match. You need to be able to adjust the model after it is spawned in that box so it perfectly fits the real object. At that point I think it makes sense to just extend the underlying Spatial Anchor system for your specific use. We've done this, and it works pretty well in ideal conditions. Maybe some combination of the Spatial Anchors from the room setup and the mesh of the space could automate that process, but I'm not sure. If anybody has tried it, I'd love to hear how it went.
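The "adjust the model to fit the box" step the comment describes can be sketched as a small geometry helper: given the model's native size and the bounding box the scan reports, pick a uniform scale and recenter the model in the box. This is an illustrative sketch, not any SDK's actual fitting API; the function name and data shapes are assumptions.

```javascript
// Hypothetical fitting step: scale a spawned model so it fits inside the
// box the room scan reports for the real object, and center it there.
function fitModelToBox(modelSize, box) {
  // modelSize: [w, h, d] of the model at scale 1.
  // box: { center: [x, y, z], size: [w, h, d] } from the scan.
  // Uniform scale preserves proportions; the tightest axis wins.
  const scale = Math.min(
    box.size[0] / modelSize[0],
    box.size[1] / modelSize[1],
    box.size[2] / modelSize[2]
  );
  return { scale, position: box.center.slice() };
}

// Example: a 2 m wide model fit into a 1 m cube from the scan.
const fit = fitModelToBox([2, 1, 1], {
  center: [0, 0.5, -1],
  size: [1, 1, 1],
});
console.log(fit.scale); // 0.5: limited by the model's 2 m width
```

In practice you would still want a manual nudge on top of this, since scanned boxes are rarely pixel-perfect, which is the commenter's point about extending the Spatial Anchor system.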
I haven't seen it work this well in available apps yet. The ones I use sort of guess at what's going on, but you don't have the ability to choose or correct it; Polycam, for example. But it is still useful, and I've used it for specific jobs. For me, the most useful fast LiDAR-scanning solution is still "Scaniverse", which does pretty well even with small objects like a seashell or food, or with boxes of items and shelves full of items.
The amount of data I would be sharing with Meta by letting them scan my living space this way gives me headaches.
At this point Meta has me literally drooling over every update.
FINALLY!! This is a big deal!!!!