Gabriele Romagnoli 🥽’s Post

Showcasing the best of XR & AI for creatives and professionals | Tech Ambassador | Podcast Host | Speaker

This is the biggest improvement in v66 NOBODY is talking about. Here is why I think it's a BIG deal. The ability to automatically identify the objects in the user's environment after room scanning is HUGE. Developers can finally use this information to position digital content in meaningful ways: a #Mixedreality portal out of a window, a digital screen in front of a couch, a #3D model on top of a desk, and so on. I look forward to seeing how this new capability will be used. Do you know of any app that has already implemented this in some way?
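For the developers wondering what consuming these classifications could look like, here is a minimal TypeScript sketch against the WebXR plane-detection proposal. It assumes the browser exposes room-setup results as detectedPlanes with a semanticLabel; the label strings and the three placement helpers are hypothetical placeholders, not a confirmed API.

```typescript
// Minimal sketch, assuming a browser that implements the WebXR
// plane-detection proposal and attaches room-setup classifications
// as semanticLabel. Label strings are platform-defined; the three
// helpers below are hypothetical stand-ins for your engine's spawn logic.
declare function placeScreenInFrontOf(pose: any): void; // hypothetical
declare function placeModelOnTopOf(pose: any): void;    // hypothetical
declare function openPortalAt(pose: any): void;         // hypothetical

const session = await (navigator as any).xr.requestSession("immersive-ar", {
  requiredFeatures: ["plane-detection", "local-floor"],
});
const refSpace = await session.requestReferenceSpace("local-floor");

session.requestAnimationFrame(function onFrame(_time: number, frame: any) {
  // detectedPlanes is a set-like collection, refreshed every frame.
  const planes = (frame.detectedPlanes ?? new Set()) as Set<any>;
  for (const plane of planes) {
    const pose = frame.getPose(plane.planeSpace, refSpace);
    if (!pose || !plane.semanticLabel) continue;
    switch (plane.semanticLabel) {
      case "couch":  placeScreenInFrontOf(pose); break;
      case "desk":   placeModelOnTopOf(pose);    break;
      case "window": openPortalAt(pose);         break;
    }
  }
  session.requestAnimationFrame(onFrame);
});
```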

Aurelio Puerta Martín

XR Developer and Interaction designer

1mo

FINALLY!! This is a big deal!!!!

Arijit Debnath

Lead Experience Designer @ ThoughtWorks, AR/MR/VR Enthusiast

1mo

This is huge. We can finally design experiences relevant to different types of spaces, like the living room, dining room, bedroom, etc. 3D content can be automatically placed on this furniture, and the experiences can adapt accordingly.

Tim Hermans

Chief Revenue Officer (CRO) & Chief Technology Officer (CTO) | Co-host RelaXR 🎙| Spatial Computing Enthusiast and a huge single malt whisky fan 🥃 |

1mo

Mike van Zandwijk Rainish Lalai this shows the real power! Cool stuff

Fabien Bénétou

WebXR consultant and prototypist

1mo

I imagine https://immersive-web.github.io/webxr-samples/proposals/mesh-detection.html should work; let me know if it does not.
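For reference, the proposal linked above hands the page per-frame geometry. A short sketch of reading it, following the explainer's shapes (detectedMeshes, meshSpace, vertices, indices); whether a semanticLabel rides along with each mesh is an assumption about the platform, so it is read defensively here.

```typescript
// Sketch of the mesh-detection proposal linked above. Shapes follow the
// explainer (frame.detectedMeshes, XRMesh with meshSpace/vertices/indices)
// and may change; the semanticLabel field is an assumption on platforms
// that attach room-setup classifications to meshes.
function logDetectedMeshes(frame: any, refSpace: any): void {
  const meshes = (frame.detectedMeshes ?? new Set()) as Set<any>;
  for (const mesh of meshes) {
    const pose = frame.getPose(mesh.meshSpace, refSpace);
    if (!pose) continue; // mesh not trackable this frame
    console.log(
      mesh.semanticLabel ?? "unlabeled",
      mesh.vertices.length / 3, "vertices",   // Float32Array of xyz triples
      mesh.indices.length / 3, "triangles",   // Uint32Array of index triples
    );
  }
}
```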

sai Kiran

Senior XR Specialist | Creator | Ex-Nokia | AR/VR/MR | DigitalTransformation | Unity 3D

1mo

Actually, this feature was already part of the previous release.

Moji Hasan

Group Product Manager at Meta (AR/VR)

1mo

Great that you liked it. We’ve been working hard to make our developers’ lives easier. Looking forward to what the community builds on top. CC: Jay Goluguri Audrey Muller

Bruno F.

Principal Software Engineer at MVRK

1mo

Detecting and classifying where the objects in your space are is good and helpful, but it isn't quite enough to place digital content that is meant to closely match. You need to be able to adjust the model after it is spawned in that box so it perfectly fits the real object. At that point I think it makes sense to just extend the underlying Spatial Anchor system for your specific use. We've done this, and it works pretty well in ideal conditions. Maybe some combination of the Spatial Anchor from the room setup and the mesh of the space might enable automation of that process, but I'm not sure. If anybody has tried it, I'd love to hear how it went.
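The fitting step Bruno describes can be sketched generically: given the detected object's bounding box and the model's native bounds, solve for the scale and position that make them coincide. The Box shape and function below are illustrative, not any particular SDK's API.

```typescript
// Generic fitting sketch: rescale and re-center a spawned model so its
// bounding box matches a detected object's box. Illustrative types only.
interface Box {
  center: [number, number, number];
  size: [number, number, number];
}

function fitModelToDetectedBox(model: Box, detected: Box) {
  // Per-axis scale stretching the model's bounds to the detected extents.
  const scale = detected.size.map((s, i) => s / model.size[i]);
  // With p' = p * scale + position, the model center lands on the
  // detected center when position = detectedCenter - modelCenter * scale.
  const position = detected.center.map((c, i) => c - model.center[i] * scale[i]);
  return { position, scale };
}
```

Taking the minimum of the three scale factors instead would preserve the model's aspect ratio, at the cost of not filling the detected box exactly.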

Johnathon Vought

Just doing 3d work everyday.

1mo

I haven't seen it work this well in available apps yet. The ones I use sort of make a guess about what's going on, but you don't have the ability to choose or correct it; Polycam, for example. But it is still useful, and I've used it for specific jobs. For me, the most useful fast LiDAR-scanning solution is still "Scaniverse", which even does pretty well with small objects like a seashell or food, and with boxes or shelves full of items.

Sebastian Sachse

Neo-Generalist | Event & Production Coordinator | Digital Media Specialist

1mo

The amount of data I would be sharing with Meta by letting them scan my living space this way gives me headaches.

Brayden Clark

Chaos Inc | Marketing for Powerful Brands

1mo

At this point Meta has me literally drooling over every update.
