Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.


Can I use the PhotoKit sample on visionOS?
I am a student developer, and we are trying to implement an application that lets you take photos in visionOS mixed-reality mode and access the photos you took. Can the contents of the link below be used on visionOS? https://developer.apple.com/tutorials/sample-apps/capturingphotos-captureandsave/ I would really appreciate a reply. For reference, we plan to package the methods in Swift and import the framework into Unity.
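For the photo-library half of that sample at least, the relevant PhotoKit calls should be available on visionOS; here is a minimal hedged sketch of that part in isolation (the function shape is illustrative, and it says nothing about the camera-capture side, which is the part most likely to differ on visionOS):

```swift
import Photos
import UIKit

// Hedged sketch, not the sample itself: save an already-captured image
// to the Photos library. PHPhotoLibrary calls are standard API; the
// surrounding structure is illustrative.
func saveToLibrary(_ image: UIImage) async throws {
    let status = await PHPhotoLibrary.requestAuthorization(for: .addOnly)
    guard status == .authorized || status == .limited else { return }
    try await PHPhotoLibrary.shared().performChanges {
        // Create a new Photos asset from the captured image.
        _ = PHAssetChangeRequest.creationRequestForAsset(from: image)
    }
}
```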
Replies: 0 · Boosts: 0 · Views: 16 · Activity: 5h
Help building a spatial video app using the Quick Look preview API
Hello everyone, I am looking to build a simple app for displaying a spatial video using the Quick Look preview API. I have been following this video, which is useful: https://developer.apple.com/videos/play/wwdc2024/10166/#:~:text=QuickLook%20is%20the%20system%20standard,just%20like%20the%20Photos%20app. I am new to building apps in Xcode, and I could do with some advice on how to build the rest of the project mentioned in the above video. Is there source code or an example project available anywhere for an app that uses the Quick Look preview API?
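If memory serves from that session, the core API is PreviewApplication; a minimal hedged sketch (the file name and button wiring are invented for illustration):

```swift
import SwiftUI
import QuickLook

struct ContentView: View {
    var body: some View {
        Button("Preview spatial video") {
            // Hypothetical spatial video shipped in the app bundle.
            if let url = Bundle.main.url(forResource: "MySpatialVideo", withExtension: "mov") {
                // Opens the system-standard Quick Look preview scene,
                // much like viewing the video in the Photos app.
                _ = PreviewApplication.open(urls: [url])
            }
        }
    }
}
```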
Replies: 0 · Boosts: 0 · Views: 14 · Activity: 7h
Timeline Animation in Reality Composer Pro
I'm using Reality Composer Pro Version 2.0 (448.0.10.0.2), available in Xcode 16 beta 4. When adding an animation from the Animation Library component on my armature to a timeline, the animation does not 'freeze' on the last frame. Is there a way to 'freeze' the first or last frames when adding animations to the timeline? And how should I expect the first and last keys on my animations to behave with the default 'rest pose' on the imported USD file?
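The timeline itself may not expose this yet; below is a hedged RealityKit-side workaround sketch, under the assumption that seeking a paused AnimationPlaybackController holds that frame: when the clip finishes, re-play it paused at its end so the entity does not snap back to the USD rest pose.

```swift
import RealityKit

// Hedged workaround sketch (not a Reality Composer Pro timeline feature):
// hold the final pose of a clip by listening for playback completion and
// re-posing the entity there.
func playAndHold(_ clip: AnimationResource, on entity: Entity, content: RealityViewContent) {
    let controller = entity.playAnimation(clip, transitionDuration: 0, startsPaused: false)
    _ = content.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity) { _ in
        // Re-play paused and seek to the end; assumption: a paused
        // controller stays on the frame it is seeked to.
        let held = entity.playAnimation(clip, transitionDuration: 0, startsPaused: true)
        held.time = controller.duration
    }
}
```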
Replies: 0 · Boosts: 0 · Views: 77 · Activity: 1d
ImageAnchoringSource from URL
Hello, I was wondering how I can initialize an ImageAnchoringSource using https://developer.apple.com/documentation/realitykit/anchoringcomponent/imageanchoringsource/init(_:). When I construct one using a URL, it doesn't seem to be tracked, and I see the following when I debug-print the component:

```
▿ 0 : AnchoringComponent
  ▿ target : Target
    ▿ referenceImage : 1 element
      ▿ from : ImageAnchoringSource
        ▿ url : Optional<URL>
          ▿ some : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _url : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _parseInfo : nil
            - _baseParseInfo : nil
        - name : nil
        - group : nil
  ▿ trackingMode : TrackingMode
    - trackingMode : 2
```

Is there a specific format for the parseInfo? When I use the same image to make an image anchoring source by group and name in AR Resources, it is tracked. Thank you!
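For comparison, a hedged sketch of the two construction styles (the identifiers are placeholders). One plausible difference: the group/name form reads an AR reference image from the asset catalog, which carries the physical size that image tracking needs, while whether the URL form can infer a size is exactly the open question here.

```swift
import Foundation
import RealityKit

// Hedged comparison sketch; "AR Resources" / "MyImage" are placeholders.
func makeImageAnchor(imageURL: URL, useCatalog: Bool) -> AnchoringComponent {
    let source: AnchoringComponent.ImageAnchoringSource = useCatalog
        ? .init(group: "AR Resources", name: "MyImage") // tracked in the poster's test
        : .init(imageURL)                               // not tracked in the poster's test
    return AnchoringComponent(.referenceImage(from: source))
}
```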
Replies: 1 · Boosts: 0 · Views: 76 · Activity: 1d
API calls inside a visionOS SwiftUI app
Hi, I'm brainstorming ideas for getting dynamic content into my visionOS app on the Vision Pro. I have some data coming out of a piece of equipment and reaching a cloud hub (something like IoT Hub on Azure). I want to get that data inside a visionOS app, ideally inside an attachment that is attached to some 3D entity inside my RealityView. Is something like this possible? Can someone give me some starting points on how I can enable a pipeline like this, and whether there are any resources I could use for reference?
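This kind of pipeline is ordinary networking plus SwiftUI observation; a hedged sketch (the URL and JSON payload shape are invented) that polls a cloud endpoint and publishes the latest reading for an attachment view to observe:

```swift
import Foundation
import Observation

// Hedged sketch: poll a hypothetical REST endpoint fronting the IoT hub
// and expose the latest reading to SwiftUI.
@MainActor @Observable final class TelemetryModel {
    var latestValue: Double = 0
    private var pollTask: Task<Void, Never>?

    func startPolling() {
        pollTask = Task {
            while !Task.isCancelled {
                // Hypothetical endpoint; replace with your hub's real API.
                if let url = URL(string: "https://example.com/api/telemetry/latest"),
                   let (data, _) = try? await URLSession.shared.data(from: url),
                   let value = try? JSONDecoder().decode(Double.self, from: data) {
                    latestValue = value
                }
                try? await Task.sleep(for: .seconds(1))
            }
        }
    }
}
```

An attachment view observing this model re-renders as values arrive; for push rather than poll, a URLSessionWebSocketTask or an MQTT client could replace the loop.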
Replies: 1 · Boosts: 0 · Views: 111 · Activity: 2d
Getting main camera frames using CameraFrameProvider
Hello, I am trying to use the new Enterprise API to capture main camera frames using the CameraFrameProvider. So far, I have not been able to make it work. I followed the sample code provided in this thread (literally copy-pasted it): https://forums.developer.apple.com/forums/thread/758364. When I run the application on the Vision Pro, no frame is captured, and I get a message in Xcode's console that no entitlement is found. However, the entitlement is created and the license file is also in the project. Besides, all authorization keys are added in the plist file. What am I missing? How can I tell whether the license file is wrong? Thank you.
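Not a fix, but a hedged diagnostic sketch: explicitly querying authorization before running the provider makes a missing or invalid license surface as a concrete status instead of a silent failure (the format and camera-position choices below are illustrative):

```swift
import ARKit

// Hedged sketch, assuming the Enterprise "main camera access" entitlement
// and license are in place: query authorization explicitly, then stream.
let session = ARKitSession()
let provider = CameraFrameProvider()

func startCameraFrames() async throws {
    let status = await session.queryAuthorization(for: [.cameraAccess])
    guard status[.cameraAccess] == .allowed else {
        // A missing/invalid entitlement or license should show up here.
        print("Camera access not authorized: \(String(describing: status[.cameraAccess]))")
        return
    }
    try await session.run([provider])
    guard let format = CameraVideoFormat
              .supportedVideoFormats(for: .main, cameraPositions: [.left]).first,
          let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        // frame.sample(for: .left)?.pixelBuffer carries the captured image.
        _ = frame
    }
}
```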
Replies: 2 · Boosts: 0 · Views: 104 · Activity: 2d
Window buttons not getting clicked when scene colliders exist
Hi, I am using this function to create collisions in my scene, from an Apple developer video I found:

```swift
func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.physicsBody = PhysicsBodyComponent()
            entity.components.set(InputTargetComponent())
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { fatalError("...") }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}
```

The code works great. In the same immersive space I am opening a window:

```swift
var body: some View {
    RealityView { content in
        // some other code here
        openWindow(id: "mywindowidhere")
        // some other code here
    }
}
```

The window opens in front of me, but I am not able to click or even hover on its buttons. At first I did not know why that was happening, but then I turned on pointer control and found out that the pointer is actually colliding with the wall (the window is partly inside the wall). That is why the pointer never reaches the window and the button never gets clicked. I initially thought this was a layering issue, but I was not able to find any documentation related to this. Is this a known issue, and is there any way to fix it? Or am I doing something wrong on my side?
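One hedged experiment, not a confirmed fix: if the reconstruction meshes only need to provide physics, build them without InputTargetComponent so the system's gaze/pinch ray is not intercepted by geometry in front of the window, and add it back only on entities that should actually be tappable.

```swift
import ARKit
import RealityKit

// Hedged variant of the .added branch above: a collider that participates
// in physics but is deliberately not a hit-test target for gaze/pinch.
func makeSceneCollider(for meshAnchor: MeshAnchor, shape: ShapeResource) -> ModelEntity {
    let entity = ModelEntity()
    entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
    entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
    entity.physicsBody = PhysicsBodyComponent()
    // InputTargetComponent intentionally omitted.
    return entity
}
```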
Replies: 1 · Boosts: 0 · Views: 71 · Activity: 2d
How to control continuous movement by long-pressing on the game controller
```swift
struct GameSystem: System {
    static let rootQuery = EntityQuery(where: .has(GameMoveComponent.self))

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        let root = context.scene.performQuery(Self.rootQuery)
        for entity in root {
            let game = entity.components[GameMoveComponent.self]!
            if let xMove = game.game.gc?.extendedGamepad?.dpad.xAxis.value,
               let yMove = game.game.gc?.extendedGamepad?.dpad.yAxis.value {
                print("x:\(xMove),y:\(yMove)")
                let x = entity.transform.translation.x + xMove * 0.01
                let y = entity.transform.translation.z - yMove * 0.01
                entity.transform.translation = [x, entity.transform.translation.y, y]
            }
        }
    }
}
```

I want to use the game controller's direction keys to control continuous movement of an Entity in visionOS. When I added a query to handle button presses in the ECS System, I found that the update method was not called at a frequency of 30 frames per second; instead, it executes once when I press or release a key. What is the reason for this? I want to keep moving while holding down the controller button. Is there a better solution? I would like the movement to be smooth, without stuttering.
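A hedged drop-in variant of the update method above, assuming the System does run every frame once entities match the query: poll the pad state each update and scale by the frame's deltaTime so speed stays consistent even if the cadence varies (`speed` is an assumed constant, not from the original code).

```swift
// Hedged sketch: frame-rate-independent movement inside GameSystem.
func update(context: SceneUpdateContext) {
    let speed: Float = 0.5 // metres per second; illustrative value
    let dt = Float(context.deltaTime)
    for entity in context.entities(matching: Self.rootQuery, updatingSystemWhen: .rendering) {
        guard let pad = entity.components[GameMoveComponent.self]?.game.gc?.extendedGamepad else { continue }
        let x = pad.dpad.xAxis.value
        let y = pad.dpad.yAxis.value
        entity.transform.translation += [x * speed * dt, 0, -y * speed * dt]
    }
}
```

Note that on many controllers the dpad reports only -1, 0, or 1; for smooth analog movement, leftThumbstick may be the better input to read.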
Replies: 1 · Boosts: 0 · Views: 79 · Activity: 3d
RealityKit scene with the Entity Component System
I'm following the WWDC session on interactive 3D content in Reality Composer Pro and Apple's documentation: https://developer.apple.com/wwdc24/10102 https://developer.apple.com/documentation/realitykit/implementing-systems-for-entities-in-a-scene#Retrieve-entities-with-an-entity-query However, this simple code to declare a dummy Component and System produces a compile error:

```
/Users/Workspaces/repository/Packages/RealityKitContent/Sources/RealityKitContent/RobotComponent.swift:18:24
Static property 'query' is not concurrency-safe because non-'Sendable' type 'EntityQuery' may have shared mutable state
```

```swift
// Define a query to return all entities with a MyComponent.
private static let query = EntityQuery(where: .has(MyComponent.self))

// Initializer is required. Use an empty implementation if there's no setup needed.
required init(scene: Scene) { }

// Iterate through all entities containing a MyComponent.
func update(context: SceneUpdateContext) {
    for entity in context.entities(
        matching: Self.query,
        updatingSystemWhen: .rendering
    ) {
        // Make per-update changes to each entity here.
    }
}
```

I'm using Xcode 16 beta 3, and the project targets visionOS 2.
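Two hedged workarounds that should satisfy Swift 6 strict concurrency, sketched on the assumption that the EntityQuery is effectively immutable here:

```swift
import RealityKit

struct MyComponent: Component {}

struct MySystem: System {
    // Option A (hedged): assert that the immutable query is safe to share.
    nonisolated(unsafe) private static let query = EntityQuery(where: .has(MyComponent.self))

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        // Option B: build the query locally so no shared static state exists.
        let query = EntityQuery(where: .has(MyComponent.self))
        for entity in context.entities(matching: query, updatingSystemWhen: .rendering) {
            _ = entity // per-update changes here
        }
    }
}
```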
Replies: 1 · Boosts: 0 · Views: 105 · Activity: 3d
ARKit to capture data
What we want to do: use ARKit to capture data (pictures) around an object. Our questions:
Is there a way to increase the default number of pictures captured (120) without increasing the time required to capture the data? We managed to increase the count to 1,000, but the capture now takes 20 minutes, which is too long.
Is there a way to capture a video instead of pictures?
IMU data: how can we use ARKit to capture IMU data around an object?
Replies: 4 · Boosts: 0 · Views: 52 · Activity: 3d
Xcode 16 can't target RealityKitContent on macOS 14?
I'm working on a multi-platform app (macOS and visionOS for now). In these early stages it's easier to target the Mac, but I started with a visionOS project. One of the things the template creates is a RealityKitContent package dependency. I can target macOS 14.5 in Xcode, but when it goes to build the RealityKitContent package, I get this error:

```
error: Building for 'macosx', but '14.0' must be >= '15.0' [macosx]
info: realitytool ["/Applications/Xcode-beta.app/Contents/Developer/usr/bin/realitytool" "compile" "--platform" "macosx" "--deployment-target" "14.0" …
```

Unfortunately, I'm unwilling to update this machine to macOS 15, as it's too risky, and running macOS 15 in a VM is not possible (Apple Silicon). This strikes me as a bug, or a severe shortcoming, of realitytool. It was introduced with visionOS 1.0 and should be able to target macOS < 15. It's not really reasonable to stay on Xcode 15, since soon enough Apple will require building with Xcode 16 for submission to the App Store. Is this a bug, or intentional?
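One heavily hedged workaround sketch, assuming the RealityKitContent assets are only needed on visionOS: raise the package's macOS platform so realitytool's check is satisfied, and keep the macOS 14.5 app target from depending on the package at all (the manifest below mirrors the default template layout; it is an assumption, not a confirmed fix).

```swift
// swift-tools-version:5.9
// Package.swift for the RealityKitContent package (template layout assumed).
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v1),
        .macOS(.v15) // realitytool's compile step appears to demand >= 15.0
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)
```

Shared code would then guard usage with `#if os(visionOS)`, and the package dependency would need to be removed (or platform-filtered, if Xcode allows it for your setup) from the macOS app target.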
Replies: 2 · Boosts: 1 · Views: 110 · Activity: 3d
TrueDepth map accuracy worse in streaming mode than photo mode
Hi, it seems the accuracy of the TrueDepth map is far worse when streaming (using an iPhone 13), with similar artefacts to those shown in this post: https://forums.developer.apple.com/forums/thread/694147. However, when taking static photos the quality is pretty good, despite the resolutions being the same (480x640). This is for an object at <1 m distance. Does anyone know how I can improve the accuracy when streaming?
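For what it's worth, a hedged sketch of two things that sometimes help streamed TrueDepth quality: enable the output's temporal filtering and select the largest DepthFloat32 depth format (the device selection below is illustrative):

```swift
import AVFoundation

// Hedged configuration sketch for streamed TrueDepth capture.
let depthOutput = AVCaptureDepthDataOutput()
depthOutput.isFilteringEnabled = true // temporal smoothing of depth holes/noise

func configureDepthFormat() {
    guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                               for: .video, position: .front) else { return }
    // Prefer the widest 32-bit float depth format the active video format supports.
    let best = device.activeFormat.supportedDepthDataFormats
        .filter { CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat32 }
        .max { CMVideoFormatDescriptionGetDimensions($0.formatDescription).width
             < CMVideoFormatDescriptionGetDimensions($1.formatDescription).width }
    if let best {
        try? device.lockForConfiguration()
        device.activeDepthDataFormat = best
        device.unlockForConfiguration()
    }
}
```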
Replies: 0 · Boosts: 0 · Views: 108 · Activity: 3d
visionOS 2 full immersive space permission change?
Does visionOS 2 still prompt the user with a permission alert when a full immersive space is presented? In visionOS 1, the first time an app presented an immersive space, the user was prompted with an alert to grant permission, and openImmersiveSpace would return an error code if the user opted not to grant it. It was important to handle this case correctly. In visionOS 1, the Settings > Developer menu also had an option to reset the user's immersive space permission prompting state, so developers could test this interaction flow. In visionOS 2, I no longer see the full immersive space permission alert. I can't remember if I saw it once, the first time the visionOS 2.0 beta was installed, or if I never saw it at all. The Settings > Developer menu no longer has an option to reset the permission prompting state, and I can't find any way to test the interaction flow in my app to make sure it will work correctly for users. Does visionOS 2 no longer ask for full immersive space permission at all? I can't find this change documented anywhere. If visionOS 2 does prompt the user for permission, is there any way to reproduce and test this interaction flow so I can make sure my app handles it correctly? Thanks for taking the time to answer this question.
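For context, a minimal sketch of the visionOS 1-era handling the post describes, branching on the documented result cases of openImmersiveSpace (the space identifier and state are placeholders):

```swift
import SwiftUI

// Sketch: handle a declined immersive-space permission cleanly.
struct EnterButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @State private var isImmersed = false

    var body: some View {
        Button("Enter immersive space") {
            Task {
                switch await openImmersiveSpace(id: "FullImmersion") {
                case .opened:
                    isImmersed = true
                case .userCancelled, .error:
                    isImmersed = false // permission declined or open failed
                @unknown default:
                    isImmersed = false
                }
            }
        }
    }
}
```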
Replies: 2 · Boosts: 0 · Views: 178 · Activity: 4d
RealityView not displaying content
I'm playing with visionOS and trying to get a USDZ file to load in a RealityView. It works fine if I use a Model3D, but if I use a RealityView nothing shows up. I'm using the fender_stratocaster asset right off the Apple web site, so it seems like it should work. This is the code:

```swift
RealityView { content in
    if let sphereEntity = try? await Entity(named: "fender_stratocaster") {
        content.add(sphereEntity)
        sphereEntity.position = [0, 0, 0]
        sphereEntity.transform.scale = [scale, scale, scale]
        let _ = print(sphereEntity)
    }
} update: { content in
    if let sphereEntity = content.entities.first {
        sphereEntity.transform.scale = [scale, scale, scale]
    }
}
```

Any clues as to why this is not showing would be appreciated.
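A hedged thing to try, assuming this runs in the RealityView make closure: be explicit about where the file lives, since Entity(named:) without a bundle searches only the main app bundle (realityKitContentBundle is the constant the visionOS template's RealityKitContent package exports; if the asset is elsewhere, adjust accordingly).

```swift
import RealityKit
import RealityKitContent

// Hedged loading sketch: try the content package first, then a loose
// .usdz in the app bundle.
if let guitar = try? await Entity(named: "fender_stratocaster", in: realityKitContentBundle) {
    content.add(guitar)
} else if let url = Bundle.main.url(forResource: "fender_stratocaster", withExtension: "usdz"),
          let guitar = try? await Entity(contentsOf: url) {
    content.add(guitar)
}
```

One other common cause worth ruling out: Model3D scales the asset to fit its view, while RealityView renders it at native scale and position, so an entity that does load may simply be too large, too small, or outside the visible volume.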
Replies: 3 · Boosts: 0 · Views: 154 · Activity: 4d