visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Safe Mode eliminates two of my feedback issues; how best to diagnose?
I have two feedback issues under Sequoia that affect fairly major features. FB14105190: iPhone Mirroring shows only a white rectangle. FB13888947: visionOS Mirroring drops the connection after a few seconds. Both are features I use (visionOS Mirroring) or would use (the other) if I could.

On a colleague's suggestion I booted in Safe Mode, and both issues went away; everything worked as it should. I've passed this along in Feedback, but I'm curious what I can do to diagnose this further and maybe locate a culprit. In the olden days, the extension parade at boot let you know what was loaded; kextstat shows nothing different for me now. Is there anything else I can explore to track this down? Thanks.
0 replies · 0 boosts · 45 views · last activity 19h ago
WKWebView for a general-purpose web browser
I created a simple web browser using WKWebView, but as far as I can tell there is no way to auto-populate credentials, or to save credentials a user enters into a login form, at a third-party website like Netflix (i.e., not my own app's domain). Is this correct? If this is wrong, what are the APIs that support this?

My use case is that I want to create an immersive app in visionOS that includes a window letting the user surf the web (among other things). Ideally, I could just use a Safari window in my immersive app, but I don't think this is possible either. My workaround is to create my own web browser, which works, minus the credential issue.

Is it possible to bring a Safari window into an immersive visionOS app's experience? (IMHO, that would be a great feature.)
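For context, here is a minimal sketch of the kind of wrapper I mean; the BrowserWebView name and the start URL are just placeholders, not code from my actual app:

```swift
import SwiftUI
import WebKit

// Minimal WKWebView wrapper. The default website data store keeps cookies and
// session state between launches, but it provides no Safari-style password
// autofill for third-party sites.
struct BrowserWebView: UIViewRepresentable {
    let url: URL

    func makeUIView(context: Context) -> WKWebView {
        let configuration = WKWebViewConfiguration()
        configuration.websiteDataStore = .default()
        let webView = WKWebView(frame: .zero, configuration: configuration)
        webView.load(URLRequest(url: url))
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

struct BrowserWindow: View {
    var body: some View {
        BrowserWebView(url: URL(string: "https://www.example.com")!)
    }
}
```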
0 replies · 0 boosts · 102 views · last activity 1d ago
API calls inside a visionOS SwiftUI app
Hi, I'm brainstorming ideas for getting dynamic content into my visionOS app on the Vision Pro. I have some data coming out of a piece of equipment and reaching a cloud hub (something like IoT Hub on Azure). I want to get that data into a visionOS app, ideally inside an attachment that is attached to some 3D entity in my RealityView.

Is something like this possible? Can someone give me some starting points on how to build a pipeline like this, and point me to any resources I could use for reference?
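For anyone sketching the same thing, here is one possible shape of the pipeline: poll a REST endpoint with URLSession and surface the value in a RealityView attachment. The endpoint URL, the TelemetryReading type, and the attachment id are hypothetical placeholders, not a specific Azure API:

```swift
import SwiftUI
import RealityKit

// Hypothetical payload published by the cloud hub.
struct TelemetryReading: Decodable {
    let temperature: Double
}

struct TelemetryView: View {
    @State private var latestValue = "Waiting for data…"

    var body: some View {
        RealityView { content, attachments in
            let anchor = Entity()
            content.add(anchor)
            // Attach the SwiftUI label to a 3D entity in the scene.
            if let label = attachments.entity(for: "reading") {
                label.position = [0, 0.2, 0]
                anchor.addChild(label)
            }
        } attachments: {
            Attachment(id: "reading") {
                Text(latestValue)
                    .padding()
                    .glassBackgroundEffect()
            }
        }
        .task {
            // Poll the cloud endpoint every few seconds and update the attachment text.
            while !Task.isCancelled {
                if let url = URL(string: "https://example.com/api/latest"),
                   let (data, _) = try? await URLSession.shared.data(from: url),
                   let reading = try? JSONDecoder().decode(TelemetryReading.self, from: data) {
                    latestValue = String(format: "%.1f °C", reading.temperature)
                }
                try? await Task.sleep(for: .seconds(5))
            }
        }
    }
}
```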
1 reply · 0 boosts · 109 views · last activity 19h ago
EnvironmentLightingConfigurationComponent not working
Has anyone gotten EnvironmentLightingConfigurationComponent to work? I tried the code from https://developer.apple.com/documentation/realitykit/environmentlightingconfigurationcomponent to prevent a planet from being lit by the environment. My goal is that the side that isn't lit by the star appears pitch black. However, the code seems to have no effect on visionOS 2 and iPadOS 18 (I tried betas 1 through 4, on device, built with Xcode 16 beta 4). It makes no difference whether there is a PointLight or no light at all in the scene, whether I use SimpleMaterial or PhysicallyBasedMaterial, or whether I use a texture or a color on the sphere. I filed a bug report: FB14470954. Or am I doing something wrong? Here's my code:

```swift
var material = PhysicallyBasedMaterial()
if let tex = try? await TextureResource(named: "planet.jpg") {
    material.baseColor = .init(texture: .init(tex))
    material.emissiveIntensity = 0

    let sphereMesh = MeshResource.generateSphere(radius: 0.5)
    let entity = ModelEntity()
    entity.components.set(ModelComponent(mesh: sphereMesh, materials: [material]))
    entity.position = [-1, 1.0, -1.0]

    let envLightingConfig = EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0)
    entity.components.set(envLightingConfig)
    content.add(entity)
}
```
1 reply · 1 boost · 121 views · last activity 18h ago
Getting main camera frame using CameraFrameProvider
Hello, I am trying to use the new Enterprise API to capture main camera frames using CameraFrameProvider. So far I have not been able to make it work. I followed the sample code provided in this thread (literally copy-pasted it): https://forums.developer.apple.com/forums/thread/758364.

When I run the application on the Vision Pro, no frames are captured, and I get a message in the Xcode console saying that no entitlement is found. However, the entitlement has been created and the license file is also in the project. Besides, all the authorization keys are added to the plist file. What am I missing? How can I tell whether the license file is wrong? Thank you.
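Not a fix, but a hedged diagnostic sketch that might help narrow it down: check whether the provider reports itself as supported in the current build and what ARKit says about camera authorization. This assumes the enterprise .cameraAccess authorization type is the relevant one here:

```swift
import ARKit

// Diagnostic only: prints whether CameraFrameProvider is usable in this build
// and the current camera-access authorization status reported by ARKit.
func checkMainCameraAccess() async {
    guard CameraFrameProvider.isSupported else {
        print("CameraFrameProvider reports unsupported – the enterprise entitlement/license may not be part of this build.")
        return
    }
    let session = ARKitSession()
    let statuses = await session.queryAuthorization(for: [.cameraAccess])
    print("Camera access authorization:", statuses[.cameraAccess] ?? .notDetermined)
}
```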
1 reply · 0 boosts · 98 views · last activity 12h ago
Window buttons not getting clicked when scene colliders exist
Hi, I am using this function, from an Apple developer video I found, to create collisions in my scene:

```swift
func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.physicsBody = PhysicsBodyComponent()
            entity.components.set(InputTargetComponent())
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { fatalError("...") }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}
```

The code works great. In the same immersive space I am opening a window:

```swift
var body: some View {
    RealityView { content in
        // some other code here
        openWindow(id: "mywindowidhere")
        // some other code here
    }
}
```

The window opens in front of me, but I am not able to click or even hover on its buttons. At first I did not know why that was happening, but then I turned on pointer control and found out that the pointer is actually colliding with the wall (the window is partly inside the wall). That is why the pointer never reaches the window and the button never gets clicked. I initially thought this was a layering issue, but I was not able to find any documentation related to this. Is this a known issue, and is there any way to fix it? Or am I doing something wrong on my side?
1 reply · 0 boosts · 71 views · last activity 11h ago
TabletopKit sample code won't build on Xcode 16 beta 4
The TabletopKit sample app builds fine with Xcode 16 beta 1: https://developer.apple.com/documentation/tabletopkit/tabletopkitsample. I updated to the new beta 4 and downloaded an updated version of the TabletopKit sample code, but am now getting this error:

```
TabletopKit Sample: 1 issue
SwiftUI.ToolbarContent:3:51 Main actor-isolated static method '_makeContent(content:inputs:resolved:)' cannot be used to satisfy nonisolated protocol requirement
Add '@preconcurrency' to the 'ToolbarContent' conformance to defer isolation checking to run time
'_makeContent(content:inputs:resolved:)' declared here
```

If I go back to beta 1 it still builds OK. I tried the compiler's suggestion, but it still won't build. Is there a workaround? I didn't see it listed.
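For what it's worth, the compiler's suggested change looks like the sketch below (MySampleToolbar is a placeholder, not the actual type from the sample), although, as mentioned above, applying it did not fix the build for me:

```swift
import SwiftUI

// The compiler's suggestion: mark the ToolbarContent conformance @preconcurrency
// so isolation checking for the protocol requirement is deferred to run time.
struct MySampleToolbar: @preconcurrency ToolbarContent {
    var body: some ToolbarContent {
        ToolbarItem(placement: .bottomOrnament) {
            Button("Reset") { }
        }
    }
}
```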
1 reply · 4 boosts · 123 views · last activity 1d ago
Setting the right height to correctly display VR180 3D video
Hi, I'm developing a simple app to display embedded VR180 3D video. I used a hemisphere and projected the video onto it as its material. The hemisphere sits in the environment at a fixed y value of 1.35 meters, which is good for a seated person but not ideal for a standing person, because the stereoscopic effect is no longer correct.

In the Apple TV+ and Kandao applications, I noticed that the position of the video is anchored to the Apple Vision Pro. I tried using an AnchorEntity on the head with trackingMode .once, but then there is the problem of rotation: the hemisphere starts out with the rotation of the head. Is there a solution, for example, to anchor the hemisphere only to the translation and not to the rotation of the head?
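One possible approach, sketched under assumptions: query the device pose once via ARKit and copy only its translation onto the hemisphere, so head rotation is ignored. The function and entity names are placeholders:

```swift
import ARKit
import RealityKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Place the hemisphere at the device's current translation, ignoring its rotation.
func placeHemisphereAtHeadHeight(_ hemisphere: Entity) async throws {
    try await session.run([worldTracking])
    // Give the provider a moment to start delivering poses.
    try await Task.sleep(for: .seconds(0.5))
    if let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
        let deviceTransform = Transform(matrix: deviceAnchor.originFromAnchorTransform)
        hemisphere.position = deviceTransform.translation  // translation only, no rotation
    }
}
```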
4 replies · 0 boosts · 95 views · last activity 1d ago
visionOS 2 full immersive space permission change?
Does visionOS 2 still prompt the user with a permission alert when a full immersive space is presented?

In visionOS 1, the first time an app presented an immersive space, the user was prompted with an alert to grant permission. openImmersiveSpace would return an error code if the user opted not to grant permission, so it was important to handle this case correctly. The Settings > Developer menu also had an option to reset the user's immersive space permission state so developers could test this interaction flow.

In visionOS 2, I no longer see the full immersive space permission alert. I can't remember whether I saw it once, the first time the visionOS 2.0 beta was installed, or never saw it at all. The Settings > Developer menu no longer has an option to reset the permission state, and I can't find any way to test the interaction flow in my app to make sure it will work correctly for users.

Does visionOS 2 no longer ask for full immersive space permission at all? I can't find this change documented anywhere. If visionOS 2 does prompt the user for permission, is there any way to reproduce and test this interaction flow so I can make sure my app handles it correctly? Thanks for taking the time to answer this question.
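For context, a minimal sketch of the visionOS 1-style handling described above, so the app can react if the user declines; the space id and the status text are placeholders:

```swift
import SwiftUI

struct EnterImmersiveButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @State private var statusMessage = ""

    var body: some View {
        VStack {
            Button("Enter Immersive Space") {
                Task {
                    // The result reports whether the space actually opened.
                    switch await openImmersiveSpace(id: "FullImmersion") {
                    case .opened:
                        statusMessage = "Immersive space opened."
                    case .userCancelled:
                        statusMessage = "User declined full immersion."
                    case .error:
                        statusMessage = "Failed to open the immersive space."
                    @unknown default:
                        statusMessage = "Unknown result."
                    }
                }
            }
            Text(statusMessage)
        }
    }
}
```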
2 replies · 0 boosts · 175 views · last activity 3d ago
Particle Systems flicker when partly behind transparent objects
I am having a difficult time creating particle systems in Reality Composer Pro (visionOS beta 3). They tend to start flickering: all particles disappear and reappear at semi-random intervals. I can clearly see this happening with one effect that I put inside a small box consisting of four transparent walls and a solid floor. When I change the view angle, the particle system starts to flicker when viewed from below its emission height. I tried all combinations of particle rendering (billboard → free, additive, etc.) and it does not change anything. I am using the default particle image. Any help appreciated.
0 replies · 0 boosts · 127 views · last activity 3d ago
Content inside volume gets clipped
I am using the Xcode visionOS debugging tool to visualize the bounds of all the containers, and it shows that my entity is inside the volume. So why does it get clipped? Is there something wrong with the debugger, or am I missing something?

```swift
import SwiftUI

@main
struct RealityViewAttachmentApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
        .defaultSize(Size3D(width: 1, height: 1, depth: 1), in: .meters)
    }
}
```

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        RealityView { content, attachments in
            if let earth = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(earth)
                if let earthAttachment = attachments.entity(for: "earth_label") {
                    earthAttachment.position = [0, -0.15, 0]
                    earth.addChild(earthAttachment)
                }
                if let textAttachment = attachments.entity(for: "text_label") {
                    textAttachment.position = [-0.5, 0, 0]
                    earth.addChild(textAttachment)
                }
            }
        } attachments: {
            Attachment(id: "earth_label") {
                Text("Earth")
            }
            Attachment(id: "text_label") {
                VStack {
                    Text("This is just an example")
                        .font(.title)
                        .padding(.bottom, 20)
                    Text("This is just some random content")
                        .font(.caption)
                }
                .frame(minWidth: 100, maxWidth: 300, minHeight: 100, maxHeight: 300)
                .glassBackgroundEffect()
            }
        }
    }
}
```
1 reply · 0 boosts · 95 views · last activity 20h ago
Multiview HLS with HDR
I have an HDR10+ encoded video that plays back on the Apple Vision Pro when loaded as a .mov. But when that video is encoded using the latest (1.23b) Apple HLS tools to generate an fMP4, the resulting m3u8 cannot be played back on the Apple Vision Pro; I only get a "Cannot Open" error. To generate the m3u8, I'm just calling mediafilesegmenter (with -iso-fragmented) and then variantplaylistcreator. This completes with no errors, and the m3u8 plays back on the Mac using VLC, but not on the Apple Vision Pro. The relevant part of the m3u8 is:

```
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=40022507,BANDWIDTH=48883974,VIDEO-RANGE=PQ,CODECS="ec-3,hvc1.1.60000000.L180.B0",RESOLUTION=4096x4096,FRAME-RATE=24.000,CLOSED-CAPTIONS=NONE,AUDIO="audio1",REQ-VIDEO-LAYOUT="CH-STEREO"
{{url}}
```

Has anyone been able to use the HLS tools to generate fMP4s of MV-HEVC videos with HDR10?
0 replies · 0 boosts · 170 views · last activity 6d ago
How to control the position of windows and volumes in immersive space
My app has a window and a volume. I am trying to display the volume on the right side of the window. I know .defaultWindowPlacement can achieve that, but I want more control over the exact position of my volume in relation to my window. I need the volume to move as I move the window, so that it always stays in the same position relative to the window. I think I need a way to track the positions of both the window and the volume. If this can be achieved without an immersive space, that would be great. If not, how do I do it in immersive space? Current code:

```swift
import SwiftUI

@main
struct tiktokForSpacialModelingApp: App {
    @State private var appModel: AppModel = AppModel()

    var body: some Scene {
        WindowGroup(id: appModel.launchWindowID) {
            LaunchWindow()
                .environment(appModel)
        }
        .windowResizability(.contentSize)

        WindowGroup(id: appModel.mainViewWindowID) {
            MainView()
                .frame(minWidth: 500, maxWidth: 600, minHeight: 1200, maxHeight: 1440)
                .environment(appModel)
        }
        .windowResizability(.contentSize)

        WindowGroup(id: appModel.postVolumeID) {
            let initialSize = Size3D(width: 900, height: 500, depth: 900)
            PostVolume()
                .frame(minWidth: initialSize.width, maxWidth: initialSize.width * 4,
                       minHeight: initialSize.height, maxHeight: initialSize.height * 4)
                .frame(minDepth: initialSize.depth, maxDepth: initialSize.depth * 4)
        }
        .windowStyle(.volumetric)
        .windowResizability(.contentSize)
        .defaultWindowPlacement { content, context in
            // Get WindowProxy from context based on id
            if let mainViewWindow = context.windows.first(where: { $0.id == appModel.mainViewWindowID }) {
                return WindowPlacement(.trailing(mainViewWindow))
            } else {
                return WindowPlacement()
            }
        }

        ImmersiveSpace(id: appModel.immersiveSpaceID) {
            ImmersiveView()
                .onAppear { appModel.immersiveSpaceState = .open }
                .onDisappear { appModel.immersiveSpaceState = .closed }
        }
        .immersionStyle(selection: .constant(.progressive), in: .progressive)
    }
}
```
1 reply · 0 boosts · 124 views · last activity 4d ago
Are PhysicsJoints supported?
I am trying to add joints via code in my visionOS app. My scenario requires me to combine models from Reality Composer Pro with entities and components created in code to produce the dynamic result. I am using the latest visionOS beta and Xcode versions, and there is no documentation about joints. I tried to add them via the available API, but regardless of how I combine pins, joints, and the various components, my entities will not be constrained or stay fixed the way they are when they are in a child/parent relationship. I am using RealityKit and RealityView in mixed mode. I also searched the whole internet for related information without finding anything. Any insights or pointers appreciated!
1 reply · 0 boosts · 140 views · last activity 2d ago
visionOS console warning: Trying to convert coordinates between views that are in different UIWindows
Hello, I have an iOS app that uses SwiftUI, but the gesture code is written using UIGestureRecognizer. When I run this app on visionOS using the "Designed for iPad" destination and try to use any of my gestures, I see this warning in the console:

```
Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.
```

But I don't see any visible problems with the gestures. The warning is printed after the gesture takes place but before any of our gesture methods are called. So now I am wondering whether this is something we need to deal with or internal work that needs to happen in UIKit. Does anyone have any thoughts on this?
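In case it helps the discussion, the API the warning points to looks like the sketch below; it converts a point from the screen's coordinate space into a view's local space, which is the supported path when views may live in different UIWindows. This is just an illustration, not code taken from my gesture handlers:

```swift
import UIKit

// Convert a point expressed in the screen's coordinate space into `view`'s
// local coordinate space using UICoordinateSpace, as the warning suggests.
func convertToLocalSpace(_ pointOnScreen: CGPoint, in view: UIView) -> CGPoint? {
    guard let screen = view.window?.screen else { return nil }
    return view.convert(pointOnScreen, from: screen.coordinateSpace)
}
```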
2 replies · 0 boosts · 225 views · last activity 3d ago