The documentation of every step along my journey to create this project.
The first week of work consisted of a few things. First, I did some housekeeping and installed the software I needed on the MacBook I was given in order to get started on my project. Second, I completed my capstone website so that I can update and blog my work. Lastly, I focused on finding documentation and tutorials to learn how to program in Swift for visionOS. This gave me a good idea of what to work on and learn first so that I can get rolling on the main project.
This week was spent working on small projects and going through documentation to learn Swift and get familiar with the simulator and UI in Xcode. I completed two small applications: a chatbot app that dealt with simple UI and text boxes on the screen, and a small weather-display app; both helped me understand variables, syntax, properties, and more about Swift. Once that was done, I started working through visionOS documentation, following a tutorial for a visionOS app with spatial computing. This helped me get familiar with Swift syntax, with developing a visionOS application, and with the documentation Apple has available.
I spent my time this week working through the visionOS sample applications covering ornaments, multiple windows, and 3D models in the reality space. These projects really helped me round out a general idea of developing applications for the Apple Vision Pro. I was then able to actually hook the headset up to my laptop and get a real look at what one of my simple applications looked like on the actual hardware. This gives me a good starting point for beginning my project next week.
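Ornaments were one of the more visionOS-specific ideas from those samples. A minimal sketch (not from the tutorial itself) of attaching an ornament to a window looks roughly like this:

```swift
import SwiftUI

// A minimal visionOS window with an ornament floating along its bottom edge.
// The view content and button are illustrative placeholders.
struct ContentView: View {
    var body: some View {
        Text("Hello, visionOS")
            .padding(40)
            .ornament(attachmentAnchor: .scene(.bottom)) {
                // Ornaments hover just outside the window's bounds.
                Button("Do something") { print("tapped") }
                    .padding()
                    .glassBackgroundEffect()
            }
    }
}
```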
Very awesome and cool progress this week. I started by working on Kyle's old capstone project and "successfully" transferred it from an iOS project to a visionOS project. It wasn't exactly the same: I was able to transfer over the UI, but the main function of the app still seemed to have some missing pieces. In parallel, I made sure that all of the visual permissions were set up and created the ARKit virtual space. Using the debugger, I was able to see the wireframe around everything, including the built-in surface and object detection, which let me see the walls and floors it identified as well as a wireframe mapping out the room. Once I did this, I took the walls it detected and created a red ball that captured the anchor and center point of each wall. I then used some math to paint a transparent plane over the area it detected as the wall. This part was super finicky and did not do a great job of detecting the entire wall, so I think a next step will be letting the user manually adjust where the wall is.
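The wall detection and red-ball markers can be sketched like this, assuming visionOS's ARKit plane detection; this is an illustrative reconstruction, not my exact code:

```swift
import ARKit
import RealityKit

// Detect vertical planes classified as walls and drop a red sphere at each
// wall anchor's origin (its center point). `root` is the scene's root entity.
@MainActor
func runWallDetection(root: Entity) async {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.vertical])
    do {
        try await session.run([planes])
        for await update in planes.anchorUpdates {
            guard update.anchor.classification == .wall,
                  update.event == .added else { continue }
            let ball = ModelEntity(
                mesh: .generateSphere(radius: 0.05),
                materials: [SimpleMaterial(color: .red, isMetallic: false)])
            // Place the marker using the anchor's world transform.
            ball.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
            root.addChild(ball)
        }
    } catch {
        print("Plane detection failed: \(error)")
    }
}
```

The transparent painted plane follows the same pattern: generate a plane mesh from the anchor's extent and parent it at the same transform.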
Below is a video of my current progress with this application. I wanted to get started on the UI and on being able to select walls with some way of differentiating them from one another. To do this, I chose to display the world-space coordinates of the walls when I selected them (this actually ended up being a little lamer than I expected, because the values were pretty similar to each other). Nonetheless, I was able to see the difference between walls and how each was created. I used the headset's ability to track where the user is looking as my way of choosing walls: the user just pinches while looking at a wall, which is highlighted so there is a visual indicator of which wall is targeted, and that selects it. I have made a little more progress since this video, and I am polishing up the UI so that I can eventually change the colors of the walls.
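The gaze-plus-pinch selection maps naturally onto a spatial tap gesture; a rough sketch, assuming the wall entities already carry `CollisionComponent` and `InputTargetComponent` so they can receive input:

```swift
import SwiftUI
import RealityKit

// On visionOS, a SpatialTapGesture targeted at entities fires on whichever
// entity the user is looking at when they pinch.
struct WallSelectionView: View {
    var body: some View {
        RealityView { content in
            // Wall entities would be added here by the plane-detection code.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    let wall = value.entity
                    // World-space position of the selected wall, as shown
                    // in the video's coordinate readout.
                    print("Selected wall at \(wall.position(relativeTo: nil))")
                }
        )
    }
}
```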
This week was good progress, but not the progress I was planning for at the beginning of the week. An issue that has come up, and that you can briefly see in the video, is that the walls cover up objects in front of them and aren't really "there" up against the wall. The plane is just drawn over the video feed rather than placed as an object in the world. This is a problem, because how would people know how the color of the wall looks with their furniture? I struggled with this and did not reach a fix that looked good, so I decided to set it aside for now and work on the UI. I was successful in recoloring walls, hiding modifiers, and stopping ARKit updates so the scene doesn't keep resetting. I thought this was good progress, but I was still frustrated with the main issue. I did fix a bug where the immersive space would not open when running solely on the Vision Pro, even though it worked when running from the laptop via cable, so now it runs no matter what.
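The wall recoloring itself is a small material swap. A sketch, assuming each wall plane's first material is a RealityKit `PhysicallyBasedMaterial`:

```swift
import RealityKit
import UIKit

// Recolor a selected wall entity by replacing its material's base color tint.
func recolor(wall: ModelEntity, to color: UIColor) {
    guard var material = wall.model?.materials.first as? PhysicallyBasedMaterial
    else { return }
    material.baseColor = .init(tint: color)
    wall.model?.materials[0] = material
}
```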
I was stuck trying multiple ideas to fix the issue of the wall fighting with the mesh that visionOS creates. I tried an example I found on an Apple dev page, used that example directly in my program, and made the wall 3-dimensional, yet nothing seemed to work. I added a few sliders: a z-axis offset, roughness, and clearcoat. Roughness (matte vs. glossy) and clearcoat (reflectivity) I implemented in preparation for changing those features of the wall color. The z-offset pushes the plane slightly away from the wall so the rendered wall does not interfere with the real wall in world space. The issue is still present in that the mesh doesn't look very good, but it no longer paints over objects in the world. While stuck on that, I did finally implement the corner balls for manually adjusting the walls, and they stay where the user puts them. I am hoping to finally figure out the walls and hiding them behind real-world items.
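The three slider-driven tweaks can be sketched as one function; the parameter values come from the SwiftUI sliders, and the property names are the standard `PhysicallyBasedMaterial` ones (this is an approximation of my setup, not the exact code):

```swift
import RealityKit

// Apply the slider values to a wall plane: PBR surface properties plus a
// small z offset so the painted plane stops fighting the scene mesh.
func applyWallSettings(wall: ModelEntity,
                       zOffset: Float,
                       roughness: Float,
                       clearcoat: Float) {
    guard var material = wall.model?.materials.first as? PhysicallyBasedMaterial
    else { return }
    material.roughness = .init(floatLiteral: roughness)   // matte vs. glossy
    material.clearcoat = .init(floatLiteral: clearcoat)   // reflective top coat
    wall.model?.materials[0] = material
    // Nudge the plane off the real wall along its local z axis.
    wall.position.z += zOffset
}
```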
The mesh planes are getting better. I have been comparing them with the debugger mesh and the surfaces it detects, and I think I am doing about the best I can. The mesh is not nearly as clean and precise as I would like, but I think that is the hardware holding me back: the Vision Pro gives me what it sees, and I am doing what I can with it. I finished implementing the material modifications to help with realism. There doesn't seem to be anything built in for corner shadows, so I might have to make an occlusion map image to manually shade the corners, though that will be difficult given the variety of corner types and how to shade them. I will look into that a bit more. Regardless, specular, clearcoat, and roughness are all implemented and help with dynamic lighting in a way; the only lighting gap is the corners I just mentioned. I attached some videos I missed recently showing my progress and other things I have tried and how they look.
This week was some smaller items focused on cleanup as we close in on the end of the semester (crazy stuff, actually; time flew by so fast). I added the ability to color all walls at once. I might also add the ability to select multiple walls at once for coloring, for cases with different rooms. I also worked on a gesture to pull the walls in and out instead of using the slider in the UI (more interactive). The next task I have started is getting doors better occlusion on walls; I have some ideas to try to make them carve out of the wall better. We will see my progress with that this next week, and I will try to post another video soon with the update.
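The pull-in/pull-out gesture can be sketched with a targeted drag, keeping only the depth component so the wall slides along its offset axis; a rough approximation, assuming the walls are input-targetable entities with parents:

```swift
import SwiftUI
import RealityKit

// Drag a wall plane in and out of the real wall instead of using the slider.
struct WallDragView: View {
    var body: some View {
        RealityView { content in
            // Wall entities added elsewhere by the plane-detection code.
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    guard let parent = entity.parent else { return }
                    // Convert the 3D drag location into the wall's parent space,
                    // then apply only the z (depth) component.
                    let target = value.convert(value.location3D,
                                               from: .local, to: parent)
                    entity.position.z = target.z
                }
        )
    }
}
```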
With what I have been working on with doors and windows, I came to the conclusion that my code works, but it doesn't work in reality, and it isn't my fault. (FYI, this is partially serious and partially a joke.) I took a look at the debugger because doors and windows were still not being occluded; the wall was still covering them up. It doesn't seem like the debugger/headset itself is even recognizing the doors and windows correctly, if at all. I am unsure how its recognition works, and one of the videos below demonstrates this. Besides this, I solved a bug where duplicate corner balls made moving them around unreliable, and another bug where the walls were not all being dragged at once when the checkbox is selected. I also made an attempt at shadowing with ambient occlusion, which you can see working a little better than before in the current version.
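The debugging boils down to logging what classification ARKit actually assigns to each detected plane; a sketch of that check, using the standard `PlaneAnchor` classification cases:

```swift
import ARKit

// Log every plane classification ARKit reports, to confirm whether doors
// and windows are being recognized at all.
func logPlaneClassifications(_ provider: PlaneDetectionProvider) async {
    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        switch anchor.classification {
        case .door:   print("door \(anchor.id)")
        case .window: print("window \(anchor.id)")
        case .wall:   print("wall \(anchor.id)")
        default:      print("other (\(anchor.classification)) \(anchor.id)")
        }
    }
}
```

If no `.door` or `.window` anchors ever appear, the occlusion code has nothing to carve out, which matches what the video shows.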
I found a provider similar to Scene Reconstruction and Plane Detection called the Environment Light Estimation Provider. It provides a real-time estimated ambient intensity and color temperature that ARKit derives from the camera feed. I am going to try to use this to drive the image-based lighting (IBL) probe on the scene, so all PBR walls automatically respond to the actual room brightness. It is a work in progress, and it hasn't been fun so far; I have been debugging it. I think this might be pretty helpful with lighting. Otherwise, I have just been prepping for my presentation on Thursday!
Update: I had too many problems with it and not enough time, so I dropped that approach but made one change that is actually really cool. I updated the planes so that RealityKit lights the PBR-rendered wall using the real-world environment captured by the Vision Pro. This was super sick, because it adjusts the lighting of the wall based on how bright or dark the room is. Lighter room = brighter, more vibrant wall. Darker room = darker, more muted wall. And it is responsive: it updates within a few seconds, and it does so incrementally based on the room lighting. It also seems to shift the color a little based on the warmth or coolness of the light source it detects. Not exactly helping ambient occlusion, but still very cool nonetheless.
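One way to get this behavior, assuming visionOS 2's `EnvironmentLightingConfigurationComponent` (my exact approach may differ), is to let the captured room environment fully weight the entity's lighting:

```swift
import RealityKit

// A sketch of letting real-world lighting drive the wall's PBR shading.
// A weight of 1 lets the captured room environment fully light the entity.
func enableRealWorldLighting(for wall: ModelEntity) {
    wall.components.set(
        EnvironmentLightingConfigurationComponent(environmentLightingWeight: 1.0)
    )
}
```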