Weekly Blog

The documentation of every step along my journey to create this project.

Week 01 2/2/2026

Getting Started

The first week of work consisted of a few things. First, I needed to do some housekeeping and install software on the MacBook I was given in order to get started on my project. Second, I completed my capstone website so that I can post updates and blog about my work. Lastly, I focused on finding documentation and tutorials to learn how to program in Swift for visionOS. This gave me a good idea of what to work on and learn first so that I can get rolling on the main project.

Week 02 2/8/2026

Learning Swift

This week was spent working on small projects and going through documentation to learn Swift and get familiar with the simulator and UI in Xcode. I completed two small applications: a chatbot app that dealt with simple UI and text boxes on the screen, and a small weather-display app; both helped me understand variables, syntax, properties, and more about Swift. Once that was done, I started working through visionOS documentation, following a tutorial for a visionOS app built around spatial computing. This helped me get familiar with Swift syntax, with developing a visionOS application, and with the documentation Apple has available.

Week 03 2/15/2026

Learning Swift Pt. 2

I spent my time this week working through the visionOS tutorial applications covering ornaments, multiple windows, and 3D models in the reality space. These projects helped me round out a general understanding of developing applications for Apple Vision Pro. I was then able to actually hook the headset up to my laptop and get a real look at what one of my simple applications looks like on the actual Apple Vision Pro headset. This gives me a good foundation going into next week, when I start my project.

Week 04 2/21/2026

Identifying Walls

Very awesome and cool progress this week. I started by working on Kyle's old capstone project and "successfully" transferred it from an iOS project to a visionOS project. It wasn't an exact match: I was able to carry over the UI, but the main function of the app still seemed to have some missing pieces.

Separately, I made sure that all of the world-sensing permissions were set up and created the ARKit virtual space. Using the debugger, I was able to see the wireframe around everything, including the built-in surface and object detection, which let me see the walls and floors it identified as well as a wireframe mapping out the room. Once I had this, I took the walls it detected and created a red ball that marked the anchor and center point of each wall. I was then able to use some math to paint a transparent plane over the area it detected as the wall. This part was super finicky and did not do a great job of covering the entire wall, so I think a good next step will be adding a way to manually adjust where the wall is.
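
The wall-detection steps above can be sketched roughly like this, using visionOS's `ARKitSession` and `PlaneDetectionProvider` APIs together with RealityKit. This is a minimal illustration of the technique, not my actual project code: the class and entity names are made up, and the sphere radius and overlay color are placeholder values.

```swift
import ARKit
import RealityKit

@MainActor
final class WallDetector {
    let session = ARKitSession()
    // Only look for vertical planes, since we care about walls.
    let planeDetection = PlaneDetectionProvider(alignments: [.vertical])
    // Parent entity added to the RealityView content elsewhere in the app.
    let rootEntity = Entity()

    func run() async {
        do {
            // Requires the world-sensing permission to be granted.
            try await session.run([planeDetection])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        // Stream of plane anchors as ARKit discovers and refines them.
        for await update in planeDetection.anchorUpdates {
            guard update.anchor.classification == .wall else { continue }
            switch update.event {
            case .added, .updated:
                addOrUpdateMarker(for: update.anchor)
            case .removed:
                rootEntity.findEntity(named: "\(update.anchor.id)")?.removeFromParent()
            }
        }
    }

    private func addOrUpdateMarker(for anchor: PlaneAnchor) {
        // Replace any stale entity for this anchor.
        rootEntity.findEntity(named: "\(anchor.id)")?.removeFromParent()

        let container = Entity()
        container.name = "\(anchor.id)"
        // Position the container at the wall anchor's pose in world space.
        container.transform = Transform(matrix: anchor.originFromAnchorTransform)

        // Red ball marking the anchor/center point of the wall.
        let sphere = ModelEntity(
            mesh: .generateSphere(radius: 0.05),
            materials: [SimpleMaterial(color: .red, isMetallic: false)]
        )

        // Semi-transparent plane sized to the detected wall extent.
        let extent = anchor.geometry.extent
        let plane = ModelEntity(
            mesh: .generatePlane(width: extent.width, height: extent.height),
            materials: [SimpleMaterial(color: .blue.withAlphaComponent(0.3),
                                       isMetallic: false)]
        )
        // Offset/orient the plane within the anchor's coordinate space.
        plane.transform = Transform(matrix: extent.anchorFromExtentTransform)

        container.addChild(sphere)
        container.addChild(plane)
        rootEntity.addChild(container)
    }
}
```

The "some math" mentioned above largely lives in the two transforms here: `originFromAnchorTransform` places the anchor in world space, and `anchorFromExtentTransform` positions the extent rectangle within the anchor, which is why the overlay can drift when the detected extent doesn't cover the whole wall.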