VIDEO CHESS TRACKING

Post #1 (2/7/2026)

So far in my capstone project, I have focused on establishing a strong foundation for my chess board tracking application. After researching different options, I decided to use React as the primary framework for building the app because it is well documented and enables my application to run on various platforms. I found a YouTube tutorial to begin learning how to code with React, as I have never used it before. The video is around 11 hours long, and my goal by the end of this week is to understand the framework well enough to get access to the camera.

Post #2 (2/9/2026)

I have now started to work more in depth on my plan for this project and how I would like my interface to work, as well as the flow of my data and interface.
See below for my current plan for the in-game UI (User Interface).
User Interface

Post #3 (2/16/2026)

This week I gained access to the camera through React and began looking ahead to next steps. We determined that a good starting point would be analyzing still images, then dealing with capturing frames from the live camera later. My goal for this week is to begin experimenting with tracking these images, as well as recording sample videos of me playing chess so that we can test different possible camera angles and catch potential issues that we haven't thought about yet. 2/16 interface

Post #4 (2/23/2026)

I am now diving into the meat of my project: image tracking. I have begun by coding in Python, a language I am much more familiar with than JavaScript, and will later transfer the same logic over to my JavaScript code. Currently I am able to detect the edges of the board by drawing Hough lines (see first image below). Now that I can identify the edges of the board, my next steps are to crop out all the unwanted parts of the image (see second image below) and begin to identify the squares within the remaining image.

You can look at this PDF for a more detailed description of how I created my Hough lines.

2/23 Hough Lines

2/23 Hough Lines

Post #5 (3/1/2026)

This week I have continued to make progress on both my app and my image-tracking logic. On the app side, I have added a board that visualizes the image recognition results (see below). The board takes a FEN and renders the position, which will let me easily link the visualized board to my image-tracking logic, as long as the tracking code returns the updated board's FEN. Here is the link for the YouTube video that I used to learn how to create the board. 3/1 interface
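The FEN hand-off between the tracker and the visual board boils down to expanding the first FEN field into an 8x8 grid. A minimal sketch of that expansion (the function name is my own illustration, not code from the app):

```python
def fen_to_grid(fen):
    """Expand the piece-placement field of a FEN into an 8x8 grid.

    Uppercase letters are white pieces, lowercase are black,
    and digits are runs of empty squares (shown here as '.').
    """
    placement = fen.split()[0]          # first field: piece placement
    grid = []
    for rank in placement.split("/"):   # ranks run from 8 down to 1
        row = []
        for ch in rank:
            if ch.isdigit():
                row.extend(["."] * int(ch))
            else:
                row.append(ch)
        grid.append(row)
    return grid

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
grid = fen_to_grid(start)
print(grid[0])  # ['r', 'n', 'b', 'q', 'k', 'b', 'n', 'r']
```

As long as the image-tracking side produces a grid like this and re-serializes it, the visual board stays in sync for free.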
Dr. McVey also got in a new chessboard for me to test on, and I have begun working with that board, as the contrast between its squares is much higher than on the board I had been testing with. I have added code so that the user is now able to select which edges are correct based on the intersections found through the detected edges. The first piece of code I added filters out all lines that are not either vertical or horizontal. I learned through trial and error that a vertical tolerance of 25 degrees and a horizontal tolerance of 5 degrees works best to pick up all wanted edges without adding too much extra noise. I then filter out lines that are too similar to one another, to further reduce noise on the image. Finally, I loop through all vertical and horizontal lines and find all possible intersections, which are then filtered so that we only show points within the image bounds.
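The filter-then-intersect steps above can be sketched in pure math on `(rho, theta)` line pairs (the form the Hough transform returns, where `theta = 0` is a vertical line). This is an illustrative sketch using the tolerances from the post, not my exact code:

```python
import numpy as np

V_TOL = np.deg2rad(25)   # vertical tolerance found by trial and error
H_TOL = np.deg2rad(5)    # horizontal tolerance

def classify(lines):
    """Keep only near-vertical and near-horizontal (rho, theta) lines."""
    vert, horiz = [], []
    for rho, theta in lines:
        if theta < V_TOL or theta > np.pi - V_TOL:     # near theta = 0 (vertical)
            vert.append((rho, theta))
        elif abs(theta - np.pi / 2) < H_TOL:           # near theta = 90 deg
            horiz.append((rho, theta))
    return vert, horiz

def intersection(l1, l2):
    """Solve x*cos(t) + y*sin(t) = rho for the two lines simultaneously."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    x, y = np.linalg.solve(A, np.array([r1, r2]))
    return x, y

def in_bounds(pt, w, h):
    """Drop intersection points that fall outside the image."""
    x, y = pt
    return 0 <= x < w and 0 <= y < h

# Example: a vertical line at x = 100 and a horizontal line at y = 50.
vert, horiz = classify([(100.0, 0.0), (50.0, np.pi / 2)])
pt = intersection(vert[0], horiz[0])
print(pt)  # approximately (100.0, 50.0)
```

Looping `intersection` over every vertical/horizontal pair and keeping only the `in_bounds` results gives the candidate corner points the user picks from.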
3/1 interface
Once the user has selected the four edges, the board is warped to appear flat so we can try to calculate the 8x8 grid of squares. This is where I am currently stuck and need to come up with ideas on how to calculate it.

Post #6 (3/8/2026)

This week has been a bit slower than others. I have mostly been reusing code that I have already written to complete the rest of the calibration process. To finish this process I need to go back and edit the function that generates the Hough lines so that, after each generation, it checks that a minimum number of lines was drawn. This should help with the calibration steps that need user input, by making sure the user has the correct choices to select from. I have found that different camera distances require very different settings, so adding this small check should solve the issue.
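The re-generation check described above could be a simple retry loop that relaxes the Hough threshold until enough lines come back. This is only a sketch of the idea: `detect_lines` is a hypothetical stand-in for the real Hough call, and all the numbers are illustrative.

```python
def detect_lines(threshold):
    """Stand-in for the real Hough-line call: pretend that lower
    thresholds return more candidate lines."""
    return list(range(max(0, 120 - threshold)))

def lines_with_minimum(min_lines=18, threshold=150, step=10, floor=30):
    """Re-generate Hough lines, lowering the threshold until at least
    min_lines are found (or the floor is reached)."""
    while threshold >= floor:
        lines = detect_lines(threshold)
        if len(lines) >= min_lines:
            return lines, threshold
        threshold -= step
    return detect_lines(floor), floor   # give up relaxing at the floor

lines, used = lines_with_minimum()
print(len(lines), used)
```

Because the loop adapts per frame, the same calibration code should work whether the camera is close to the board (strong, long edges) or farther away (weaker ones).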
I have also begun planning how I will actually detect piece movement. Between the CS professors we managed to come up with two solutions (one of which is much more preferable than the other). The first, and ideal, solution is to count the number of edges found in each square. By comparing previous runs to the current run, we should be able to determine which piece moved by seeing which square now has more edges than it did previously. If this does not work, we have a backup plan of training a neural net for piece recognition; however, the amount of data I would have to collect to implement that would be less than ideal.
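The edge-count comparison can be sketched as a diff of two 8x8 count grids: the square whose count dropped the most is where a piece left, and the square whose count rose the most is where it landed. The function name and counts below are illustrative assumptions, not measured data.

```python
import numpy as np

def detect_move(prev_counts, curr_counts):
    """Guess a move from per-square edge-pixel counts (8x8 arrays).

    The biggest decrease marks the source square, the biggest
    increase marks the destination square.
    """
    diff = curr_counts - prev_counts
    src = np.unravel_index(np.argmin(diff), diff.shape)
    dst = np.unravel_index(np.argmax(diff), diff.shape)
    return src, dst

# Toy example: a piece "moves" from row 6, col 4 to row 4, col 4 (e2-e4).
prev = np.zeros((8, 8), dtype=int)
prev[6, 4] = 120            # many edges where the piece sits
curr = prev.copy()
curr[6, 4] = 5              # that square is now mostly empty
curr[4, 4] = 115            # edges appear two ranks ahead
src, dst = detect_move(prev, curr)
print(src, dst)  # (6, 4) (4, 4)
```

Captures would need extra care in this scheme, since the destination square's count changes rather than starting near zero, which is one reason the neural-net backup stays on the table.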