Weeks 1-2
The past week and a half has been spent building the foundation for the rest of the semester. I have decided to develop in Visual Studio with a Windows Forms application. I have been able to connect the eye tracker to my machine and get some coordinates back, and I have also been able to create some graphical interfaces with WinForms. The next step is to combine these two pieces and detect when someone is looking at a certain portion of the screen. I will start with a very simple image, probably just splitting the screen into quadrants and giving feedback when the gaze coordinates land in one. If I can do that, it will be a great step toward a more complicated search for objects.
Week 2
Since the last blog post, I have been attempting to show that I looked at an object, without pressing a button. Using the System.Drawing.Graphics library, I can draw on the screen. I set up a quick test screen with four corners and used the gaze values coming from the eye tracker to calculate which corner should be highlighted. The coordinates range from 0 to .999 on each axis. Next week I will try to find some more complicated images to use and place more intricate highlights over the objects. On the horizon is also recording the coordinate data and playing it back.
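The project itself is in C# with WinForms, but the corner calculation is pure arithmetic on the normalized gaze values, so here is a small Python sketch of the logic. The function name and quadrant numbering are my own, not from the actual code:

```python
def gaze_quadrant(x, y):
    """Map a normalized gaze sample (0 to .999 on each axis) to a
    screen quadrant: 0=top-left, 1=top-right, 2=bottom-left,
    3=bottom-right. Returns None for out-of-range samples, e.g.
    when the tracker loses the eyes."""
    if not (0.0 <= x < 1.0 and 0.0 <= y < 1.0):
        return None
    col = 1 if x >= 0.5 else 0   # right half of the screen
    row = 1 if y >= 0.5 else 0   # bottom half of the screen
    return row * 2 + col

# A gaze sample near the bottom-right corner of the screen:
print(gaze_quadrant(0.9, 0.8))  # → 3
```

In the WinForms version, the returned quadrant would pick which corner to highlight with the Graphics calls on each timer tick.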
Week 3
Last week, I was able to prove that I at least glanced at an object. This was good, but I needed to prove that someone looked at an object, which has been quite a bit tougher. I am still grappling with the relationship between the eye tracker coordinates, which are based on the entire screen, and the coordinates of the hidden objects, which are currently bound to the window of the application (and possibly, later, to the image within the window). Everything is easy to keep track of if the window is fullscreen, and I don't think accounting for a changing window size is as important as the structure of the game right now. Speaking of which, I have been able to create a hiddenObject class which has an x, y, radius, and Lookcounter, which keeps track of how many ticks the gaze has been within its radius. When in fullscreen, I have been able to match the coordinates and make a shape appear over an object after looking at it for about a second and a half.
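The glance-versus-look distinction comes down to counting how long the gaze dwells inside an object's radius. A Python sketch of the hiddenObject idea follows; the names are illustrative, and resetting the counter when the gaze leaves the radius is my assumption about how to reject glances, not necessarily what the actual class does:

```python
import math

class HiddenObject:
    """A circular hotspot that counts consecutive timer ticks
    the gaze spends inside its radius."""
    def __init__(self, x, y, radius, ticks_to_find):
        self.x, self.y, self.radius = x, y, radius
        self.ticks_to_find = ticks_to_find  # e.g. ~1.5 s worth of ticks
        self.look_counter = 0
        self.found = False

    def update(self, gaze_x, gaze_y):
        """Call once per tick with the current gaze point in pixels."""
        if self.found:
            return True
        if math.hypot(gaze_x - self.x, gaze_y - self.y) <= self.radius:
            self.look_counter += 1
        else:
            self.look_counter = 0  # a glance away restarts the dwell
        if self.look_counter >= self.ticks_to_find:
            self.found = True      # time to draw the shape over it
        return self.found

obj = HiddenObject(x=100, y=100, radius=30, ticks_to_find=3)
for _ in range(3):
    obj.update(110, 95)  # three consecutive ticks inside the radius
print(obj.found)  # → True
```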
Week 4
I know last week I said that the coordinates were "not as important as the structure of the game right now," but that was actually the main development this week. In fact, I would say that the coordinate issue is resolved now. The application opens fullscreen with no title bar, so it is all that the user sees. The image now sits smaller inside the client window, off to one side, so that the key of objects can live off to the right. This seems like a reasonable way to organize the information and keep the eye tracker coordinates and the client coordinates consistent.

I also found some good hidden object pictures to use; up until now I have been using a goofy test image (seen below). The new pictures are from Highlights magazine, and they work especially well with what I am planning because they are black-and-white line art. This is good because I want the objects to become colored in when they are found, both on the image and in the key. I will need to create the colored-in images and line them up with the base image so they can appear seamlessly. This will be tedious, and I am looking for ways to make it less so. One option is a back-end application that registers the coordinates of my clicks when I click on objects, giving me a list of coordinates for the objects. That will take some finagling but would be useful.
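With the app borderless and fullscreen, the client area equals the screen, so a normalized gaze sample only needs scaling by the screen size and an offset by the image's position to land in image-local coordinates. A Python sketch of that mapping, with all names and numbers illustrative rather than taken from the project:

```python
def gaze_to_image_coords(gx, gy, screen_w, screen_h,
                         img_left, img_top, img_w, img_h):
    """Convert a normalized gaze sample (0 to .999 across the whole
    screen) to pixel coordinates inside the image. Assumes a
    borderless fullscreen window, so client == screen. Returns None
    when the gaze falls outside the image, e.g. over the key panel."""
    px = gx * screen_w          # screen pixels
    py = gy * screen_h
    ix, iy = px - img_left, py - img_top  # shift into image space
    if 0 <= ix < img_w and 0 <= iy < img_h:
        return (ix, iy)
    return None

# Gaze at the middle-left of a 1920x1080 screen, image at (100, 90):
print(gaze_to_image_coords(0.25, 0.5, 1920, 1080,
                           100, 90, 1200, 900))  # → (380.0, 450.0)
```

The same image-local coordinates could then feed the hidden-object checks, and a click-recording back end would only need to log these (ix, iy) pairs to build the object list.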