Weeks 1-2
The past week and a half has been spent building the foundation for the rest of the semester. I have decided to develop in Visual Studio with a Windows Forms application. I have been able to connect the eye tracker to my machine and get some coordinates back, and I have also been able to create some graphical interfaces with WinForms. The next step is to combine these two pieces and try to determine that someone is looking at a certain portion of the screen. I will start with a very simple image, probably just the screen split into quadrants, and give feedback when the gaze coordinates overlap a quadrant. If I can do that, it will be a great step toward a more complicated search for objects.
Week 2
Since the last blog post, I have been attempting to show that I looked at an object, and to do it without pressing a button. Using the System.Drawing.Graphics library, I can draw on the screen. I set up a quick test screen with four corners and used the gaze values I was getting from the eye tracker to calculate which corner should be highlighted. The coordinates range from 0 to 0.999 on each axis. This next week I think I will find some more complicated images to use and place some more intricate highlights over the objects. On the horizon should also be recording the coordinate data and playing it back.
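The corner check can be sketched like this, assuming (as above) that the tracker reports normalized gaze coordinates in [0, 0.999] on each axis. The enum and method names are my own illustration, not the exact code from the app.

```csharp
using System;

public enum Corner { TopLeft, TopRight, BottomLeft, BottomRight }

public static class GazeQuadrant
{
    // Map a normalized gaze point to the quadrant it falls in.
    public static Corner FromGaze(double x, double y)
    {
        bool right = x >= 0.5;   // right half of the screen
        bool bottom = y >= 0.5;  // y grows downward in screen coordinates
        if (bottom)
            return right ? Corner.BottomRight : Corner.BottomLeft;
        return right ? Corner.TopRight : Corner.TopLeft;
    }
}
```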
Week 3
Last week, I was able to prove that I at least glanced at an object. This was good, but I needed to be able to prove that someone actually looked at an object. That has been quite a bit tougher. I am still grappling with the relationship between the eye-tracker coords, which are based on the entire screen, and the coordinates of the hidden objects, which are currently bound to the window of the application, and possibly later to the image within the window. Everything is easy to keep track of if the window is full-screen, and I don't think accounting for a changing window size is as important as the structure of the game right now. Speaking of structure, I have been able to create a hiddenObject class which has an x, y, radius, and Lookcounter, which keeps track of how many ticks the gaze has been within its radius. When in full-screen, I have been able to match the coordinates and make a shape appear over an object after looking at it for about a second and a half.
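The class described above might look something like this. This is a sketch under my own assumptions: the reset-on-leave behavior and the dwell threshold are guesses, and only the field names come from the post.

```csharp
using System;

public class hiddenObject
{
    public float X, Y, Radius;
    public int Lookcounter;

    // Called once per timer tick with the current gaze point,
    // already converted to the same coordinate space as X and Y.
    public void Update(float gazeX, float gazeY)
    {
        float dx = gazeX - X, dy = gazeY - Y;
        if (dx * dx + dy * dy <= Radius * Radius)
            Lookcounter++;   // gaze is inside the circle this tick
        else
            Lookcounter = 0; // assumption: reset when the gaze leaves
    }

    // e.g. ~90 ticks at 60 Hz would be about a second and a half
    public bool Found(int dwellTicks) => Lookcounter >= dwellTicks;
}
```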
Week 4
I know last week I said that the coordinates were "not as important as the structure of the game right now," but they were actually the main development this week. In fact, I would say that the coordinate issue is resolved now. The application opens in fullscreen with no title box; it is all that the user sees. The image is now smaller inside the client window, off to the side, so that the key of objects can live off to the right. This seems to be a reasonable way to organize the information and keep the eye-tracker coords and the coords of the client consistent. I also found some good hidden object pictures to use, because I have been using a goofy test image up until now (seen below). They are from Highlights magazine, and they work especially well with what I am planning to do because they are black-and-white line art. That is good because I want the objects to become colored in when they are found, both on the image and in the key. I will need to create the colored-in images and then line them up with the image so they can appear seamlessly. This will be tedious, and I am looking for ways to make it less so. One way may be to develop a back-end tool to register the coords of my clicks when I click on objects, so I end up with a list of coords for the objects. That will take some finagling but would be useful.
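The click-registration idea could be sketched as below: two clicks define opposite corners of an object's rectangle, which gets appended to a file. The class and file format here are hypothetical, just one way the tool could work.

```csharp
using System;
using System.Drawing;
using System.IO;

public class ObjectRecorder
{
    private Point? firstClick;        // null until the first corner is set
    private readonly string path;     // output file for object rectangles

    public ObjectRecorder(string path) => this.path = path;

    // Wire this up to the form's MouseClick event.
    public void OnClick(int x, int y)
    {
        if (firstClick == null)
        {
            firstClick = new Point(x, y); // wait for the second corner
            return;
        }
        var a = firstClick.Value;
        // Normalize so the rectangle works regardless of click order.
        int left = Math.Min(a.X, x), top = Math.Min(a.Y, y);
        int width = Math.Abs(a.X - x), height = Math.Abs(a.Y - y);
        File.AppendAllText(path, $"{left} {top} {width} {height}\n");
        firstClick = null;
    }
}
```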
Week 5
This past week was spent setting up an actual hidden object picture instead of my test image. I wanted it to be scaled to the height of the screen, so I computed a scale factor: the height of the screen divided by the height of the image. Multiplying the height and width of the image by this value scales it to the screen. Next, I wanted to try that method of getting the object coords, so I added to the mouse-click event some code which alternates between two states, first and second click, using the coords of the clicks to create hidden objects and add them to the current image. The x, y (top left), width, and height of the object rectangles are written to a file which is read in when the image is loaded. This is only meant to be done once. Then I wanted the colored-in versions of the rectangles to show up when you look at an object. This is where my scaling came back to bite me: the color image was not scaled the same as the normal image, so taking the same rectangle from the color image gave me offset, low-resolution pixels instead of the "same" coords. I basically had to undo the scaling when specifying what portion of the color image I wanted. This was quite annoying, but the coordinate mismatch is something I have had to keep in mind and will keep in mind going forward. This next week I need to work on the key, synchronizing it with the main image and organizing it properly.
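The scale factor and the "undo the scaling" step can be written out as follows. This assumes the displayed image is the source image scaled by screenHeight / imageHeight; a rectangle found in screen space has to be divided by that same factor before cropping the unscaled color image, or the crop lands on the wrong pixels. The helper names are mine.

```csharp
using System.Drawing;

public static class ImageScaling
{
    // Scale that fits the image to the screen height.
    public static float ScaleFactor(int screenHeight, int imageHeight)
        => (float)screenHeight / imageHeight;

    // Map a rectangle from screen (scaled) space back to source-image space.
    public static Rectangle ToImageSpace(Rectangle screenRect, float scale)
        => new Rectangle(
            (int)(screenRect.X / scale),
            (int)(screenRect.Y / scale),
            (int)(screenRect.Width / scale),
            (int)(screenRect.Height / scale));
}
```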
Week 6
I was able to set up the key this week, which required getting separate images of each object, both in black and white and in color. I create boxes of a specified size and scale each icon to that size, which makes it easy to stack the icons in a grid. The grid is sized to the screen by determining how many boxes can stack vertically. When the icons are painted, there is a check for whether the object has been found; if so, the color version is displayed. This way it is easy to track what you still need to find. My main objective for next week is to tune the method for determining whether a user has found an object, most likely utilizing some sort of coordinate history. Once that is done, I think I am done with the game-playing aspect of the application and should move on to the data-gathering part.
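The grid placement can be sketched as below, assuming square boxes filled top to bottom before starting a new column; the column-major fill order and names are my own guesses at the layout.

```csharp
using System.Drawing;

public static class KeyLayout
{
    // Top-left point of the box for icon `index`, given the key's
    // origin and a square box size. Rows per column comes from how
    // many boxes fit in the screen height.
    public static Point BoxPosition(int index, int originX, int originY,
                                    int boxSize, int screenHeight)
    {
        int rows = screenHeight / boxSize; // boxes that fit vertically
        int col = index / rows;
        int row = index % rows;
        return new Point(originX + col * boxSize, originY + row * boxSize);
    }
}
```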
Week 7
'Twas the week before Spring Break, and I feel good about where I am. I now have a reasonably complex way to determine that the user has found an object while mitigating the noise in the data. I store the previous thirty points of gaze data in a queue, and then store the average of those points' X and Y values in the fixationPoint variable, which is used instead of the raw gaze data. This should help with outliers. I also added a whole other page to look at; these can be chosen on the mainForm. My next step after the break is to be able to end the game and store the data for playback and analysis. A side project in the meantime could be a pause button, which should probably gray out the screen so you can't look at the image. Alright, see you later. I gotta go to Florida.
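The smoothing described above can be sketched like this: keep the last thirty gaze samples in a queue and use their average as the fixation point. Only fixationPoint and the window size come from the post; the class wrapper is illustrative.

```csharp
using System.Collections.Generic;
using System.Drawing;

public class FixationFilter
{
    private readonly Queue<PointF> samples = new Queue<PointF>();
    private const int WindowSize = 30;

    public PointF fixationPoint { get; private set; }

    // Feed each raw gaze sample through here instead of using it directly.
    public void AddSample(float x, float y)
    {
        samples.Enqueue(new PointF(x, y));
        if (samples.Count > WindowSize)
            samples.Dequeue(); // drop the oldest point

        float sumX = 0, sumY = 0;
        foreach (var p in samples) { sumX += p.X; sumY += p.Y; }
        fixationPoint = new PointF(sumX / samples.Count,
                                   sumY / samples.Count);
    }
}
```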