Well, it seems as though the semester is drawing to a close and things are due this week. Looking back at this semester, I still cannot believe that I did it. Now that the presentations and defenses are done and I'll be turning in my capstone tomorrow, I'm thinking of all I have learned this semester. I feel very accomplished and am happy to say that I am really looking forward to turning this capstone in.
Last Wednesday I presented my project, and I honestly thought it went very well. The program ran the way I hoped it would. Now it's just a matter of documenting my code and going to the defense.

Well, a lot has happened since my last post. I implemented buttons to start and stop the live video feed, added a log button to view confirmed faces of people entering the room, and added a start-processing button. The start-processing button runs Derek's edited capstone (what I use for facial detection) in a loop over all the snapshots captured by my program, discarding the images that do not have faces and moving the ones that do into a folder that the log button then retrieves. This past week has been very productive thanks to Dr. McVey, who helped troubleshoot various issues and brainstormed new ways to do things.

Since my last post I have met with Dr. Pankratz, and a few things have been determined. First of all, we concluded that I have been spending way too much time on getting Ozeki to work with my program, which has wasted a lot of time. This being said, I have dropped trying to make video capture work. What I am instead working on is putting the snapshot on a timer and using those images to determine if someone has come into the room. Once that is accomplished, my program will store the important information, such as the person's face and arrival date and time. What I am currently implementing is the timer, along with editing Derek H.'s capstone, given to me by Dr. Pankratz, for the facial recognition.

Since my last post I have spent a considerable amount of time on the Ozeki SDK. I managed to get it to agree with Visual Studio so that it will finally let me compile and run my program, but I am still having difficulty using the library's objects to connect to my camera. So far I have gotten it to make video files, but they either do not save correctly and end up as empty files, or they crash the program.
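The "start processing" step described above can be sketched in a few lines. This is a minimal Python illustration of the idea (the actual project is in C#); `contains_face` is a placeholder for the facial-detection call, which is not shown here, and the `.jpg` extension is an assumption:

```python
import shutil
from pathlib import Path

def process_snapshots(snapshot_dir, log_dir, contains_face):
    """Loop over captured snapshots: move images that contain a face
    into the log folder, and discard the rest.

    `contains_face` is a callable (path -> bool) standing in for the
    facial-detection step.
    """
    log = Path(log_dir)
    log.mkdir(parents=True, exist_ok=True)
    kept = []
    for img in sorted(Path(snapshot_dir).glob("*.jpg")):
        if contains_face(img):
            # A confirmed face: move it where the log button can find it.
            shutil.move(str(img), str(log / img.name))
            kept.append(img.name)
        else:
            # No face detected: get rid of the snapshot.
            img.unlink()
    return kept
```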
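The snapshot-on-a-timer idea could look something like the following Python sketch (again, the real project is C#, and `take_snapshot` is a placeholder for whatever the camera SDK's capture call turns out to be):

```python
import threading

class SnapshotTimer:
    """Call `take_snapshot` every `interval` seconds until stop() is called.

    `take_snapshot` stands in for the camera capture routine.
    """
    def __init__(self, interval, take_snapshot):
        self.interval = interval
        self.take_snapshot = take_snapshot
        self._timer = None
        self._stopped = False

    def _tick(self):
        if self._stopped:
            return
        self.take_snapshot()
        # Re-arm the timer for the next capture.
        self._timer = threading.Timer(self.interval, self._tick)
        self._timer.daemon = True
        self._timer.start()

    def start(self):
        self._stopped = False
        self._tick()

    def stop(self):
        self._stopped = True
        if self._timer:
            self._timer.cancel()
```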
So based on this, once I correctly get the library to work with my code, I will not only be able to save video but also incorporate motion and facial detection, as those are other features in the library.
Since demos I have been working mainly on facial detection. After talking with Dr. Pankratz, I heard that Kat found a way that works relatively well, so I reached out to her. She sent me the tutorial she used; so far I do not know how useful it'll be, since it uses Android Studio, but regardless I'm still giving it a look. Besides that, I am still trying to get the Ozeki SDK to work with a local network instead of an outward-facing URL, which will not work with the school's network. I have also been trying to figure out motion detection with Carl; we have had some luck on the camera side, but getting it connected to our software is becoming a challenge.
So far this week I have gotten a live feed that can be logged into, basic controls for the camera, and a snapshot of the screen, which Carl figured out and helped me get working with my system. It saves the screenshot named by the date and time it was taken. My next two objectives are still to have the snapshot taken when motion is detected and to get the facial detection working from the SDK once a snapshot is taken. For the rest of this week I will be focusing on connecting motion detection to my C# project and saving the image only if a face is detected.
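Naming a screenshot by the date and time it was taken can be sketched as follows. This is a hedged Python illustration (the project is C#), and the `snapshot_` prefix and `.png` extension are my placeholders, not the project's actual naming scheme:

```python
from datetime import datetime

def snapshot_filename(taken_at=None, ext="png"):
    """Build a filename from the capture date and time, e.g.
    'snapshot_2015-04-20_14-30-05.png'."""
    taken_at = taken_at or datetime.now()
    # Colons are not safe in Windows filenames, so use dashes throughout.
    return f"snapshot_{taken_at:%Y-%m-%d_%H-%M-%S}.{ext}"
```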
During this spring break I found an interesting open-source SDK called Ozeki that has facial recognition built in. So far I have downloaded it and begun to play around with it, and it looks very promising. The issue I am currently working on is connecting it to the camera's live feed, which is proving to be difficult. Once I do figure out the live-feed connection, the SDK supports recording video and other features that will be very useful and should help me with most of my project. Overall, I have found that this project has involved a ton of research and setbacks. Finding something promising and then realizing that it won't work has been the largest obstacle and has taken me the most time. So hopefully this new SDK pans out.
This week has focused on researching facial detection. Upon meeting with Dr. Pankratz, it was decided that, since the Microsoft Face API contacts the cloud and we are not sure what information it takes, other solutions should be explored. So with this in mind, this week has been spent researching open-source facial detection. So far I have found two possible solutions, OpenCV and dlib, and I have been comparing them to see which one would be better suited.
This week I have started looking at C# code. I found a Microsoft Face API sample written in C#. It looks very promising, and using it I should be able to detect a person's face, even determining if a similar face comes into the camera's view so I don't count the same person twice. I am having some difficulty getting it to fully cooperate with me, but it's getting there. I am also going to start writing a little code to at least get a live feed from the camera. Hopefully soon I'll be able to recognize a person's face in the camera in real time.
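The "don't count the same person twice" idea boils down to comparing a new face against the faces already seen. Here is a minimal Python sketch of that logic under the assumption that faces are represented as feature vectors; the 0.6 threshold and the vectors themselves are placeholders, not values from the Face API:

```python
import math

def is_new_face(candidate, seen_faces, threshold=0.6):
    """Decide whether a face is a new person or a repeat.

    `candidate` and each entry of `seen_faces` are feature vectors
    (lists of floats) assumed to come from some face-recognition model.
    """
    for known in seen_faces:
        # Euclidean distance between the two feature vectors.
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(candidate, known)))
        if dist < threshold:
            return False  # close enough to a face we've already counted
    return True
```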