Archives for the month of: March, 2012

In the past few weeks, I’ve solved a lot of the previous “bugs” I was encountering.  I am now able to run my sample OpenCV program on my own computer and successfully send multiple socket commands to Scratch.  Here’s a summary of how I fixed my errors:

Error: Unable to send multiple commands to Scratch via a Socket connection

When I first wrote a Python program to send commands to Scratch, I was able to successfully send a command multiple times in a row.  After translating my Python program to C++ (and using a 3rd party Socket library), I noticed that I was now only able to send one command to Scratch.  Any subsequent commands were ignored.

Solution:  I started debugging by sending my Scratch commands to the localhost and examining them on the command line.  At first, I noticed nothing wrong.  The commands  looked identical to what I expected.  I then modified my previous Python program to send commands to the localhost.  These commands appeared to be tab-separated, whereas the C++ commands were each on a new line.  I figured out that the Socket library I was using automatically appended a new line to all information being written to a Socket.  I was able to easily fix my issue by removing this new line.

Error:  Strange runtime exception of type std::string

Solution:  This error only occurred when I attempted to send data to an invalid socket connection.  I just had to enclose the socket portion of my program in a try/catch block.

Computer Vision updates

This week, I also integrated motion tracking into my program.  I attempted to write a program that would track only moving, skin-colored objects.  Below is an image that shows all steps in my process.

First, I capture an image from the webcam.  Second, I convert the webcam image from RGB to HSV.  Then, I sharpen the HSV image using a Gaussian blur.  After sharpening, I filter for a range of HSV values to isolate skin color.  These values definitely need some tweaking – the current range captures skin color, but also the color of my walls and other pinkish tones.  After this step, I do some background subtraction/blob detection to find moving blobs in my image.  Then, I attempt to keep only the skin-colored blobs.

At the end of my experiments, I feel that with some tweaking I am ready to attempt gesture recognition with Hidden Markov Models.  However, I have begun to question the direction of my project.  I first decided to focus on gesture recognition and Scratch because I felt that gesture recognition would be something Scratchers would be able to create with.  Being able to use gesture recognition would allow many Scratchers to make awesome, AI-based games.

Now I am thinking that I should examine several more AI techniques and assess whether or not I feel they would be appropriate for educating children.  I feel that Scratch has many opportunities for AI that doesn’t require intensive processing and tweaking, but I’m also questioning AI’s appropriateness as an open-ended creative technology.

Constructivist technologies like Scratch are designed to act more like a “paintbrush” than a screen (see Mitch Resnick’s multiple papers about this metaphor).  Scratch provides the basic tools to create programs, but nothing excessive.  Many techniques in AI require complicated processing, much of which need to be “blackboxed” to make AI accessible.  Could I integrate AI into a Scratch-like language in a non-specific, not overly simplified way to preserve its paintbrush-like properties?  That remains to be seen.  What I do know is that computer vision may not be the place to start, though it would have high value for game creation.

It’s difficult to define how AI should be presented to children in a constructivist context.  How much emphasis should be placed on learning vs. how much on providing a new, open-ended construction tool?  Is AI essential enough that it should be available to non-computer scientists (notably children)?  I’m not 100% sure on any of these issues…I’ll try to address some of this in my paper.


This week, I finally managed to get some OpenCV working in C++.  Borrowing someone else’s laptop, I managed to write a program that captured skin-colored blobs and also sent data to Scratch in one go.  I’ll take a few screenshots when I get a chance!

Skin Segmentation

Initially, I attempted to use OpenCV’s dynamic skin detection example to filter skin color out of a video stream.  I experimented a bit with some of the detector’s parameters, but found that I was getting worse results than the results I got filtering in an HSV color space.  So, I went back to my previous method of skin detection.

Just as I did in my previous Python program from a few weeks ago, I was able to use OpenCV to transform an image into HSV color space and perform a basic skin filtering operation.  Instead of using static images like in the Python program I wrote, I was able to use images from a webcam.  I’m planning on doing a little tweaking of my filter values, but these seemed to work fairly well for a first pass.

Mat hueMask, satMaskLow, satMaskHigh;

// hue below 50 (OpenCV hue runs 0-179 after a BGR-to-HSV conversion)
threshold(frameChannels[0], hueMask, 50, 1, THRESH_BINARY_INV);

// saturation between 59 and 173 (roughly 0.23-0.68 on a 0-255 scale)
threshold(frameChannels[1], satMaskLow, 59, 1, THRESH_BINARY);
threshold(frameChannels[1], satMaskHigh, 173, 1, THRESH_BINARY_INV);

// skin = acceptable hue AND acceptable saturation
filterFrame = hueMask & satMaskLow & satMaskHigh;

I did notice that I seemed to have a bit of noise in my filtered image, but I think that may have been bad lighting conditions.  In any case, my HSV color segmenter seemed to perform better than the dynamic one from OpenCV…although it also detected a skin-colorish pillow.  Perhaps I’ll refine my filtering to only include *moving* skin-colored objects.

Blob Detection

The next step was to use OpenCV’s blob detector to grab the largest areas of skin in the webcam stream.   I was able to get this up and running and could adjust the parameters to include only fairly-large blobs.  I attempted to create KeyPoints of all blobs larger than a certain size and get the blobs’ radius.  The radius values didn’t seem to be correct, however (they all seemed small!).   Going to investigate this further.

Sending Data to Panther/Scratch

As I did previously, I used a nice socket library to send data from my C++ program to Scratch.  This seemed to be working, but I noticed a bug!  It seems like I am only able to send one command from C++ to Scratch – all subsequent commands are ignored.  I think this may be an issue with how I format/terminate my commands in C++.  I did not have this problem when I sent commands from a Python program to Scratch – maybe I am overlooking some subtlety of string formatting in C++?  I plan to investigate this further this week.

Next week is spring break, so I’ll have lots of time to work!

To Do

-Fix Scratch socket issue (must be able to send more than one message)

-Experiment with blob detection

-Track only *moving* blobs (background subtraction from previous frame)

-Get this running on *my* computer (have an error, but getting that sorted out tonight with some help)

 

Short update this week!

After having so much trouble with OpenCV for C++, I started exploring a few other options.  I started by looking at OpenCV for Processing.  Unfortunately, after installing the OpenCV libraries, I was unable to run any of the sample code.  Many other frustrated Windows users had experienced the same problem I had, and I found no solutions.

The Processing website had several other libraries for blob detection and other vision-related processes.  I installed a few of these and was able to run their example code.  These examples were much more lightweight than OpenCV and would most likely be sufficient for my purposes.

Now, I’m in the process of reworking blob detection examples to detect hands.