In the past few weeks, I’ve solved a lot of the previous “bugs” I was encountering. I am now able to run my sample OpenCV program on my own computer and successfully send multiple socket commands to Scratch. Here’s a summary of how I fixed my errors:
Error: Unable to send multiple commands to Scratch via a Socket connection
When I first wrote a Python program to send commands to Scratch, I could successfully send a command multiple times in a row. After translating the program to C++ (using a third-party socket library), I found I could only send one command to Scratch; any subsequent commands were ignored.
Solution: I started debugging by sending my Scratch commands to localhost and examining them on the command line. At first I noticed nothing wrong: the commands looked identical to what I expected. I then modified my earlier Python program to send its commands to localhost as well. Those commands appeared tab-separated, whereas the C++ commands each arrived on a new line. It turned out that the socket library I was using automatically appended a newline to everything written to a socket. Removing that newline fixed the issue.
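To make the fix concrete, here is a small sketch in Python (my actual fix was in the C++ version, so treat this as an illustration, not my code). Scratch 1.x's remote sensor protocol frames each message with a 4-byte big-endian length header followed by the message bytes, with no trailing newline – a stray newline appended by the socket library breaks that framing for every message after the first. The port number 42001 is Scratch's remote-sensor port.

```python
import socket
import struct

def scratch_message(command: str) -> bytes:
    """Frame a command for Scratch's remote sensor protocol:
    a 4-byte big-endian length header followed by the message
    bytes -- and, crucially, no trailing newline."""
    payload = command.encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def send_commands(host: str, port: int, commands) -> None:
    # Scratch 1.x listens on port 42001 when remote sensors are enabled.
    with socket.create_connection((host, port)) as sock:
        for cmd in commands:
            sock.sendall(scratch_message(cmd))

# Example framing: a broadcast command.
msg = scratch_message('broadcast "jump"')
```

If the library tacks a `\n` onto each write, the length header no longer matches the payload and Scratch silently drops everything that follows.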
Error: Strange runtime exception of type std::string
Solution: This error only occurred when I attempted to send data to an invalid socket connection. I just had to wrap the socket portion of my program in a try/catch block.
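The same defensive pattern, sketched in Python for consistency with the example above (the actual fix was the C++ try/catch equivalent): catch the error from a bad socket write and fail gracefully instead of crashing.

```python
import socket

def safe_send(sock: socket.socket, data: bytes) -> bool:
    """Try to write to a socket; report failure instead of crashing.
    (In C++ this is a try/catch around the socket writes.)"""
    try:
        sock.sendall(data)
        return True
    except OSError as err:
        print(f"socket write failed: {err}")
        return False

# Writing to a closed socket fails gracefully instead of raising.
s = socket.socket()
s.close()
ok = safe_send(s, b"hello")
```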
Computer Vision updates
This week, I also integrated motion tracking into my program. I attempted to write a program that would track only moving, skin-colored objects. Below is an image that shows all steps in my process.
First, I capture an image from the webcam. Second, I convert the image from RGB to HSV. Then I apply a Gaussian blur to the HSV image to reduce noise. After blurring, I threshold the image to keep only HSV values in a skin-color range. These values definitely need some tweaking – the current range captures skin color, but also the color of my walls and other pinkish tones. Next, I do background subtraction and blob detection to find moving blobs in the image. Finally, I try to keep only the skin-colored blobs.
At the end of my experiments, I feel that with some tweaking I am ready to attempt gesture recognition with Hidden Markov Models. However, I have begun to question the direction of my project. I first decided to focus on gesture recognition and Scratch because I felt that gesture recognition would be something Scratchers would be able to create with. Being able to use gesture recognition would allow many Scratchers to make awesome, AI-based games.
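For context on the HMM step: gesture recognition with Hidden Markov Models typically trains one model per gesture, scores an incoming observation sequence against each model, and picks the likeliest. Below is a minimal forward-algorithm sketch with made-up toy numbers – a sanity-check of the scoring machinery, not my planned implementation.

```python
def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: probability of an observation sequence
    under a discrete HMM.
    obs: list of observation symbol indices
    pi:  initial state distribution, pi[s]
    A:   transition matrix, A[s][s2] = P(s2 | s)
    B:   emission matrix, B[s][o] = P(o | s)
    """
    n = len(pi)
    # alpha[s] = P(obs[0..t], state at time t = s)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * A[s][s2] for s in range(n)) * B[s2][o]
            for s2 in range(n)
        ]
    return sum(alpha)

# Toy 2-state model (made-up numbers). To classify a gesture, score
# its observation sequence under each gesture's model and pick the
# model with the highest likelihood.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
score = forward_likelihood([0, 1, 1], pi, A, B)
```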
Now I am thinking that I should examine several more AI techniques and assess whether they would be appropriate for educating children. I feel that Scratch has many opportunities for AI that doesn't require intensive processing and tweaking, but I'm also questioning AI's appropriateness as an open-ended creative technology.
Constructivist technologies like Scratch are designed to act more like a “paintbrush” than a screen (see Mitch Resnick’s multiple papers about this metaphor). Scratch provides the basic tools to create programs, but nothing excessive. Many techniques in AI require complicated processing, much of which need to be “blackboxed” to make AI accessible. Could I integrate AI into a Scratch-like language in a non-specific, not overly simplified way to preserve its paintbrush-like properties? That remains to be seen. What I do know is that computer vision may not be the place to start, though it would have high value for game creation.
It's difficult to define how AI should be presented to children in a constructivist context. How much emphasis should be placed on learning versus on providing a new, open-ended construction tool? Is AI essential enough that it should be available to non-computer scientists (notably children)? I'm not 100% sure on any of these questions… I'll try to address some of them in my paper.