Archives for the month of: February, 2012

Today in Analog Circuits class, we built our first operational amplifier circuit.  We used an LM358 op amp to amplify the input from an electret microphone.   I’ve wanted to learn about op amps for a while now, so I’m excited that my circuit works!

 

Here I have a volunteer whistling into a mic.  The op amp outputs to a speaker.

 

OpenCV and Webcam

The first time I installed OpenCV, I intended to use it with Python and thus put the files in my Python27 directory.  At that point, the C++ sample files were able to use my on-board webcam to capture and display video.

After uninstalling/rebuilding OpenCV in a different directory, I was no longer able to get my webcam working.  I tried switching versions of OpenCV and rebuilding, installing OpenCV patches, and reinstalling my webcam drivers – nothing seemed to work.  Ultimately, I ended up using a different webcam (borrowed from the ER) and was able to show video.

C++

This week, I also started learning the basics of C++ and generally just refreshing my programming knowledge.  I went through a tutorial and did a couple of small sample programs.  I also started experimenting a tiny bit with C++ and OpenCV (once I finally got my webcam working).
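
To give a sense of what that experimenting looks like: grabbing and displaying webcam frames only takes a few lines with OpenCV’s C++ API.  Here’s a minimal sketch, assuming OpenCV 2.x and the default camera at index 0 (not my exact code, just the general idea):

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);              // open the default (on-board) webcam
    if (!cap.isOpened()) return -1;       // camera couldn't be opened

    cv::Mat frame;
    while (true) {
        cap >> frame;                     // grab the next frame
        if (frame.empty()) break;
        cv::imshow("webcam", frame);      // show it in a window
        if (cv::waitKey(30) >= 0) break;  // quit on any key press
    }
    return 0;
}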

Sending Data to Panther/Scratch

In order to use OpenCV with Scratch (or any modified versions of Scratch, like Panther), I needed to find a way to transmit data from a C++ program to a Scratch program.  Handily, Scratch has an easy way to do this.  If you right click on one of Scratch’s sensor blocks, you have the option of enabling remote sensing.  This allows you to send data to Scratch from another program.  The Scratch Wiki provides a few code examples for sending data from programs written in Python, Java/Processing, and Flash.  They also provide a remote sensing protocol so that you can figure out how to properly format messages to send to Scratch in any language of your choosing.  With the protocol, you can create messages that will change variables in Scratch, broadcast a message, or get access to a Scratch program’s global variables.

I used the remote sensing protocol and a handy C++ socket class to successfully send commands to a Scratch program.   Here’s  a function I wrote to format a message to Scratch:

string sendScratchCommand(string cmd){
    int n = cmd.length();
    string header = "";

    // Scratch's remote sensing protocol expects a 4-byte, big-endian message length
    // followed by the message itself
    char b1 = (char) ((n >> 24) & 0xFF);
    char b2 = (char) ((n >> 16) & 0xFF);
    char b3 = (char) ((n >> 8) & 0xFF);
    char b4 = (char) (n & 0xFF);

    header += b1;
    header += b2;
    header += b3;
    header += b4;
    header.append(cmd);

    return header;
}

And here’s where I connect to Scratch via a socket connection:

int main() {
    // TODO: enclose in a try/catch later
    SocketClient s("127.0.0.1", 42001);   // Scratch listens for remote sensing on port 42001

    for (int i = 0; i < 10; i++){
        string to_send = sendScratchCommand("broadcast \"beat\"");
        s.SendLine(to_send);
    }

    return 0;
}
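
The same length-prefixed format works for the protocol’s other message types.  For instance, based on my reading of the remote sensing protocol, a sensor-update message makes a named value available to Scratch’s sensor blocks.  Inside the same main(), that would look something like this (the name “distance” is just for illustration):

// hypothetical example: send a sensor-update so Scratch's sensor blocks can read "distance"
string update = sendScratchCommand("sensor-update \"distance\" 42");
s.SendLine(update);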

Finally, here’s a simple program I wrote to respond to my C++ program:

The Scratch cat changes color when it receives my message!

 

 

 

 

Splitting Video Stream

Because OpenCV AND Scratch will most likely both need webcam data, I needed to find a way to “split” my webcam stream.  I got some recommendations online and installed a free program, SplitCam.  SplitCam is supposed to allow you to use webcam data in many, many applications at once.  So far, I haven’t been able to route webcam data through SplitCam but I will keep exploring.  Ideally, I’d like to be able to split webcam data through my C++ programs, but I have no idea if that’s possible.

Accomplished:

-Use sockets in C++ to send broadcasts/data to Scratch/Panther

-Complete an introduction to C++

-Successfully use a webcam with OpenCV (had to use an external webcam)

-Explore options for splitting webcam stream (so that both Panther and OpenCV can access video data)

-Begin playing with OpenCV and webcam

To Do:

-Split webcam data!

-Get back on track with doing gesture recognition with OpenCV

-Talk to Nancy Hechinger about constructivism/borrow books

Today in analog circuits, we built a simple circuit with an awesome 555 timer.  Check out my breadboarding skillz and watch the LEDs blink!

Note: at the beginning of the video, the LEDs are blinking too fast for the camera to pick up.  It appears as though one LED is on and one LED is off – in reality, they were both blinking at opposite intervals.

This week was pretty frustrating on the whole, but I made some last-minute advances!  Installing OpenCV for C++ was a pain and now it seems that OpenCV is having problems accessing my webcam (it was working with a previous compilation).  Here’s a summary of what went on:

Installing OpenCV for C++

Because OpenCV with Python was 1) potentially slow and 2) starting to become more cumbersome by the minute, I decided to switch to using OpenCV with C++.  One caveat is that I don’t actually know C++ (not to mention I haven’t used OpenCV), but I have access to people who use OpenCV professionally (yay!).

With some help, I got OpenCV installed (which involved installing MinGW, getting a hand with CMake, and building and compiling).  After all of that, I tried to run a sample face tracking program.  Unfortunately, my webcam is showing up as a black box!  The webcam was working previously, so I need to do some more investigation…

WebCam with Scratch-like languages:

I spent a lot of time looking at Squeak, being confused, and looking at other blocks-language alternatives.  Integrating video into a Scratch-like language seemed really intimidating, and I was worried that it would be impossible for me to complete by myself.  However, after some searching, I found a solution (Panther, a Scratch extension).  Here are all of the options I considered:

OpenBlocks – A Java-based blocks language generator released by MIT.  OpenBlocks seemed very versatile and (I’m assuming) you could use it to create a visual representation of any other language.  I thought I might be able to use a more familiar language, handle video input/machine learning, and use OpenBlocks to create a nice blocks version.  HOWEVER, after downloading OpenBlocks (and emacs for editing purposes), I did not have much luck.  I also found OpenBlocks confusing – they had some Java Docs, but there was so much to go through.  Ick.

Squeak – Ahh Squeak.  Squeak is such a strange language (at least to me), so I was at a total loss when thinking of how to integrate video input.  The most recent version of Squeak is version 4.3, but Scratch runs on a modified version of Squeak 2.8.  There are several pre-written Squeak libraries, some even for video, but they were spottily documented and none worked with Squeak 2.8!

Scratch – I spent some time looking at the latest version of Scratch (and its Squeak code).  Scratch offers some webcam support.  Users can take a picture with a webcam and use the image as a sprite’s costume.  The only downside is that you cannot access the webcam in code or do any kind of streaming.  Still, I figured if you could take a picture with a webcam, there must be SOME support for streaming video.  Now I just need to have some convenient blocks for USING this functionality…which brings me to Panther.

(As a side note, I also made a simple block in Scratch)

Panther – Panther is an extension of Scratch that integrates more advanced features – including manipulation of the webcam!  Panther also lets you do file I/O, which may be useful for getting data to/from OpenCV.  I still have to explore Panther a lot more, but I think it’s a great platform for this project!

Here’s an example of me using webcam data in a Scratch-like program:

The blocks I used:

To summarize…

Things accomplished:

  • Install OpenCV for C++
  • Learn to make a block in Scratch
  • Find a way to integrate webcams and a Scratch-like language (WHEW)
  • Get a decent text editor for C++ programming (Notepad++ with added modules)

Things to do:

  • Get OpenCV to work with my webcam
  • Re-implement skin detection in C++ (rough sketch after this list)
  • Use blob tracking in C++
  • Start working on gesture recognition
  • Send data from C++ program to Panther
  • Create more custom blocks in Panther
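
Since skin detection is on the to-do list, here’s a rough sense of what the simplest C++ re-implementation might look like – thresholding in HSV with OpenCV 2.x.  The threshold values below are placeholders, not tuned for any real lighting or camera:

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    cv::Mat frame, hsv, skinMask;
    while (true) {
        cap >> frame;
        if (frame.empty()) break;

        cv::cvtColor(frame, hsv, CV_BGR2HSV);   // HSV makes color thresholding easier
        cv::inRange(hsv,
                    cv::Scalar(0, 40, 60),      // lower skin bound (placeholder values)
                    cv::Scalar(25, 180, 255),   // upper skin bound (placeholder values)
                    skinMask);                  // white pixels = "skin"

        cv::imshow("skin mask", skinMask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}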

For sensor workshop, we had an assignment to display the activity of a sensor in time.  My first idea was to use a Hall effect sensor to measure how much my pet mice run in their wheel in a given day.  However, I think this would be better suited to our next assignment (data logging), so I decided to hold off.

I was all ready to dive into sensors until I realized…I have no Arduino.  Oops!  So it turns out I’ve never actually purchased an Arduino, except for my sewable LilyPad.  I did, however, have some ATmega chips with an Arduino bootloader (only $4!).  I first breadboarded the ATmega328s, added an external oscillator (16 MHz) and some capacitors.  I also put a small capacitor between the chip’s reset line and the FTDI’s DTR line, which allows the FTDI to reset the Arduino when needed (otherwise it is super annoying to program and you have to manually reset the board at the right time).  Then I hooked up my FTDI’s RX, TX, and GND lines and did a test to ensure I could program the chip.  SUCCESS!
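
The test itself was nothing special – something along the lines of the classic blink sketch (this is just the standard example, not the exact code I ran; pin 13 maps to PB5 on a bare ATmega328):

const int ledPin = 13;           // the usual "LED pin" on an Arduino-style setup

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  digitalWrite(ledPin, HIGH);    // LED on
  delay(500);
  digitalWrite(ledPin, LOW);     // LED off
  delay(500);
}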

My next step was to hook up a sensor.  I really didn’t have anything too fancy lying around, until I randomly found an LV-MaxSonar-EZ ultrasonic sensor (I don’t think this sensor is actually mine – where did it come from?!).  MaxSonar makes several types of ultrasonic sensors, each with a different range, sensitivity, and angle of measurement.  My sensor didn’t seem to have the exact part number on it (aside from being from the EZ line), so I wasn’t exactly sure of its specs.

Experiment 1


For my sensor in time test, I decided that I wanted to use the ultrasonic sensor to roughly map the topography of objects on a table.  I set up some random boxes, cans, etc. and, using the handy ruler on the table, took a sensor measurement every 2 centimeters.  I then graphed my findings in Excel.
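
The Arduino side of each measurement is simple: the LV-MaxSonar-EZ line has an analog output of roughly Vcc/512 volts per inch, so reading and converting it looks something like the sketch below.  The pin choice and scale factor are my assumptions – check the datasheet for the exact model:

const int sonarPin = A0;               // MaxSonar analog output (assumed wiring)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(sonarPin);      // 0-1023 over 0-Vcc
  float inches = raw / 2.0;            // ~2 ADC counts per inch with Vcc/512-per-inch scaling
  Serial.println(inches);
  delay(100);
}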

Experiment 2

This time, I connected my Arduino program to Processing and mapped the ultrasonic sensor’s readings in real time.  Unfortunately, the readings were really jumpy!  I found some nice low-pass filter code and added it to the Arduino program.  You can adjust how much emphasis to place on a new reading vs. the past couple of readings.
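
I won’t reproduce the filter code I found, but the underlying idea is a simple exponential moving average; a sketch of that idea (alpha is the knob mentioned above – higher trusts new readings more, lower smooths more aggressively):

float alpha = 0.2;                     // weighting for new readings (illustrative value)
float smoothed = 0;

float lowPass(float newReading) {
  // blend the new reading with the running average
  smoothed = alpha * newReading + (1.0 - alpha) * smoothed;
  return smoothed;
}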

What I DON’T like about graphing the ultrasonic readings in time is that this doesn’t show how the current ultrasonic reading compares to readings taken in the surrounding area.  It would be really great to have both an accelerometer reading and an ultrasonic reading shown on a graph.  As you move the ultrasonic sensor faster, the graph would also move faster and take more readings.

This weekend I went on a sensor walk around Cambridge, MA.  I tried to look out for sensors that I could actually use in a project, not so much random buttons, etc.  Here are a few of the sensors that I found:

Bike Speedometer (Hall effect sensor)

The first sensor I encountered on my sensor walk was a Hall effect sensor on a bike wheel.  The picture below shows a magnet mounted on the spokes with a Hall effect sensor strapped onto the bike frame.  A Hall effect sensor emits an output voltage in response to a magnetic field – as the bike wheel turns, the magnet will be detected by the sensor, which will then output a voltage.  This particular module will then wirelessly send sensor data from the Hall effect sensor to an LCD bike computer mounted elsewhere on the bike. In turn, the bike computer will calculate and display the bike’s speed based on the number of bike wheel rotations the Hall effect sensor detects in a given period of time.
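
The speed calculation itself is just pulse counting: each pulse is one wheel revolution, so revolutions per second times wheel circumference gives speed.  A rough Arduino-style sketch of the idea (the pin, circumference, and sampling interval are made-up values, not from any actual bike computer):

const int hallPin = 2;                       // Hall sensor output on interrupt 0 (pin 2 on an Uno)
const float wheelCircumference = 2.1;        // meters per revolution (set per bike)
volatile unsigned long pulseCount = 0;

void countPulse() { pulseCount++; }          // one pulse per wheel revolution

void setup() {
  Serial.begin(9600);
  pinMode(hallPin, INPUT);
  attachInterrupt(0, countPulse, FALLING);   // interrupt 0 corresponds to pin 2
}

void loop() {
  pulseCount = 0;
  delay(1000);                               // count pulses for one second
  float metersPerSecond = pulseCount * wheelCircumference;
  Serial.println(metersPerSecond * 3.6);     // convert m/s to km/h
}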

You can buy a complete bike speedometer kit here for about $9.  From the website, the product seems fairly straightforward.  The only setup required is to calibrate the speedometer to work with your bike wheel’s circumference.

You can also order a no-frills Hall effect sensor without a magnet or bike computer from Sparkfun.

I have some plans to get a Hall effect sensor for our next assignment (data logging)!

Note: I saw another bike with some El Wire on it (oooh), but it only had a measly off/on switch (snore).

The next sensors I encountered were the solar cells and photoresistors on many outdoor lighting kits.  The first example I found was a solar cell atop a small LED garden light.  This light looked similar to various generic outdoor lighting brands, including Moonrays and Sunforce.

On a second, larger outdoor lamp, I noticed a basic photoresistor.  Instead of being powered by a solar cell, this light most likely had another power source but used the photoresistor to determine when to turn on and turn off.

Sparkfun has super cheap photoresistors as well as several solar panels, from teensy (it takes four to power an LED) to small (charge your iPhone!) to a 10-watt model that outputs up to 8 V and 1.5 A.

The third set of sensors I saw were the motion detectors at Walgreens and FoodMaster (woohoo).  These sensors are used to open automatic doors for customers.  I couldn’t identify the part number of these sensors, but they may be PIR sensors like the one shown here.  PIR sensors (or passive infrared sensors) measure the IR light that radiates from objects in the area.  When people walk in front of inanimate objects, the IR light they radiate is different from the IR light radiating from the inanimate objects.  The PIR sensor can tell that the IR light in its field of view is changing!  PIRs are also nice because they aren’t bothered by rogue shopping carts moving into view.

One sensor that was really cool (but not pictured, as I saw it when I didn’t have my camera) was an external temperature sensor attached to a thermostat.  My friends noticed that their bedrooms were often too hot or too cold, but because the thermostat measured the temperature of the hallway, it was unable to adjust the heating/cooling accordingly.  They replaced the thermostat’s internal temperature sensor with one they wired up themselves.  They then placed the new, external temperature sensor in one of their bedrooms and voila – the temperature of their bedrooms was properly regulated.

Other sensors I saw (not described in detail):

-doorbells

-some kind of proximity sensor (on a gate to an apartment parking lot)

-buttons on cheesy Valentine’s plushies at Walgreens

-buttons on crosswalks

-switch on light-up Vday cards (when you open the card, the switch completes a circuit)

For my Research Algorithms course, I’m investigating the possibility of making AI more accessible to children, especially in a constructionist-oriented learning environment.  More specifically, I want to allow children to train and use machine learning techniques in the games and programs that they can write with languages like Scratch.

I have a few main goals:

  • Successfully simplify 1-2 machine learning techniques and make them easy to train and use.
  • Integrate these techniques into a Scratch-like programming language.
  • Allow novices to create programs using the simplified machine learning techniques.

Tasks I need to complete include:

  • Implement chosen machine learning techniques (gesture recognition and potentially facial feature tracking).  I’ll most likely be using OpenCV with Python for this task.
  • Learn how to create new blocks for a Scratch-like language, either by using Squeak or a JavaScript-based block generator.  I will explore both options and ultimately choose the option that allows for the best integration with my Python script while still letting kids make fun programs.
  • Integrate webcam data, code for training/running machine learning algorithms, and Scratch-like language.

Initial Plan for the Semester:

Feb 9 – 16 – Work with OpenCV to implement basic gesture recognition with HMMs (requires skin filtering and implementing the forward-backward algorithm – a rough sketch of the forward recursion appears after this plan) – may be offline.

Feb 16 – 23 – Continue working on gesture recognition.  Try to get an online method working with webcam.  Explore training and identifying different gestures – how can users be guided to train gestures that are not overly complex?

Feb 23 – 29 – Investigate both Squeak and JavaScript-based blocks.  Examine how to send data from a Python script to the desired platform.  For Squeak, gauge how feasible embedding video in a Scratch-like program would be (email around as well).

March 1-8 – Begin creating my own blocks in either Squeak or JavaScript.

March 8-15 – Attempt to use either Squeak or JS blocks to create a simple program using gesture recognition (two programs need not be well-integrated).

March 15-22 – Work on more seamlessly integrating blocks language, webcam stream, and Python script.

March 22-29 -If feasible, work on implementing a second machine learning algorithm (most likely facial feature tracking).

March 22 – April 5 – Make adjustments to programs and write a couple of sample programs.
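
As a reference for the HMM work in the first couple of weeks, here’s a rough sketch of the forward recursion (the “forward” half of forward-backward) for a discrete-observation HMM.  The two-state, two-symbol model in main() is purely illustrative, not anything from the project:

#include <vector>
#include <iostream>

// P(observations | model) for a discrete-observation HMM via the forward recursion
double forwardProbability(const std::vector<std::vector<double> >& A,   // transition probs A[i][j]
                          const std::vector<std::vector<double> >& B,   // emission probs B[i][symbol]
                          const std::vector<double>& pi,                // initial state probs
                          const std::vector<int>& obs) {                // observed symbol sequence
    size_t N = pi.size();
    std::vector<double> alpha(N), next(N);

    // initialization: alpha_1(i) = pi_i * B_i(obs_1)
    for (size_t i = 0; i < N; i++)
        alpha[i] = pi[i] * B[i][obs[0]];

    // induction: alpha_{t+1}(i) = [sum_j alpha_t(j) * A[j][i]] * B_i(obs_{t+1})
    for (size_t t = 1; t < obs.size(); t++) {
        for (size_t i = 0; i < N; i++) {
            double sum = 0.0;
            for (size_t j = 0; j < N; j++)
                sum += alpha[j] * A[j][i];
            next[i] = sum * B[i][obs[t]];
        }
        alpha = next;
    }

    // termination: sum over the final alphas
    double p = 0.0;
    for (size_t i = 0; i < N; i++)
        p += alpha[i];
    return p;
}

int main() {
    // toy 2-state, 2-symbol model – the numbers are purely illustrative
    std::vector<std::vector<double> > A(2, std::vector<double>(2));
    A[0][0] = 0.7; A[0][1] = 0.3;
    A[1][0] = 0.4; A[1][1] = 0.6;
    std::vector<std::vector<double> > B(2, std::vector<double>(2));
    B[0][0] = 0.9; B[0][1] = 0.1;
    B[1][0] = 0.2; B[1][1] = 0.8;
    std::vector<double> pi(2);
    pi[0] = 0.6; pi[1] = 0.4;

    int obsArr[] = {0, 1, 0};
    std::vector<int> obs(obsArr, obsArr + 3);
    std::cout << forwardProbability(A, B, pi, obs) << std::endl;  // probability of this sequence
    return 0;
}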