January 27, 2020

11.5: Computer Vision: Color Tracking – Processing Tutorial



In this tutorial, I demonstrate how to analyze the pixels of an image to track an object of a specific color.
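The core idea of the tutorial — scan every pixel and keep the one whose color is closest to a target color — can be sketched outside Processing as well. Below is a minimal Python illustration; the function names and the tiny test frame are my own, not taken from the video's source code:

```python
# Illustrative sketch of brute-force color tracking: find the pixel
# whose RGB color is closest to a target color.

def color_distance_sq(c1, c2):
    # Squared Euclidean distance in RGB space; the square root is not
    # needed when we only compare distances.
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def find_closest_pixel(pixels, width, height, target):
    """Return (x, y) of the pixel whose color best matches `target`."""
    best_xy, best_d = (0, 0), float("inf")
    for y in range(height):
        for x in range(width):
            d = color_distance_sq(pixels[y * width + x], target)
            if d < best_d:
                best_d, best_xy = d, (x, y)
    return best_xy

# Tiny 2x2 "frame": one red-ish pixel among gray ones.
frame = [(40, 40, 40), (230, 20, 20),
         (40, 40, 40), (40, 40, 40)]
print(find_closest_pixel(frame, 2, 2, (255, 0, 0)))  # -> (1, 0)
```

In the video this same loop runs over `video.pixels` each frame, with the target color picked by clicking on the image.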

Link to the previous Computer Vision video: https://youtu.be/h8tk0hmWB44

Support this channel on Patreon: https://patreon.com/codingtrain

Send me your questions and coding challenges!

Contact: https://twitter.com/shiffman

Links discussed in this video:
Computer Vision for Artists and Designers Essay by Golan Levin: http://www.flong.com/texts/essays/essay_cvad/
Image Processing in Computer Vision by Golan Levin: http://openframeworks.cc/ofBook/chapters/image_processing_computer_vision.html

Source Code for the Video Lessons: https://github.com/CodingTrain/Rainbow-Code

p5.js: https://p5js.org/
Processing: https://processing.org

Computer Vision videos: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6aG2RJHErXKSWFDXU4qo_ro

Coding Challenges: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6ZiZxtDDRCi6uhfTH4FilpH

Help us caption & translate this video!

http://amara.org/v/QbrO/

📄 Code of Conduct: https://github.com/CodingTrain/Code-of-Conduct


20 thoughts on “11.5: Computer Vision: Color Tracking – Processing Tutorial”

  1. When I was trying it out it said "No library found for processing.video
    Libraries must be installed in a folder named 'libraries' inside the sketchbook folder (see the Preferences window)" would you know how I could fix this? thank you so much!

  2. Is it possible to make the code so that it automatically tracks the reddest colour, without the mousePressed?
    I have been trying for quite a while now to make it start up automatically, but with no luck.

  3. Yo coding train, I got a question. I am developing a vision system for the FIRST Robotics Competition; is there any way I can color the pixels of our target and track it?

  4. I built this with Python, but cdist (for the color distance) is really slow. What am I doing wrong? It takes 0.3 seconds for all the pixels in one frame (640×480).
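For a single target color, a full cdist distance matrix is unnecessary: a vectorized NumPy difference over the whole frame avoids both the per-pixel Python loop and the large intermediate matrix, and is typically far faster. A sketch under assumed shapes (the array names and the planted test pixel are illustrative, not from the commenter's code):

```python
import numpy as np

# Hypothetical 640x480 RGB frame as a uint8 NumPy array.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[100, 200] = (250, 10, 10)          # plant one red-ish pixel
target = np.array([255, 0, 0], dtype=np.float32)

# Squared distance of every pixel to the target in one vectorized pass;
# no 307200-row cdist matrix and no Python-level loop over pixels.
d2 = ((frame.astype(np.float32) - target) ** 2).sum(axis=2)

# Location of the best-matching pixel.
y, x = np.unravel_index(np.argmin(d2), d2.shape)
print(x, y)  # -> 200 100
```

The `astype(np.float32)` cast matters: subtracting directly on uint8 arrays would wrap around instead of going negative.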

  5. Hey, I have been trying to adapt this example to a Raspberry Pi; however, I keep getting errors like "ArrayOutOfBoundsException" here: "color currentColor = video.pixels[loc];". I would really appreciate it if you could help me with that. Thank you.

  6. Very good demonstration.

    I have a problem with hypermedia.video.

    "No library found for hypermedia.video
    Libraries must be installed in a folder named 'libraries' inside the sketchbook folder (see the Preferences window)"

    How should I solve this issue?

    I use Processing 3.x,
    and I tried with Processing 2.x; it still doesn't work.

    import hypermedia.video.*;
    import java.awt.*;
    import processing.serial.*;

    OpenCV opencv;

    // contrast/brightness values
    int contrast_value = 0;
    int brightness_value = 0;

    Serial port;

    // track the current servo for control, position, and default positions.
    char servoTiltPosition = 90;
    char servoPanPosition = 90;
    char tiltChannel = 0;
    char panChannel = 1;
    // the x,y coordinates of the middle of the face.
    int midFaceY = 0;
    int midFaceX = 0;
    // define the center of the screen.
    int midScreenY = (height/2);
    int midScreenX = (width/2);
    int midScreenWindow = 10; //error margin for centering.
    int stepSize = 1; //degree of change per iteration.
    Rectangle[] faces;

    void setup() {

      size( 320, 240 );

      // width and height are only valid after size(), so compute the
      // screen center here rather than in the field initializers.
      midScreenY = height / 2;
      midScreenX = width / 2;

      opencv = new OpenCV( this );
      opencv.capture( width, height ); // open video stream
      opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); // load detection description, here -> front face detection: "haarcascade_frontalface_alt.xml"

      // print usage
      println(Serial.list()); // reveals which COM port the Arduino is on.
      port = new Serial(this, Serial.list()[0], 57600); // set baud rate to match the Arduino.
      println( "Drag mouse on X-axis inside this sketch window to change contrast" );
      println( "Drag mouse on Y-axis inside this sketch window to change brightness" );

      // send the initial angles to the Arduino.
      port.write(tiltChannel);       // set tilt servo ID
      port.write(servoTiltPosition); // send tilt angle
      port.write(panChannel);        // set pan servo ID
      port.write(servoPanPosition);  // send pan angle
    }

    public void stop() {
      opencv.stop();
      super.stop();
    }

    void draw() {

      // grab a new frame, convert to gray
      opencv.read();
      opencv.convert( GRAY );
      opencv.contrast( contrast_value );
      opencv.brightness( brightness_value );

      // display image
      image( opencv.image(), 0, 0 );
      // run the detection
      faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

      // draw face area(s)
      noFill();
      stroke(255, 0, 0);
      for (int i = 0; i < faces.length; i++) {
        rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height );
      }

      // Find out if any faces were detected.
      if (faces.length > 0) {
        // If a face was found, find the midpoint of the first face in the frame.
        // NOTE: .x and .y of the face rectangle correspond to the UPPER LEFT corner
        // of the rectangle, so they are adjusted to find the true midpoint.
        midFaceY = faces[0].y + (faces[0].height / 2);
        midFaceX = faces[0].x + (faces[0].width / 2);

        // Is the Y component of the face below the midpoint?
        if (midFaceY < (midScreenY - midScreenWindow)) {
          // Update the tilt position variable to lower the tilt servo.
          if (servoTiltPosition >= 5) servoTiltPosition -= stepSize;
        }
        // Is the Y component of the face above the midpoint?
        else if (midFaceY > (midScreenY + midScreenWindow)) {
          // Update the tilt position variable to raise the tilt servo.
          if (servoTiltPosition <= 175) servoTiltPosition += stepSize;
        }
        // Is the X component of the face to the left of the midpoint?
        if (midFaceX < (midScreenX - midScreenWindow)) {
          // Update the pan position variable to move the servo to the left.
          if (servoPanPosition >= 5) servoPanPosition -= stepSize;
        }
        // Is the X component of the face to the right of the midpoint?
        else if (midFaceX > (midScreenX + midScreenWindow)) {
          // Update the pan position variable to move the servo to the right.
          if (servoPanPosition <= 175) servoPanPosition += stepSize;
        }
      }

      // Update the servo positions by sending the serial commands to the Arduino.
      port.write(tiltChannel);
      port.write(servoTiltPosition);
      port.write(panChannel);
      port.write(servoPanPosition);
      delay(1);
    }

    /**
     * Changes contrast/brightness values.
     */
    void mouseDragged() {
      contrast_value = (int) map( mouseX, 0, width, -128, 128 );
      brightness_value = (int) map( mouseY, 0, height, -128, 128 );
    }

  7. Hi there, is it possible to start tracking a color without the mouse click? The question is: is it possible, for example, to track the red color as soon as the video starts (without any action)?

  8. Why don't you weight the newly found reddish pixels by their distance to the last average position, so that nearer pixels get a higher value? Then you wouldn't get red on your skin/lips mixed with the red cup.
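The commenter's idea — penalizing candidate pixels by their distance from the last tracked position, so a red cup near the previous location beats red lips elsewhere in the frame — could be sketched like this in Python. All names, the weight value, and the sample points are assumptions for illustration:

```python
import math

def score(pixel_color, pixel_xy, target_color, last_xy, spatial_weight=2.0):
    # Lower is better: color mismatch plus a penalty that grows with
    # distance from where the object was last seen.
    color_d = math.dist(pixel_color, target_color)
    spatial_d = math.dist(pixel_xy, last_xy)
    return color_d + spatial_weight * spatial_d

last = (100, 100)            # position tracked in the previous frame
target = (255, 0, 0)
cup  = ((240, 30, 30), (110, 105))   # red-ish, near the last position
lips = ((250, 60, 60), (400, 300))   # red-ish too, but far away

best = min([cup, lips], key=lambda p: score(p[0], p[1], target, last))
print(best[1])  # -> (110, 105)
```

The `spatial_weight` parameter tunes the trade-off: too low and the tracker still jumps to stray red pixels, too high and it cannot follow fast motion.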

  9. Why do I get this error?

    "A library relies on native code that's not available.
    Or only works properly when the sketch is run as a 32-bit application.
    Could not run the sketch (Target VM failed to initialize).
    For more information, read revisions.txt and Help → Troubleshooting."

    It happens on this line:
    String[] cameras = Capture.list();

Comments are closed.