OpenKinect and Processing – Can't Display the Z Coordinate



This code comes from Daniel Shiffman (below). I'm trying to read out the Z coordinate, but I have no idea how to do it, so any help would be greatly appreciated.

AveragePointTracking.pde

// Daniel Shiffman
// Tracking the average location beyond a given depth threshold
// Thanks to Dan O'Sullivan
// http://www.shiffman.net
// https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing
import org.openkinect.*;
import org.openkinect.processing.*;
// Showing how we can farm all the kinect stuff out to a separate class
KinectTracker tracker;
// Kinect Library object
Kinect kinect;
void setup() {
  size(640,600);
  kinect = new Kinect(this);
  tracker = new KinectTracker();
}
void draw() {
  background(255);
  // Run the tracking analysis
  tracker.track();
  // Show the image
  tracker.display();
  // Let's draw the raw location
  PVector v1 = tracker.getPos();
  fill(50,100,250,200);
  noStroke();
  ellipse(v1.x,v1.y,10,10);
  // Let's draw the "lerped" location
  //PVector v2 = tracker.getLerpedPos();
  //fill(100,250,50,200);
  //noStroke();
  //ellipse(v2.x,v2.y,20,20);
  // Display some info
  int t = tracker.getThreshold();
  fill(0);
  text("Location-X: " + v1.x,10,500);
  text("Location-Y: " + v1.y,10,530);
  text("Location-Z: ",10,560);
  text("threshold: " + t,10,590);
}
void stop() {
  tracker.quit();
  super.stop();
}

KinectTracker.pde

class KinectTracker {
  // Size of kinect image
  int kw = 640;
  int kh = 480;
  int threshold = 500;
  // Raw location
  PVector loc;
  // Interpolated location
  PVector lerpedLoc;
  // Depth data
  int[] depth;

  PImage display;
  KinectTracker() {
    kinect.start();
    kinect.enableDepth(true);
    // We could skip processing the grayscale image for efficiency
    // but this example is just demonstrating everything
    kinect.processDepthImage(true);
    display = createImage(kw,kh,PConstants.RGB);
    loc = new PVector(0,0);
    lerpedLoc = new PVector(0,0);
  }
  void track() {
    // Get the raw depth as array of integers
    depth = kinect.getRawDepth();
    // Being overly cautious here
    if (depth == null) return;
    float sumX = 0;
    float sumY = 0;
    float count = 0;
    for(int x = 0; x < kw; x++) {
      for(int y = 0; y < kh; y++) {
        // Mirroring the image
        int offset = kw-x-1+y*kw;
        // Grabbing the raw depth
        int rawDepth = depth[offset];
        // Testing against threshold
        if (rawDepth < threshold) {
          sumX += x;
          sumY += y;
          count++;
        }
      }
    }
    // As long as we found something
    if (count != 0) {
      loc = new PVector(sumX/count,sumY/count);
    }
    // Interpolating the location, doing it arbitrarily for now
    lerpedLoc.x = PApplet.lerp(lerpedLoc.x, loc.x, 0.3f);
    lerpedLoc.y = PApplet.lerp(lerpedLoc.y, loc.y, 0.3f);
  }
  PVector getLerpedPos() {
    return lerpedLoc;
  }
  PVector getPos() {
    return loc;
  }
  void display() {
    PImage img = kinect.getDepthImage();
    // Being overly cautious here
    if (depth == null || img == null) return;
    // Going to rewrite the depth image to show which pixels are in threshold
    // A lot of this is redundant, but this is just for demonstration purposes
    display.loadPixels();
    for(int x = 0; x < kw; x++) {
      for(int y = 0; y < kh; y++) {
        // mirroring image
        int offset = kw-x-1+y*kw;
        // Raw depth
        int rawDepth = depth[offset];
        int pix = x+y*display.width;
        if (rawDepth < threshold) {
          // A red color instead
          display.pixels[pix] = color(245,100,100);
        } 
        else {
          display.pixels[pix] = img.pixels[offset];
        }
      }
    }
    display.updatePixels();
    // Draw the image
    image(display,0,0);
  }
  void quit() {
    kinect.quit();
  }
  int getThreshold() {
    return threshold;
  }
  void setThreshold(int t) {
    threshold =  t;
  }
}

There are two main steps:

  1. Getting the depth data (KinectTracker already does this in the track() method)
  2. Getting the depth of the current pixel, using an offset to look up a position in the 1D depth array based on the 2D (x, y) position (this is also done in the track() method: int offset = kw-x-1+y*kw;)

Note that the coordinates are mirrored; normally the index would be computed like this:

index = y*width+x

as explained in the get() reference.
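
To make the two conventions explicit, here is a minimal self-contained sketch using the same kw name as the code above (the example x/y values are arbitrary):

int kw = 640;          // depth image width
int x = 100, y = 50;   // an example 2D position
// standard row-major index, as used by get():
int index = y*kw + x;
// mirrored index, as used in the track() loop to flip the image horizontally:
int mirroredIndex = (kw - x - 1) + y*kw;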

So in theory, all you need to do is add something like this at the end of the track() method:

lerpedLoc.z = depth[kw-((int)lerpedLoc.x)-1+((int)lerpedLoc.y)*kw];

Like so:

void track() {
    // Get the raw depth as array of integers
    depth = kinect.getRawDepth();
    // Being overly cautious here
    if (depth == null) return;
    float sumX = 0;
    float sumY = 0;
    float count = 0;
    for(int x = 0; x < kw; x++) {
      for(int y = 0; y < kh; y++) {
        // Mirroring the image
        int offset = kw-x-1+y*kw;
        // Grabbing the raw depth
        int rawDepth = depth[offset];
        // Testing against threshold
        if (rawDepth < threshold) {
          sumX += x;
          sumY += y;
          count++;
        }
      }
    }
    // As long as we found something
    if (count != 0) {
      loc = new PVector(sumX/count,sumY/count);
    }
    // Interpolating the location, doing it arbitrarily for now
    lerpedLoc.x = PApplet.lerp(lerpedLoc.x, loc.x, 0.3f);
    lerpedLoc.y = PApplet.lerp(lerpedLoc.y, loc.y, 0.3f);
    lerpedLoc.z = depth[kw-((int)lerpedLoc.x)-1+((int)lerpedLoc.y)*kw];
  }

I can't test with a Kinect right now, but this should work. I'm not sure whether you'll get the depth of the correct pixel or of the mirrored one. The only other option would be:

lerpedLoc.z = depth[((int)lerpedLoc.x)+((int)lerpedLoc.y)*kw];
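
If you want to check empirically which of the two is right, a hypothetical debug snippet (not in the original code) at the end of track() could print both values; wave a hand into one side of the frame and see which number reacts on the matching side:

int mirrored = depth[kw - ((int)lerpedLoc.x) - 1 + ((int)lerpedLoc.y)*kw];
int direct   = depth[((int)lerpedLoc.x) + ((int)lerpedLoc.y)*kw];
// the value that drops when your hand enters that side of the on-screen
// image is the one matching the displayed (mirrored) view
println("mirrored: " + mirrored + "  direct: " + direct);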

There are two ways...

The way Daniel's code currently accesses the coordinates is with a 2D vector (i.e. one that has X and Y). You could change it to a 3D vector (so it also stores a Z coordinate), and the OpenKinect library should return the Z coordinate the same way it does X and Y... I think ;-) (I'd have to check his source). But this gives you a Z coordinate for every pixel, which you would then have to loop through, and that is awkward and computationally expensive...
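
If you did want to try that route, here is a minimal sketch of what track() could look like with a hypothetical sumZ accumulator that averages the raw depth of the thresholded pixels (the lerp lines are omitted for brevity; this is a sketch, not Daniel's actual method):

void track() {
  depth = kinect.getRawDepth();
  if (depth == null) return;
  float sumX = 0, sumY = 0, sumZ = 0;  // sumZ is new: accumulates raw depth
  float count = 0;
  for (int x = 0; x < kw; x++) {
    for (int y = 0; y < kh; y++) {
      int offset = kw-x-1+y*kw;        // mirrored index, as before
      int rawDepth = depth[offset];
      if (rawDepth < threshold) {
        sumX += x;
        sumY += y;
        sumZ += rawDepth;              // accumulate depth of thresholded pixels
        count++;
      }
    }
  }
  if (count != 0) {
    // PVector accepts a third component, so the average depth rides along as Z
    loc = new PVector(sumX/count, sumY/count, sumZ/count);
  }
}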

The way Daniel actually does it in this example is to find the depth at a specific XY location and give it back to you if it passes a certain threshold... that's the rawDepth integer you see in KinectTracker... So it tests whether that value is below the threshold (which you can change), and if so, it colors those pixels and writes them into an image buffer... You can then query that image for XY coordinates, for example, or pass it to a blob-detection routine, and so on...
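
As an aside, since KinectTracker already exposes getThreshold() and setThreshold(), a small addition to the main sketch (not in the original) would let you change that threshold at runtime:

void keyPressed() {
  int t = tracker.getThreshold();
  if (key == '+') tracker.setThreshold(t + 5);  // push the cut-off further away
  if (key == '-') tracker.setThreshold(t - 5);  // pull the cut-off closer
}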

Add this to the end of void track():

lerpedLoc.z = depth[kw-((int)lerpedLoc.x)-1+((int)lerpedLoc.y)*kw];

Then I changed the last block in void draw() to read the Z value (note this assumes the commented-out v2 = tracker.getLerpedPos() line in draw() has been re-enabled, since v2 is the lerped position that now carries Z):

// Display some info
int t = tracker.getThreshold();
fill(0);
text("Location-X: " + v1.x,10,500);
text("Location-Y: " + v1.y,10,530);
text("Location-Z: " + v2.z,10,560);  // <<Adding this worked!
text("threshold: " + t,10,590);
