Image to 2D array, then get the image back after processing



I am learning image processing and have some homework. The assignment asks me to convert an RGB image to a grayscale image. I converted the image into a 2D matrix, did some processing, and when I converted the 2D matrix back into an image, some errors occurred. Here is my code:

private static SampleModel samM;
public static int[][] imageToArrayPixel(File file) {
    try {
        BufferedImage img = ImageIO.read(file);
        Raster raster = img.getData();
        int w = raster.getWidth(), h = raster.getHeight();
        int pixels[][] = new int[w][h];
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                pixels[x][y] = raster.getSample(x, y, 0);
                System.out.print("  " + pixels[x][y]);
            }
            System.out.println("");
        }
        samM = raster.getSampleModel();
        return pixels;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
public static java.awt.Image getImage(int pixels[][]) {
    int w = pixels.length;
    int h = pixels[0].length;
    WritableRaster raster = Raster.createWritableRaster(samM, new Point(0, 0));

    for (int i = 0; i < w; i++) {
        for (int j = 0; j < pixels[i].length; j++) {
            if (pixels[i][j] > 128) {
                raster.setSample(i, j, 1, 255);
            } else {
                raster.setSample(i, j, 1, 0);
            }
        }
    }
    BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    image.setData(raster);
    File output = new File("check.jpg");
    try {
        ImageIO.write(image, "jpg", output);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return image;
}
public static java.awt.Image getImageWithRBG(Pixel pixels[][]) {
    int w = pixels.length;
    int h = pixels[0].length;
    WritableRaster raster = Raster.createWritableRaster(samM, new Point(0, 0));
    int[] pixelValue = new int[3];
    for (int i = 0; i < w; i++) {
        for (int j = 0; j < h; j++) {
            pixelValue[0] = pixels[i][j].red;
            pixelValue[1] = pixels[i][j].blue;
            pixelValue[2] = pixels[i][j].green;
            raster.setPixel(j, i, pixelValue);
        }
    }
    BufferedImage image = new BufferedImage(h, w, BufferedImage.TYPE_CUSTOM);
    image.setData(raster);
    File output = new File("check.jpg");
    try {
        ImageIO.write(image, "jpg", output);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return image;
}
public static void main(String[] args) throws IOException {
    int pixel[][] = imageToArrayPixel(new File("C:\\Users\\KimEricko\\Pictures\\1402373904964_500.jpg"));
    getImage(pixel);
}

This is the image I used for the conversion: [original image] And this is the restored photo I got back: [restored image]

I don't understand why the restored photo is only 1/3 of the original. What can I do to fix this?

It looks like there is a bug in getImageWithRBG: that raster.setPixel(j, i, pixelValue); should be raster.setPixel(i, j, pixelValue);

setPixel and setSample take their coordinates in the same order: x first, then y.
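A minimal, self-contained sketch of that coordinate order (using a small throwaway image, not the question's file):

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class CoordOrderDemo {
    public static void main(String[] args) {
        // A 4x2 image: width 4 (x runs 0..3), height 2 (y runs 0..1).
        BufferedImage img = new BufferedImage(4, 2, BufferedImage.TYPE_INT_RGB);
        WritableRaster raster = img.getRaster();

        // Both setSample and setPixel take x first, then y.
        raster.setSample(3, 1, 0, 200);                  // (x=3, y=1), band 0 (red)
        raster.setPixel(0, 1, new int[] {10, 20, 30});   // (x=0, y=1), all three bands

        System.out.println(raster.getSample(3, 1, 0));   // 200
        System.out.println(raster.getSample(0, 1, 2));   // 30

        // Swapping the arguments, e.g. setSample(1, 3, 0, 200), would throw
        // ArrayIndexOutOfBoundsException here, because y=3 exceeds the height of 2.
    }
}
```

This is also why swapping i and j only "works" until the image is non-square: with width != height, the swapped coordinate runs past the raster bounds or writes pixels transposed.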

I don't know whether there are other problems; this is just the first thing I noticed.
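Two related issues in the same method are worth flagging: the bands are written in red, blue, green order instead of red, green, blue, and the BufferedImage(h, w, BufferedImage.TYPE_CUSTOM) call both swaps width/height and throws IllegalArgumentException, since TYPE_CUSTOM cannot be passed to that constructor. A corrected sketch of the write-back (with a minimal stand-in Pixel class inferred from the fields used in the question, since its definition isn't shown):

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class WriteBackSketch {
    // Minimal stand-in for the question's Pixel class (its real definition isn't shown).
    static class Pixel {
        int red, green, blue;
        Pixel(int r, int g, int b) { red = r; green = g; blue = b; }
    }

    public static BufferedImage getImageWithRBG(Pixel[][] pixels) {
        int w = pixels.length;      // first index is x, as in imageToArrayPixel
        int h = pixels[0].length;   // second index is y
        // Create the target image first and write into its own raster, so the
        // raster's SampleModel is guaranteed to match the image type; reusing
        // the source image's SampleModel (samM) can mismatch it.
        BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        WritableRaster raster = image.getRaster();
        int[] pixelValue = new int[3];
        for (int i = 0; i < w; i++) {
            for (int j = 0; j < h; j++) {
                // Band order for TYPE_INT_RGB is red, green, blue.
                pixelValue[0] = pixels[i][j].red;
                pixelValue[1] = pixels[i][j].green;
                pixelValue[2] = pixels[i][j].blue;
                raster.setPixel(i, j, pixelValue);  // x first, then y
            }
        }
        return image;
    }

    public static void main(String[] args) {
        // Deliberately non-square (3 wide, 2 tall) to catch swapped coordinates.
        Pixel[][] px = new Pixel[3][2];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 2; j++)
                px[i][j] = new Pixel(10 * i, 100, 20 * j);
        BufferedImage out = getImageWithRBG(px);
        System.out.println(out.getWidth() + "x" + out.getHeight());  // 3x2
        System.out.println(out.getRaster().getSample(2, 1, 0));      // red at (2,1): 20
    }
}
```

The same pattern (build the TYPE_BYTE_GRAY image first and write into image.getRaster() with band 0, not band 1) would also apply to getImage in the question.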
