How do I save and retrieve UIImages with "imageWithData" while preserving the correct image format?



I am creating and storing OpenGL ES 1 3D models, and I want to bundle the image files used as textures into the same file as the 3D model data. The problem is loading the image data back in a usable format. I use UIImageJPEGRepresentation to convert the image to JPEG-encoded NSData, append that to an NSMutableData object along with all of the 3D data, and write everything to a single file. Writing and reading the data appear to work without errors, but I run into trouble when I try to use the loaded image data to create the CGImageRef from which I generate the texture for the 3D model. After loading from the file, the image data seems to be in an unrecognized format: when I try to create the CGImageRef, I get the error "CGContextDrawImage: invalid context 0x0". I would appreciate any help; I am stumped at this point. All of the data sizes and offsets add up correctly, and saving and loading produce no errors, yet the image data comes back subtly wrong and I cannot see why.

Here is my code:

//======================================================
- (BOOL)save3DFile: (NSString *)filePath {

// load TEST IMAGE into UIIMAGE
UIImage *image = [UIImage imageNamed:@"testImage.jpg"];
// convert image to JPEG encoded NSDATA
NSData *imageData = UIImageJPEGRepresentation(image,1.0);
// Save length of imageData to global "imDataLen" to use later in "load3DFile"
imDataLen = [imageData length];
// TEST: this works fine for CGImageRef creation in "loadTexture"
// traceView.image=[UIImage imageWithData:[imageData subdataWithRange:NSMakeRange(0, imDataLen)]];
// [self loadTexture];
// TEST: this also works fine for CGImageRef creation in "loadTexture"
// traceView.image=[UIImage imageWithData:txImData];
// [self loadTexture];
fvoh.fileVersion  = FVO_VERSION;
fvoh.obVertDatLen = obVertDatLen;
fvoh.obFaceDatLen = obFaceDatLen;
fvoh.obNormDatLen = obNormDatLen;
fvoh.obTextDatLen = obTextDatLen;
fvoh.obCompCount  = obCompCount;
fvoh.obVertCount  = obVertCount;
fvoh.obElemCount  = obElemCount;
fvoh.obElemSize   = obElemSize;
fvoh.obElemType   = obElemType;
NSMutableData *obSvData;
obSvData=[NSMutableData dataWithBytes:&fvoh length:(sizeof(fvoh))];
[obSvData appendBytes:obElem   length:obFaceDatLen];
[obSvData appendBytes:mvElem   length:obVertDatLen];
[obSvData appendBytes:mvNorm   length:obNormDatLen];
[obSvData appendBytes:obText   length:obTextDatLen];
[obSvData appendBytes:&ds      length:(sizeof(ds))];
// next, we append image data, and write all data to a file
// seems to work fine, no errors, at this point
[obSvData appendBytes: imageData length:[imageData length]];  
BOOL success=[obSvData writeToFile: filePath atomically:YES];
return success; 
}
//======================================================
- (void) load3DFile:(NSString *)filePath {
NSData *fvoData;
NSUInteger offSet,fiLen,fhLen,dsLen;
[[FileList sharedFileList] setCurrFile:(NSString *)filePath];
fvoData=[NSData dataWithContentsOfFile:filePath];
fiLen=[fvoData length];
fhLen=sizeof(fvoh);
dsLen=sizeof(ds);
memcpy(&fvoh,[fvoData bytes],fhLen);offSet=fhLen;
//+++++++++++++++++++++++++++++++
obVertDatLen = fvoh.obVertDatLen;
obFaceDatLen = fvoh.obFaceDatLen;
obNormDatLen = fvoh.obNormDatLen;
obTextDatLen = fvoh.obTextDatLen;
obCompCount  = fvoh.obCompCount;
obVertCount  = fvoh.obVertCount;
obElemCount  = fvoh.obElemCount;
obElemSize   = fvoh.obElemSize;
obElemType   = fvoh.obElemType;
//+++++++++++++++++++++++++++++++
memcpy(obElem, [fvoData bytes]+offSet,obFaceDatLen);offSet+=obFaceDatLen;
memcpy(mvElem, [fvoData bytes]+offSet,obVertDatLen);offSet+=obVertDatLen;
memcpy(mvNorm, [fvoData bytes]+offSet,obNormDatLen);offSet+=obNormDatLen;
memcpy(obText, [fvoData bytes]+offSet,obTextDatLen);offSet+=obTextDatLen;
memcpy(&ds,    [fvoData bytes]+offSet,dsLen);offSet+=dsLen;
// the following seem to read the data into "imageData" just fine, no errors
// NSData *imageData = [fvoData subdataWithRange:NSMakeRange(offSet, imDataLen)];
// NSData *imageData = [fvoData subdataWithRange:NSMakeRange((fiLen-imDataLen), imDataLen)];
// NSData *imageData = [NSData dataWithBytes:[fvoData bytes]+offSet length: imDataLen];
NSData *imageData = [NSData dataWithBytes:[fvoData bytes]+(fiLen-imDataLen) length: imDataLen];
// but the contents of imageData seem to end up in an unexpected format, causing error: 
// "CGContextDrawImage: invalid context 0x0." during CGImageRef creation in "loadTexture"
traceView.image=[UIImage imageWithData:imageData];
[self loadTexture];
}
//======================================================
- (void)loadTexture {
CGImageRef image=traceView.image.CGImage;
CGContextRef texContext;GLubyte* bytes=nil;GLsizei width,height;
if(image){
width=(GLsizei)CGImageGetWidth(image);
height=(GLsizei)CGImageGetHeight(image);
bytes=(GLubyte*) calloc(width*height*4,sizeof(GLubyte));
texContext=CGBitmapContextCreate(bytes,width,height,8,width*4,CGImageGetColorSpace(image),
kCGImageAlphaPremultipliedLast);
CGContextDrawImage(texContext,CGRectMake(0.0,0.0,(CGFloat)width,(CGFloat)height),image);
CGContextRelease(texContext);
}
if(bytes){
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,width,height,0,GL_RGBA,GL_UNSIGNED_BYTE,bytes);
free(bytes);
}
}
//======================================================

I never received any answers to this question, and I finally stumbled upon the solution myself. In my save3DFile code, I was appending the image data to NSMutableData *obSvData using "appendBytes", like this:

[obSvData appendBytes: imageData length:[imageData length]];

I changed it to use "appendData" instead, like this:

[obSvData appendData: imageData];

where imageData had previously been filled with the contents of a UIImage, converted to JPEG format like this:

NSData *imageData = UIImageJPEGRepresentation(image,1.0);

See the full code listing above for context. In any case, using "appendData" instead of "appendBytes" made all the difference. It allowed me to store the image data in the same file as all of the other 3D model data (vertices, indices, normals, etc.), reload all of that data without any problems, and successfully create a textured 3D model from a single file.
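For anyone hitting the same symptom, the root cause is worth spelling out. appendBytes:length: takes a pointer to a raw byte buffer, so passing the NSData object itself hands it the object's own memory (its internal object structure), not the JPEG payload the object wraps. The bytes written to the file are therefore never a valid JPEG, imageWithData: cannot decode them, and loadTexture ends up with a bad bitmap context, hence the "invalid context 0x0" message. A minimal sketch of the failing call and two equivalent fixes (variable names mirror the code above):

// imageData wraps the JPEG bytes; obSvData is the file buffer being built up.
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
NSMutableData *obSvData = [NSMutableData data];

// WRONG: copies raw memory starting at the NSData object pointer itself,
// not the JPEG bytes it wraps.
// [obSvData appendBytes:imageData length:[imageData length]];

// Fix, option 1: let NSMutableData copy the wrapped bytes.
[obSvData appendData:imageData];

// Fix, option 2: equivalent, passing the actual byte pointer explicitly.
// [obSvData appendBytes:[imageData bytes] length:[imageData length]];

Either form writes the real JPEG bytes into the file, so imageWithData: can decode them again on load.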
