How to load an stb_truetype bitmap into an OpenGL texture

I have the following code, which creates a bitmap from a std::wstring and then saves that bitmap to an image.

#define STB_IMAGE_WRITE_IMPLEMENTATION // in exactly one source file
#define STB_TRUETYPE_IMPLEMENTATION    // in exactly one source file
#include <stb_image_write.h>
#include <stb_truetype.h>

#include <cmath>
#include <cstdio>
#include <vector>

unsigned char buffer[24 << 20];
stbtt_fontinfo font;
// Error handling omitted for brevity.
fread(buffer, 1, 1000000, fopen("../assets/fonts/unibody_8.ttf", "rb"));
stbtt_InitFont(&font, buffer, 0);

float scale = stbtt_ScaleForPixelHeight(&font, 48 * 2);
int ascent, descent;
stbtt_GetFontVMetrics(&font, &ascent, &descent, 0);
int baseline = (int)(ascent * scale);

// content is the std::wstring being rendered.
int width = 0;
for (const auto &character : content) {
    int advance, leftSideBearing;
    stbtt_GetCodepointHMetrics(&font, character, &advance, &leftSideBearing);
    width += advance * scale;
}
int height = (ascent - descent) * scale; // descent is negative
std::vector<unsigned char> pixels((size_t)(width * height), (unsigned char)0);

float xpos = 0.0f;
int characterIndex = 0;
while (content[characterIndex]) {
    int advance, lsb, x0, y0, x1, y1;
    float x_shift = xpos - (float)floor(xpos);
    stbtt_GetCodepointHMetrics(&font, content[characterIndex], &advance, &lsb);
    stbtt_GetCodepointBitmapBoxSubpixel(
        &font, content[characterIndex], scale, scale, x_shift, 0,
        &x0, &y0, &x1, &y1);
    // Offset of this glyph's top-left corner within the bitmap.
    auto stride = width * (baseline + y0) + (int)xpos + x0;
    stbtt_MakeCodepointBitmapSubpixel(
        &font, &pixels.at(0) + stride, x1 - x0, y1 - y0, width,
        scale, scale, x_shift, 0, content[characterIndex]);
    xpos += (advance * scale);
    if (content[characterIndex + 1]) {
        int kernAdvance = stbtt_GetCodepointKernAdvance(
            &font, content[characterIndex], content[characterIndex + 1]);
        xpos += scale * kernAdvance;
    }
    ++characterIndex;
}
// This step works fine, which means the data in pixels is good.
stbi_write_png("image.png", width, height, 1, pixels.data(), 0);

What I would like to do now is load the bitmap into an OpenGL texture, but that step seems to crash the application.

uint32_t id = 0;
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// This crashes the application, I guess it's because the data in pixels is not valid for an OpenGL texture
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_RGBA,
    width,
    height,
    0,
    GL_RGBA,
    GL_UNSIGNED_BYTE,
    pixels.data());

I tried to catch an exception, but the program just crashes silently, so I do not know how to deal with this.

Here:

std::vector<unsigned char> pixels(
    (size_t)(width * height), (unsigned char)0);

you set the pixel data up as a monochrome bitmap of width * height, with 8 bits per pixel.

But here:

glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_RGBA,
    width,
    height,
    0,
    GL_RGBA,
    GL_UNSIGNED_BYTE,
    pixels.data());

you tell the GL that there are 32 bits per pixel, 8 bits each for red, green, blue and alpha. As a result, the GL will read 4 times as much memory as you actually supplied, reading past the end of your buffer and, if you are lucky, crashing the application with a segmentation fault at some later point.
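To make the mismatch concrete, here is a hypothetical sanity check (not part of the original code; it needs <cassert>):

// The buffer holds exactly 1 byte per pixel...
assert(pixels.size() == (size_t)width * height);
// ...but GL_RGBA + GL_UNSIGNED_BYTE tells the GL to read
// (size_t)width * height * 4 bytes, 4 times the size of the buffer.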

You did not specify which GL version you are using. With modern GL, you can use GL_RED as the texture format and handle the conversion to RGBA directly in the shader, or via the texture swizzle state. With ancient legacy GL, you might have to use the GL_LUMINANCE format instead.
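For the modern-GL path, a minimal sketch of a corrected upload might look like the following (assuming a GL 3.3+ context, where texture swizzling is core; the swizzle mask is one possible choice that replicates the red channel into all four components):

glBindTexture(GL_TEXTURE_2D, id);

// 1 byte per pixel on both sides: internal format and pixel format are GL_RED.
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_RED,
    width,
    height,
    0,
    GL_RED,
    GL_UNSIGNED_BYTE,
    pixels.data());

// Expand R to RGBA at sampling time, so texture() in the shader
// returns (r, r, r, r) without any shader changes.
GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_RED };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);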

You also have to be aware that the GL's default unpack row alignment is 4 bytes, so if your width is not a multiple of 4, you also have to explicitly set glPixelStorei(GL_UNPACK_ALIGNMENT, 1) to match the alignment of your data.
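For example, immediately before the glTexImage2D call:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows of the bitmap are tightly packed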
