This article collects typical usage examples of the C++ method osg::Image::getRowSizeInBytes. If you are wondering how Image::getRowSizeInBytes is used in practice, the hand-picked code example below may help. You can also read further about the containing class, osg::Image.
One code example of Image::getRowSizeInBytes is shown below.
Example 1: CreateCGImageFromOSGData
/* Create a CGImageRef from osg::Image.
* Code adapted from
* http://developer.apple.com/samplecode/OpenGLScreenSnapshot/listing2.html
*/
CGImageRef CreateCGImageFromOSGData(const osg::Image& osg_image)
{
size_t image_width = osg_image.s();
size_t image_height = osg_image.t();
/* From Apple's header for CGBitmapContextCreate()
* Each row of the bitmap consists of `bytesPerRow' bytes, which must be at
* least `(width * bitsPerComponent * number of components + 7)/8' bytes.
*/
size_t target_bytes_per_row;
CGColorSpaceRef color_space;
CGBitmapInfo bitmap_info;
/* From what I can figure out so far...
* We need to create a CGContext connected to the data we want to save
* and then call CGBitmapContextCreateImage() on that context to get
* a CGImageRef.
* However, OS X only allows 4-component image formats (e.g. RGBA) and not
* just RGB for the RGB-based CGContext. So for a 24-bit image coming in,
* we need to expand the data to 32-bit.
* The easiest and fastest way to do that is through the vImage framework
* which is part of the Accelerate framework.
* Also, the osg::Image data coming in is inverted from what we want, so
* we need to invert the image too. Since the osg::Image is const,
* we don't want to touch the data, so again we turn to the vImage framework
* and invert the data.
*/
vImage_Buffer vimage_buffer_in =
{
(void*)osg_image.data(), // need to override const, but we don't modify the data so it's safe
image_height,
image_width,
osg_image.getRowSizeInBytes()
};
void* out_image_data;
vImage_Buffer vimage_buffer_out =
{
NULL, // filled in per-format in the switch below
image_height,
image_width,
0 // filled in per-format in the switch below
};
vImage_Error vimage_error_flag;
// FIXME: Do I want to use format, type, or internalFormat?
switch(osg_image.getPixelFormat())
{
case GL_LUMINANCE:
{
bitmap_info = kCGImageAlphaNone;
target_bytes_per_row = (image_width * 8 + 7)/8;
//color_space = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray);
color_space = CGColorSpaceCreateDeviceGray();
if(NULL == color_space)
{
return NULL;
}
// out_image_data = calloc(target_bytes_per_row, image_height);
out_image_data = malloc(target_bytes_per_row * image_height);
if(NULL == out_image_data)
{
OSG_WARN << "In CreateCGImageFromOSGData, malloc failed" << std::endl;
CGColorSpaceRelease(color_space);
return NULL;
}
vimage_buffer_out.data = out_image_data;
vimage_buffer_out.rowBytes = target_bytes_per_row;
// Now invert the image
vimage_error_flag = vImageVerticalReflect_Planar8(
&vimage_buffer_in, // since the osg_image is const...
&vimage_buffer_out, // don't reuse the buffer
kvImageNoFlags
);
if(vimage_error_flag != kvImageNoError)
{
OSG_WARN << "In CreateCGImageFromOSGData for GL_LUMINANCE, vImageVerticalReflect_Planar8 failed with vImage Error Code: " << vimage_error_flag << std::endl;
free(out_image_data);
CGColorSpaceRelease(color_space);
return NULL;
}
break;
}
case GL_ALPHA:
{
bitmap_info = kCGImageAlphaOnly;
target_bytes_per_row = (image_width * 8 + 7)/8;
// According to:
// http://developer.apple.com/qa/qa2001/qa1037.html
// colorSpace=NULL is for alpha only
color_space = NULL;
//......... remainder of the example omitted .........