1. Introduction
In "简单了解 iOS CVPixelBuffer (中)" we looked at the differences between the RGB and YUV color spaces along with the relevant background, and finished by decoding the kCVPixelFormatType constants used by CVPixelBuffer. With that initial understanding in place, this article continues with the format conversions you will run into when actually working with CVPixelBuffer.
RGB and YUV Format Conversion
In many scenarios we need to convert between color spaces to solve a particular engineering problem. The commonly used conversion formulas (built on the BT.601 luma coefficients) are:
1.1 YUV -> RGB
R = Y + 1.13983 * V
G = Y - 0.39465 * U - 0.58060 * V
B = Y + 2.03211 * U
1.2 RGB -> YUV
Y = 0.299 * R + 0.587 * G + 0.114 * B
U = -0.14713 * R - 0.28886 * G + 0.436 * B
V = 0.615 * R - 0.51499 * G - 0.10001 * B
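To make the formulas concrete, here is a per-pixel sketch in C (the struct and function names are ours, purely for illustration; production code should use vImage or libyuv, which are SIMD-optimized):

// Minimal per-pixel RGB <-> YUV sketch of the formulas above.
// The types are illustrative, not framework APIs.
typedef struct { float r, g, b; } RGBF;
typedef struct { float y, u, v; } YUVF;

static inline YUVF RGBToYUV(RGBF c) {
    YUVF o;
    o.y =  0.299f   * c.r + 0.587f   * c.g + 0.114f   * c.b;
    o.u = -0.14713f * c.r - 0.28886f * c.g + 0.436f   * c.b;
    o.v =  0.615f   * c.r - 0.51499f * c.g - 0.10001f * c.b;
    return o;
}

static inline RGBF YUVToRGB(YUVF c) {
    RGBF o;
    o.r = c.y + 1.13983f * c.v;
    o.g = c.y - 0.39465f * c.u - 0.58060f * c.v;
    o.b = c.y + 2.03211f * c.u;
    return o;
}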
2. Common Format Conversions in iOS
Converting between RGB and YUV on iOS is usually done with the open-source libyuv library; the official repository can be hard to reach from mainland China, but domestic mirrors are available. Working with CVPixelBuffer also adds some ceremony of its own: before operating on a buffer's memory you must protect it by calling CVPixelBufferLockBaseAddress, and once you are done you must unlock it with CVPixelBufferUnlockBaseAddress.
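In its minimal form the pattern looks like this (a read-only sketch; pass 0 instead of kCVPixelBufferLock_ReadOnly when you intend to write into the buffer):

// Minimal lock/unlock pattern for reading a CVPixelBuffer.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
size_t width  = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
void  *base   = CVPixelBufferGetBaseAddress(pixelBuffer);
// ... read pixel data here ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);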
2.1 NV12 to I420
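A sketch of this conversion using libyuv's NV12ToI420, assuming a destination buffer created as the three-plane kCVPixelFormatType_420YpCbCr8Planar format (the method name is ours, and error handling for the conversion call is trimmed for brevity):

/// NV12 to I420 — a sketch using libyuv (requires #include "libyuv.h")
+ (CVPixelBufferRef)I420PixelBufferWithNV12:(CVPixelBufferRef)nv12Buffer {
    CVPixelBufferLockBaseAddress(nv12Buffer, kCVPixelBufferLock_ReadOnly);
    size_t width  = CVPixelBufferGetWidth(nv12Buffer);
    size_t height = CVPixelBufferGetHeight(nv12Buffer);
    const uint8_t *src_y  = CVPixelBufferGetBaseAddressOfPlane(nv12Buffer, 0);
    const uint8_t *src_uv = CVPixelBufferGetBaseAddressOfPlane(nv12Buffer, 1);
    size_t src_stride_y  = CVPixelBufferGetBytesPerRowOfPlane(nv12Buffer, 0);
    size_t src_stride_uv = CVPixelBufferGetBytesPerRowOfPlane(nv12Buffer, 1);
    // Create a three-plane (Y, U, V) destination buffer.
    NSDictionary *attrs = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef i420Buffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_420YpCbCr8Planar,
                                          (__bridge CFDictionaryRef)attrs, &i420Buffer);
    if (result != kCVReturnSuccess) {
        CVPixelBufferUnlockBaseAddress(nv12Buffer, kCVPixelBufferLock_ReadOnly);
        return NULL;
    }
    CVPixelBufferLockBaseAddress(i420Buffer, 0);
    uint8_t *dst_y = CVPixelBufferGetBaseAddressOfPlane(i420Buffer, 0);
    uint8_t *dst_u = CVPixelBufferGetBaseAddressOfPlane(i420Buffer, 1);
    uint8_t *dst_v = CVPixelBufferGetBaseAddressOfPlane(i420Buffer, 2);
    // Deinterleave the UV plane into separate U and V planes.
    NV12ToI420(src_y,  (int)src_stride_y,
               src_uv, (int)src_stride_uv,
               dst_y, (int)CVPixelBufferGetBytesPerRowOfPlane(i420Buffer, 0),
               dst_u, (int)CVPixelBufferGetBytesPerRowOfPlane(i420Buffer, 1),
               dst_v, (int)CVPixelBufferGetBytesPerRowOfPlane(i420Buffer, 2),
               (int)width, (int)height);
    CVPixelBufferUnlockBaseAddress(i420Buffer, 0);
    CVPixelBufferUnlockBaseAddress(nv12Buffer, kCVPixelBufferLock_ReadOnly);
    return i420Buffer; // caller is responsible for CVPixelBufferRelease
}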
2.2 NV12 to BGRA
/// NV12 to BGRA (requires libyuv: #include "libyuv.h")
+ (CVPixelBufferRef)BGRAPixelBufferWithNV12:(CVImageBufferRef)pixelBufferNV12 {
    CVPixelBufferLockBaseAddress(pixelBufferNV12, 0);
    // Image width and height in pixels
    size_t pixelWidth = CVPixelBufferGetWidth(pixelBufferNV12);
    size_t pixelHeight = CVPixelBufferGetHeight(pixelBufferNV12);
    // Bytes per row of the Y and UV planes
    size_t src_stride_y = CVPixelBufferGetBytesPerRowOfPlane(pixelBufferNV12, 0);
    size_t src_stride_uv = CVPixelBufferGetBytesPerRowOfPlane(pixelBufferNV12, 1);
    // Base addresses of the Y and UV planes
    uint8_t *src_y = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBufferNV12, 0);
    uint8_t *src_uv = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBufferNV12, 1);
    // Create an empty 32BGRA CVPixelBufferRef
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBufferBGRA = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          pixelWidth, pixelHeight,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)pixelAttributes,
                                          &pixelBufferBGRA);
    if (result != kCVReturnSuccess) {
        NSLog(@"Unable to create cvpixelbuffer %d", result);
        CVPixelBufferUnlockBaseAddress(pixelBufferNV12, 0);
        return NULL;
    }
    result = CVPixelBufferLockBaseAddress(pixelBufferBGRA, 0);
    if (result != kCVReturnSuccess) {
        NSLog(@"Failed to lock base address: %d", result);
        CFRelease(pixelBufferBGRA);
        CVPixelBufferUnlockBaseAddress(pixelBufferNV12, 0);
        return NULL;
    }
    // Base address and stride of the newly created BGRA buffer (non-planar,
    // so use CVPixelBufferGetBytesPerRow rather than the per-plane variant)
    uint8_t *bgra_data = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBufferBGRA);
    size_t bgraStride = CVPixelBufferGetBytesPerRow(pixelBufferBGRA);
    // libyuv's "ARGB" names the 32-bit word order; in memory the bytes are
    // B,G,R,A, which matches kCVPixelFormatType_32BGRA on little-endian devices.
    int ret = NV12ToARGB(src_y, (int)src_stride_y,
                         src_uv, (int)src_stride_uv,
                         bgra_data, (int)bgraStride,
                         (int)pixelWidth, (int)pixelHeight);
    if (ret) {
        NSLog(@"Error converting NV12 VideoFrame to BGRA: %d", ret);
        CVPixelBufferUnlockBaseAddress(pixelBufferBGRA, 0);
        CFRelease(pixelBufferBGRA);
        CVPixelBufferUnlockBaseAddress(pixelBufferNV12, 0);
        return NULL;
    }
    CVPixelBufferUnlockBaseAddress(pixelBufferBGRA, 0);
    CVPixelBufferUnlockBaseAddress(pixelBufferNV12, 0);
    return pixelBufferBGRA;
}
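Because the returned buffer comes from CVPixelBufferCreate with a +1 retain count, the caller is responsible for releasing it. A hypothetical call site (Converter is a placeholder class name):

CVPixelBufferRef bgra = [Converter BGRAPixelBufferWithNV12:nv12Buffer];
if (bgra) {
    // ... use the BGRA buffer ...
    CVPixelBufferRelease(bgra);
}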
2.3 CVPixelBufferRef to UIImage
The following method converts a video frame into a single image. It is best suited to occasional snapshots taken at long intervals; calling it at high frequency is likely to cause memory problems.
/// buffer to image
+ (UIImage *)convert:(CVPixelBufferRef)pixelBuffer {
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:ciImage
                                                   fromRect:CGRectMake(0, 0,
                                                                       CVPixelBufferGetWidth(pixelBuffer),
                                                                       CVPixelBufferGetHeight(pixelBuffer))];
    if (!videoImage) {
        return nil; // rendering can fail, e.g. for an unsupported pixel format
    }
    UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
    CGImageRelease(videoImage);
    return uiImage;
}
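Part of the memory pressure mentioned above comes from creating a new CIContext on every call; CIContext is expensive to build and is meant to be reused. A minimal sketch of caching one:

// Reuse one CIContext instead of creating one per frame.
static CIContext *sharedContext = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    sharedContext = [CIContext contextWithOptions:nil];
});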
2.4 CGImageRef to CVPixelBufferRef
The following method turns a single image into a CVPixelBuffer (useful when you need to convert a frame image into a buffer to add subtitles, beauty filters, stickers, and so on).
/// image to buffer
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
    NSDictionary *options = @{
        (NSString *)kCVPixelBufferCGImageCompatibilityKey : @YES,
        (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
        (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}
    };
    CVPixelBufferRef pxbuffer = NULL;
    size_t frameWidth = CGImageGetWidth(image);
    size_t frameHeight = CGImageGetHeight(image);
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          frameWidth,
                                          frameHeight,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // For a 32BGRA buffer the bitmap context must use little-endian byte order;
    // without kCGBitmapByteOrder32Little the red and blue channels end up swapped.
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 frameWidth,
                                                 frameHeight,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, frameWidth, frameHeight), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
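As with the NV12 conversion above, the buffer is returned with a +1 retain count, so the call site should release it (Converter and the asset name are placeholders):

UIImage *overlay = [UIImage imageNamed:@"sticker"];
CVPixelBufferRef buffer = [Converter pixelBufferFromCGImage:overlay.CGImage];
// ... composite subtitles / stickers onto the buffer ...
CVPixelBufferRelease(buffer);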
2.5 Buffer Data to UIImage
The following method builds an image from raw memory: it reads the bytes stored at the given address (the address is effectively the buffer itself) and produces a UIImage.
// NV12 to image
+ (UIImage *)YUVtoUIImage:(int)w h:(int)h buffer:(unsigned char *)buffer {
    // YUV(NV12) --> CVPixelBuffer --> CIImage --> UIImage
    NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          w,
                                          h,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)pixelAttributes,
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        NSLog(@"Unable to create cvpixelbuffer %d", result);
        return nil;
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // y_ch0 is the Y plane, y_ch1 the interleaved UV plane of the NV12 data.
    unsigned char *y_ch0 = buffer;
    unsigned char *y_ch1 = buffer + w * h;
    // Copy row by row: the destination planes may be padded, so their
    // bytes-per-row can be larger than the image width.
    uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t yStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    for (int row = 0; row < h; row++) {
        memcpy(yDestPlane + row * yStride, y_ch0 + row * w, w);
    }
    uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t uvStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    for (int row = 0; row < h / 2; row++) {
        memcpy(uvDestPlane + row * uvStride, y_ch1 + row * w, w);
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    // CIImage conversion
    CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:coreImage
                                                   fromRect:CGRectMake(0, 0, w, h)];
    UIImage *finalImage = [[UIImage alloc] initWithCGImage:videoImage];
    CVPixelBufferRelease(pixelBuffer);
    CGImageRelease(videoImage);
    return finalImage;
}
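Note that this method expects buffer to point at a tightly packed NV12 frame: w * h bytes of Y followed by (w * h) / 2 bytes of interleaved UV, i.e. w * h * 3 / 2 bytes in total.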
2.6 Buffer to NSData
+ (NSData *)dataFrompixelBuffer:(CVPixelBufferRef)pixelBuffer {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    size_t pixelWidth = CVPixelBufferGetWidth(pixelBuffer);
    size_t pixelHeight = CVPixelBufferGetHeight(pixelBuffer);
    size_t y_size = pixelWidth * pixelHeight;
    size_t uv_size = y_size / 2;
    uint8_t *yuv_frame = (uint8_t *)malloc(y_size + uv_size);
    // Copy row by row so that plane padding (bytes-per-row > width) is
    // stripped and the output is tightly packed NV12.
    uint8_t *y_frame = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t y_stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    for (size_t row = 0; row < pixelHeight; row++) {
        memcpy(yuv_frame + row * pixelWidth, y_frame + row * y_stride, pixelWidth);
    }
    uint8_t *uv_frame = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t uv_stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    for (size_t row = 0; row < pixelHeight / 2; row++) {
        memcpy(yuv_frame + y_size + row * pixelWidth, uv_frame + row * uv_stride, pixelWidth);
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    // dataWithBytesNoCopy: takes ownership of yuv_frame and frees it on dealloc.
    NSData *data = [NSData dataWithBytesNoCopy:yuv_frame length:y_size + uv_size];
    return data;
}
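Combined with the method from 2.5, this gives a simple round trip from buffer to bytes and back to an image (a sketch; Converter is a placeholder class name):

NSData *nv12Data = [Converter dataFrompixelBuffer:pixelBuffer];
UIImage *image = [Converter YUVtoUIImage:(int)CVPixelBufferGetWidth(pixelBuffer)
                                       h:(int)CVPixelBufferGetHeight(pixelBuffer)
                                  buffer:(unsigned char *)nv12Data.bytes];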
Format conversion on iOS mixes C, Objective-C, and C++ APIs, so many of these methods look verbose; following them takes some groundwork and steady study. There are also plenty of approaches that render images through OpenGL, which are even harder to read. Beginners should take notes, stay patient, organize the commonly used methods, and wrap them in a utility class, coming back to study the internals in depth when the opportunity arises.
3. References
Original article: 简单了解 iOS CVPixelBuffer (下) – 掘金
Source: https://blog.csdn.net/irainsa/article/details/129759097