Streaming technology has improved significantly, giving rise to ever higher video resolutions, from 480p and below (known as standard definition, or SD) to 720p and above (high definition, or HD).
Video resolution is vital for virtually every app that offers video. A study I recently came across backs this up: 62% of people are more likely to perceive a brand negatively if it provides a poor-quality video experience, and 57% of people are less likely to share a poor-quality video. With this in mind, it's no wonder that so many solutions for enhancing video quality are emerging.
One solution is HDR, or high dynamic range. It is a post-processing technique used in imaging and photography that mimics what the human eye can see by bringing out details in dark areas and improving the contrast. When used in a video player, HDR can deliver richer, more detailed pictures.
Many HDR solutions, however, are let down by frustrating restrictions, such as the lack of a unified technical specification, complex implementation, and a requirement that the source video be in ultra-high definition. I looked for a solution free of such restrictions and, luckily, found one: the HDR Vivid SDK from HMS Core Video Kit. This solution is packed with image-processing features like the opto-electronic transfer function (OETF), tone mapping, and HDR2SDR. With these features, the SDK can equip a video player with richer colors, a higher level of detail, and more.
I used this SDK together with the HDR Ability SDK (which can also be used on its own) to try the latter's brightness adjustment feature, and found that together they deliver an even better HDR video playback experience. On that note, I'd like to share how I used these two SDKs to create a video player.
App Development
Preparations
i. Check whether the device is capable of decoding an HDR Vivid video. If the device has such a capability, the following function will return true.
public boolean isSupportDecode() {
    // Query all codecs available on the device.
    MediaCodecList mcList = new MediaCodecList(MediaCodecList.ALL_CODECS);
    MediaCodecInfo[] mcInfos = mcList.getCodecInfos();
    for (MediaCodecInfo mci : mcInfos) {
        // Skip encoders; only decoders are relevant here.
        if (mci.isEncoder()) {
            continue;
        }
        String[] types = mci.getSupportedTypes();
        String typesArr = Arrays.toString(types);
        // Skip non-HEVC decoders.
        if (!typesArr.contains("hevc")) {
            continue;
        }
        for (String type : types) {
            // Check whether 10-bit HEVC decoding is supported.
            MediaCodecInfo.CodecCapabilities codecCapabilities = mci.getCapabilitiesForType(type);
            for (MediaCodecInfo.CodecProfileLevel codecProfileLevel : codecCapabilities.profileLevels) {
                if (codecProfileLevel.profile == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10
                        || codecProfileLevel.profile == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10
                        || codecProfileLevel.profile == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10Plus) {
                    // A matching profile means 10-bit HEVC decoding is supported.
                    return true;
                }
            }
        }
    }
    // No suitable 10-bit HEVC decoder was found.
    return false;
}
ii. Parse the video to obtain its resolution, OETF, color space, color format, and duration, and save that information in a custom class. In the example below, the class is named VideoInfo.
public class VideoInfo {
    private int width;        // Video width, in pixels.
    private int height;       // Video height, in pixels.
    private int tf;           // Transfer function (OETF) of the source.
    private int colorSpace;   // Color space of the source.
    private int colorFormat;  // Color format of the source.
    private long durationUs;  // Duration, in microseconds.
}
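To show how this information might be gathered, here is a minimal sketch using Android's MediaExtractor. The set* methods are hypothetical accessors (VideoInfo above only declares fields), the color metadata keys require API level 24 or higher, and some containers omit them entirely, so treat this as an illustration rather than a complete parser.
// Requires android.media.MediaExtractor, android.media.MediaFormat, and java.io.IOException.
// Sketch only: the set* methods are hypothetical accessors for VideoInfo.
public VideoInfo parseVideoInfo(String videoPath) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    VideoInfo info = new VideoInfo();
    try {
        extractor.setDataSource(videoPath);
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime == null || !mime.startsWith("video/")) {
                continue;
            }
            info.setWidth(format.getInteger(MediaFormat.KEY_WIDTH));
            info.setHeight(format.getInteger(MediaFormat.KEY_HEIGHT));
            if (format.containsKey(MediaFormat.KEY_DURATION)) {
                info.setDurationUs(format.getLong(MediaFormat.KEY_DURATION));
            }
            // Color metadata is optional in many containers, so guard each key.
            if (format.containsKey(MediaFormat.KEY_COLOR_TRANSFER)) {
                info.setTf(format.getInteger(MediaFormat.KEY_COLOR_TRANSFER));
            }
            if (format.containsKey(MediaFormat.KEY_COLOR_STANDARD)) {
                info.setColorSpace(format.getInteger(MediaFormat.KEY_COLOR_STANDARD));
            }
            if (format.containsKey(MediaFormat.KEY_COLOR_FORMAT)) {
                info.setColorFormat(format.getInteger(MediaFormat.KEY_COLOR_FORMAT));
            }
            break;
        }
    } finally {
        extractor.release();
    }
    return info;
}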
iii. Create a SurfaceView object that will be used by the SDK to process the rendered images.
// surface_view is defined in a layout file.
SurfaceView surfaceView = (SurfaceView) view.findViewById(R.id.surface_view);
iv. Create a thread to parse the video stream.
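A minimal sketch of such a worker thread is shown here; parseVideoStream() is a hypothetical placeholder standing in for your own demuxing logic.
// Sketch only: parseVideoStream() stands in for the app's demuxing logic.
Thread parserThread = new Thread(() -> {
    parseVideoStream();
}, "video-parser-thread");
parserThread.start();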
Rendering and Transcoding a Video
i. Create and then initialize an instance of HdrVividRender.
HdrVividRender hdrVividRender = new HdrVividRender();
hdrVividRender.init();
ii. Configure the OETF and resolution for the video source.
// Configure the OETF.
hdrVividRender.setTransFunc(2);
// Configure the resolution.
hdrVividRender.setInputVideoSize(3840, 2160);
Note that when the SDK is used on an Android device, only the rendering mode is supported for input.
iii. Configure the brightness for the output. This step is optional.
hdrVividRender.setBrightness(700);
iv. Create a Surface object to serve as the input. This method is called when HdrVividRender works in rendering mode, and the created Surface object is passed to the SDK as the inputSurface parameter of configure.
Surface inputSurface = hdrVividRender.createInputSurface();
v. Configure the output parameters.
- Set the dimensions of the rendered Surface object. This step is necessary in the rendering mode for output.
// surfaceView is the video playback window.
hdrVividRender.setOutputSurfaceSize(surfaceView.getWidth(), surfaceView.getHeight());
- Set the color space for the buffered output video. This step is optional and applies to the transcoding mode for output; if no color space is set, BT.709 is used by default.
hdrVividRender.setColorSpace(HdrVividRender.COLORSPACE_P3);
- Set the color format for the buffered output video. This step is optional and applies to the transcoding mode for output; if no color format is specified, R8G8B8A8 is used by default.
hdrVividRender.setColorFormat(HdrVividRender.COLORFORMAT_R8G8B8A8);
vi. When the rendering mode is used as the output mode, the following APIs are required.
hdrVividRender.configure(inputSurface, new HdrVividRender.InputCallback() {
@Override
public int onGetDynamicMetaData(HdrVividRender hdrVividRender, long pts) {
// Set the static metadata, which needs to be obtained from the video source.
HdrVividRender.StaticMetaData lastStaticMetaData = new HdrVividRender.StaticMetaData();
hdrVividRender.setStaticMetaData(lastStaticMetaData);
// Set the dynamic metadata, which also needs to be obtained from the video source.
ByteBuffer dynamicMetaData = ByteBuffer.allocateDirect(10);
hdrVividRender.setDynamicMetaData(20000, dynamicMetaData);
return 0;
}
}, surfaceView.getHolder().getSurface(), null);
vii. When the transcoding mode is used as the output mode, call the following APIs.
hdrVividRender.configure(inputSurface, new HdrVividRender.InputCallback() {
@Override
public int onGetDynamicMetaData(HdrVividRender hdrVividRender, long pts) {
// Set the static metadata, which needs to be obtained from the video source.
HdrVividRender.StaticMetaData lastStaticMetaData = new HdrVividRender.StaticMetaData();
hdrVividRender.setStaticMetaData(lastStaticMetaData);
// Set the dynamic metadata, which also needs to be obtained from the video source.
ByteBuffer dynamicMetaData = ByteBuffer.allocateDirect(10);
hdrVividRender.setDynamicMetaData(20000, dynamicMetaData);
return 0;
}
}, null, new HdrVividRender.OutputCallback() {
@Override
public void onOutputBufferAvailable(HdrVividRender hdrVividRender, ByteBuffer byteBuffer,
HdrVividRender.BufferInfo bufferInfo) {
// Process the buffered data.
}
});
HdrVividRender.OutputCallback processes the returned buffered data asynchronously. If the callback is not used, the read method can be called instead. For example:
hdrVividRender.read(new BufferInfo(), 10); // 10 is a timestamp determined by your app.
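Back on the callback path, here is one hedged sketch of what "Process the buffered data" might look like; outputChannel is a hypothetical java.nio.channels.FileChannel opened by the app to collect the transcoded frames, and the error handling is a placeholder.
// Requires android.util.Log and java.io.IOException.
// Sketch only: outputChannel is a hypothetical FileChannel owned by the app.
@Override
public void onOutputBufferAvailable(HdrVividRender hdrVividRender, ByteBuffer byteBuffer,
    HdrVividRender.BufferInfo bufferInfo) {
    try {
        // Append the transcoded frame data to a file opened by the app.
        outputChannel.write(byteBuffer);
    } catch (IOException e) {
        Log.e("HdrTranscode", "Failed to write output buffer", e);
    }
}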
viii. Start the processing flow.
hdrVividRender.start();
ix. Stop the processing flow.
hdrVividRender.stop();
x. Release the resources that have been occupied.
hdrVividRender.release();
hdrVividRender = null;
During these steps, I noticed that whenever the dimensions of the output Surface change, setOutputSurfaceSize has to be called to re-configure the output dimensions.
Besides, in the rendering mode for output, when WisePlayer is switched from the background to the foreground or vice versa, the Surface object is destroyed and then re-created, while the HdrVividRender instance may still be alive. In that case, the setOutputSurface API needs to be called to set the new Surface as the output, as sketched below.
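A minimal sketch covering both cases with Android's SurfaceHolder.Callback might look like this; it assumes the HdrVividRender instance outlives the Surface and that setOutputSurface accepts the new Surface object.
// Sketch only: assumes hdrVividRender stays alive across Surface re-creation.
surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // The Surface was re-created (e.g., the app returned to the foreground),
        // so hand the new Surface to the SDK (assumed signature).
        hdrVividRender.setOutputSurface(holder.getSurface());
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // Re-configure the output dimensions whenever the Surface size changes.
        hdrVividRender.setOutputSurfaceSize(width, height);
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // Nothing to do here; the SDK instance is released elsewhere.
    }
});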
Setting Up HDR Capabilities
HDR capabilities are provided by the HdrAbility class, which can be used to adjust brightness while the HDR Vivid SDK renders or transcodes an HDR Vivid video.
i. Initialize the function of brightness adjustment.
HdrAbility.init(getApplicationContext());
ii. Enable the HDR feature on the device. This increases the maximum brightness of the device screen.
HdrAbility.setHdrAbility(true);
iii. Configure the alternative maximum brightness of white points in the output video image data.
HdrAbility.setBrightness(600);
iv. Highlight the video layer.
HdrAbility.setHdrLayer(surfaceView, true);
v. Highlight the subtitle layer or the bullet comment layer.
HdrAbility.setCaptionsLayer(captionView, 1.5f);
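Putting steps i through v together, a minimal setup sketch might look like this; captionView stands in for the app's subtitle or bullet comment view.
// Call sequence for the HDR Ability SDK, following the steps above.
HdrAbility.init(getApplicationContext());       // Initialize brightness adjustment.
HdrAbility.setHdrAbility(true);                 // Raise the screen's maximum brightness.
HdrAbility.setBrightness(600);                  // Alternative maximum brightness of white points.
HdrAbility.setHdrLayer(surfaceView, true);      // Highlight the video layer.
HdrAbility.setCaptionsLayer(captionView, 1.5f); // Highlight the subtitle/bullet comment layer.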
Summary
Video resolution is an important factor in the user experience of mobile apps. HDR is often used to post-process video, but it is held back by a number of restrictions, which the HDR Vivid SDK from Video Kit resolves.
This SDK provides image-processing features such as the OETF, tone mapping, and HDR2SDR, allowing it to mimic what the human eye can see and deliver immersive videos, which can be enhanced even further with the help of the HDR Ability SDK from the same kit. The functionality and straightforward integration process of these SDKs make them ideal for implementing HDR in a mobile app.