Using the FFmpeg command line on iOS

According to the official documentation, FFmpeg is a set of open-source programs for recording and converting digital audio and video and turning them into streams; it provides a complete solution for recording, converting, and streaming audio and video. The FFmpeg codebase consists of two parts. One part is the libraries, where all the APIs live: if you call the APIs directly to manipulate video, you have to write C or C++. The other part is the tools, which are driven from the command line, so you can carry out video operations without writing code yourself.

1. Functions of each module:

• libavformat: generating and parsing the various audio/video container formats;
• libavcodec: encoding and decoding the various audio and image formats;
• libavutil: shared utility functions;
• libswscale: video image scaling and color-space/pixel-format conversion;
• libpostproc: post-processing effects;
• ffmpeg: the tool provided by the project, used for format conversion, decoding, real-time encoding from TV cards, and so on;
• ffserver: an HTTP multimedia real-time broadcast streaming server;
• ffplay: a simple player that parses and decodes with the FFmpeg libraries and displays via SDL.
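To make the library half of that split concrete, here is a minimal C sketch that calls the APIs directly. It assumes an existing media file path, uses only libavformat/libavutil calls, and is purely illustrative rather than part of the iOS setup below:

#include <libavformat/avformat.h>
#include <libavutil/avutil.h>

// Open a media file and log its duration by calling the libraries directly.
int print_duration(const char *path) {
    AVFormatContext *ic = NULL;
    // Open the input and read the container header
    if (avformat_open_input(&ic, path, NULL, NULL) < 0)
        return -1;
    // Fill in stream information (codecs, duration, ...)
    if (avformat_find_stream_info(ic, NULL) < 0) {
        avformat_close_input(&ic);
        return -1;
    }
    // ic->duration is expressed in AV_TIME_BASE (microsecond) units
    av_log(NULL, AV_LOG_INFO, "duration: %.2f s\n",
           ic->duration / (double)AV_TIME_BASE);
    avformat_close_input(&ic);
    return 0;
}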

This article mainly covers how to integrate an already-compiled FFmpeg library into an iOS project and use FFmpeg commands. As for how to compile FFmpeg itself, there are plenty of tutorials; I won't repeat them here, search for one yourself.

2. Importing FFmpeg into the iOS project and configuring the command-line tools

1. After a successful compilation you get a directory such as FFmpeg-iOS, which contains two subdirectories, lib and include. Drag FFmpeg-iOS directly into the project.

2. Add the required system libraries: under Build Phases – Link Binary With Libraries, add libz.tbd, libbz2.tbd, libiconv.tbd, CoreMedia.framework, VideoToolbox.framework, and AudioToolbox.framework

3. Set Header Search Paths to point to the include directory in the project
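For example, if FFmpeg-iOS sits at the project root, the value would typically be $(SRCROOT)/FFmpeg-iOS/include; this exact path is an assumption, so adjust it to wherever you dragged the directory.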

  • At this point Command+B should already compile successfully. If you only intend to use the FFmpeg API, the configuration ends here. The command-line configuration below is based on FFmpeg 4.2; other versions may differ in small ways.

4. Find the command-line tool sources (ffmpeg.c, ffmpeg.h, ffmpeg_opt.c, ffmpeg_filter.c, ffmpeg_hw.c, cmdutils.c, cmdutils.h) in the source tree's ffmpeg-4.2/fftools directory and import them into the project

The config.h file lives in the scratch folder at the same level as the ffmpeg-4.2 folder. scratch contains one subdirectory per architecture; for a physical device you can use arm64.

5. Modify the command-line tool code

  • Search the project globally for these header files and comment out all references to them:

#include "compat/va_copy.h"
#include "libavresample/avresample.h"
#include "libpostproc/postprocess.h"
#include "libavutil/libm.h"
#include "libavutil/time_internal.h"
#include "libavutil/internal.h"
#include "libavformat/network.h"
#include "libavcodec/mathops.h"
#include "libavformat/os_support.h"
#include "libavutil/thread.h"
  • In the ffmpeg.c file, comment out the following function calls and import the needed system header file with #include

nb0_frames = nb_frames = mid_pred(ost->last_nb0_frames[0],
                                          ost->last_nb0_frames[1],
                                          ost->last_nb0_frames[2]);

ff_dlog(NULL, "force_key_frame: n:%f n_forced:%f prev_forced_n:%f t:%f prev_forced_t:%f -> res:%f\n",
                    ost->forced_keyframes_expr_const_values[FKF_N],
                    ost->forced_keyframes_expr_const_values[FKF_N_FORCED],
                    ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_N],
                    ost->forced_keyframes_expr_const_values[FKF_T],
                    ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_T],
                    res);
  • Comment out the following two lines of code in the print_all_libs_info function in the cmdutils.c file

PRINT_LIB_INFO(avresample, AVRESAMPLE, flags, level);
PRINT_LIB_INFO(postproc, POSTPROC, flags, level);
  • Comment out the following two lines of code in the ffmpeg_opt.c file

{ "videotoolbox", videotoolbox_init, HWACCEL_VIDEOTOOLBOX, AV_PIX_FMT_VIDEOTOOLBOX },
?
{ "videotoolbox_pixfmt", HAS_ARG | OPT_STRING | OPT_EXPERT, { & amp;videotoolbox_pixfmt}, "" },
  • To solve the duplicate main function problem, modify the code as follows

In the ffmpeg.h file, add the function declaration:

int ffmpeg_main(int argc, char **argv);

In the ffmpeg.c file, rename the main function to ffmpeg_main; this is mainly to avoid having two main functions in the app.
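A minimal sketch of the rename in ffmpeg.c; the function body is untouched, only the name changes:

// before: int main(int argc, char **argv)
int ffmpeg_main(int argc, char **argv)
{
    /* ...the original body of main() stays as-is... */
}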
  • Fix the crash that occurs after the ffmpeg_main method has been executed once

Some articles say to modify the exit_program function in cmdutils.c and comment out the code that exits the process, but in my own testing that still caused problems.
The fix that finally worked was to change every place in ffmpeg.c that calls exit_program to call ffmpeg_cleanup instead.

Note: changing every call site to ffmpeg_cleanup has side effects in certain cases. For example, when you run a command that only inspects video information, no output path is set; the original code reaches the branch that detects the missing output path and calls exit_program to terminate the process. After switching to ffmpeg_cleanup, execution continues past that point and can crash on null-pointer accesses. You will have to handle such corner cases yourself, but in the usual scenarios there is always both an input and an output.
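The pattern of the substitution, sketched on one representative call site (ffmpeg.c has many):

// before: exit_program(1);   // terminates the whole app process
ffmpeg_cleanup(1);            // after: frees FFmpeg's state and returns control to the app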

  • Fix the null-pointer accesses that occur when ffmpeg_main is called multiple times:

In the ffmpeg.c file, reset the counters in the ffmpeg_cleanup method by adding the reset code immediately before the term_exit(); line.
The modification is as follows:
nb_filtergraphs = 0;
nb_output_files = 0;
nb_output_streams = 0;
nb_input_files = 0;
nb_input_streams = 0;
term_exit();

Once all of the above changes are made, Command+B compiles successfully! You can now use the command line; just add #import "ffmpeg.h" wherever you need it.


3. Using the command line and the processing-progress callback

Here I wrap command execution in a helper method; the code is as follows:

#import "HEFFmpegTools.h"
#import "ffmpeg.h"
?
@implementation HEFFmpegTools
?
///Execute the ffmpeg command, " " is the split marker
 + (void)runCmd:(NSString *)commandStr completionBlock:(void(^)(int result))completionBlock {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // Split commands into command arrays according to " "
        NSArray *argv_array = [commandStr componentsSeparatedByString:(@" ")];
        // Convert the OC object to the corresponding C object
        int argc = (int)argv_array. count;
        char** argv = (char**)malloc(sizeof(char*)*argc);
        for(int i=0; i < argc; i ++ ) {
            argv[i] = (char*)malloc(sizeof(char)*1024);
            strcpy(argv[i],[[argv_array objectAtIndex:i]UTF8String]);
        }
        
        // Pass in the number of instructions and the instruction array, result==0 means success
        int result = ffmpeg_main(argc,argv);
        NSLog(@"Execute FFmpeg command: %@, result = %d",commandStr,result);
?
        dispatch_async(dispatch_get_main_queue(), ^{
            completionBlock(result);
        });
    });
}
?
@end

Sample code using the runCmd function to rotate a video 90 degrees (the rotate filter takes radians, hence rotate=PI/2) and save it to the photo album:

let videoPath = NSSearchPathForDirectoriesInDomains(.cachesDirectory, .userDomainMask, true).first! + "/tempvideo.mp4"
let inputVideo = "\(Bundle.main.bundlePath)/ffm_video2.mp4"
let transformCmd = "ffmpeg -i \(inputVideo) -y -vf rotate=PI/2 \(videoPath)"
HEFFmpegTools.runCmd(transformCmd) { (result) in
    if FileManager.default.fileExists(atPath: videoPath) {
        print("Save to album");
        UISaveVideoAtPathToSavedPhotosAlbum(videoPath, nil, nil, nil)
    }
}

Getting the processing progress

1. Create any Cocoa Touch class to act as the bridging file referenced from the .c code. Delete all the code in the .h file and declare these functions:

/// Receives the duration of the input source file
void setDuration(long long duration);
/// Receives the current processing progress
void setCurrentTimeFromProgressInfo(char *progressInfo);

Delete the code in the .m file and keep only the header import; the implementation is as follows:

#import "HEFFmpegBridge.h"
#import <Foundation/Foundation.h>
?
static long long totalDuration = 0;
?
void setDuration(long long duration) {
    
    //duration accuracy to microseconds
    //For example, the video length is 00:00:24.53, and the duration will be 24533333
    //For example, the video length is 00:01:16.10, and the duration will be 76100000
    printf("\\
 fileDuration = %lld\\
",duration);
    totalDuration = duration;
}
?
void setCurrentTimeFromProgressInfo(char *progressInfo) {
    // progressInfo looks like:
    // frame= 1968 fps=100 q=31.0 size= 4864kB time=00:01:06.59 bitrate= 598.3kbits/s speed=3.38x
    // printf("\nctime = %s\n", progressInfo);
    
    NSString *progressStr = [NSString stringWithCString:progressInfo encoding:NSUTF8StringEncoding];
    NSArray *infoArray = [progressStr componentsSeparatedByString:@" "];
    NSString *timeString = @"";
    for (NSString *info in infoArray) {
        if ([info containsString:@"time"]) {
            timeString = [info componentsSeparatedByString:@"="].lastObject;//e.g. 00:01:16.10, accurate to ten milliseconds
        }
    }
    NSArray *hmsArray = [timeString componentsSeparatedByString:@":"];
    if (hmsArray.count != 3) {
        return;
    }
    long long hours = [hmsArray[0] longLongValue];
    long long minutes = [hmsArray[1] longLongValue];
    long long seconds = 0;
    long long mseconds = 0;
    NSArray *tempArr = [hmsArray[2] componentsSeparatedByString:@"."];
    if (tempArr.count == 2) {
        seconds = [tempArr.firstObject longLongValue];
        mseconds = [tempArr.lastObject longLongValue];
    }
    long long currentTime = (hours * 3600 + minutes * 60 + seconds) * 1000000 + mseconds * 10000;
    double progress = [[NSString stringWithFormat:@"%.2f",currentTime * 1.0 / totalDuration] doubleValue];
    NSLog(@"progress = %.2f",progress);
    // ffmpeg operations run on a background thread
    // dispatch_async(dispatch_get_main_queue(), ^{
    //     // Update the progress UI here
    // });
}

As the code above shows, the progress callback works by capturing the total duration of the input file in setDuration and the currently processed time in setCurrentTimeFromProgressInfo, then computing the current progress as the ratio of the two. See the code for the details.
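A quick worked example of that arithmetic: with totalDuration = 76100000 (a 00:01:16.10 video) and a progress line reporting time=00:01:06.59, currentTime = (0 * 3600 + 1 * 60 + 6) * 1000000 + 59 * 10000 = 66590000, so progress = 66590000 / 76100000 ≈ 0.88.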

2. When to call these two functions?

  • In the open_input_file function in the ffmpeg_opt.c file, call setDuration(ic->duration); after err = avformat_open_input(&ic, filename, file_iformat, &o->g->format_opts);

  • In the print_report function in the ffmpeg.c file, call setCurrentTimeFromProgressInfo(buf.str) before fflush(stderr);
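A sketch of the two insertion points; only the setDuration / setCurrentTimeFromProgressInfo lines are additions, and the surrounding lines are abbreviated from the FFmpeg 4.2 sources:

// ffmpeg_opt.c, inside open_input_file():
err = avformat_open_input(&ic, filename, file_iformat, &o->g->format_opts);
// ...existing error handling...
setDuration(ic->duration);                // hand the input duration to the bridge

// ffmpeg.c, inside print_report():
setCurrentTimeFromProgressInfo(buf.str);  // hand the latest progress line to the bridge
fflush(stderr);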

If you find all of this troublesome, you can use the library I have already compiled and configured and import it straight from the GitHub address. If you find it helpful, give me a star.

4. Command-line usage reference

Basic command format: ffmpeg [global_options] {[input_file_options] -i input_url} … {[output_file_options] output_url} …

Common parameter configurations:

• -f force the specified format
• -i the input source
• -t limit the input/output duration
• -r set the frame rate, i.e. the number of frames per second
• -threads set the number of threads
• -c:v set the video codec
• -ss seek to the specified start position
• -b:v set the video bitrate; -b:v 2500k gives the output file a video bitrate of 2500 kbit/s
• -s set the resolution
• -y overwrite the output file
• -filter apply a filter
• -vf apply a video filter
• -an disable audio
• -vn disable video
• -sn disable subtitles
• -dn disable data streams
• -codec copy copy all streams without re-encoding
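As a further illustration of these parameters, here is a hedged example built on the runCmd: wrapper from above; the file names are placeholders:

// Clip 10 seconds starting at 00:00:05 without re-encoding
// (-ss seeks, -t limits the duration, -codec copy avoids re-encoding, -y overwrites).
NSString *docs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES).firstObject;
NSString *input = [docs stringByAppendingPathComponent:@"input.mp4"];
NSString *clip = [docs stringByAppendingPathComponent:@"clip.mp4"];
NSString *cmd = [NSString stringWithFormat:@"ffmpeg -ss 5 -t 10 -i %@ -codec copy -y %@", input, clip];
[HEFFmpegTools runCmd:cmd completionBlock:^(int result) {
    NSLog(@"clip finished, result = %d", result);
}];

Note that runCmd: splits the command string on single spaces, so paths containing spaces would break the argument parsing.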

Original link: iOS uses FFmpeg command line – Nuggets