How to use Visual Studio to package your own code into a dynamic-link library for others to use

In a previous article we implemented the format-conversion functionality ourselves as a class. Now, how do we package that code so that others can use it conveniently? This article walks through how to do the packaging with Visual Studio.

1. Dynamic link library template

Visual Studio provides a Dynamic-Link Library (DLL) project template. When creating a new project, simply select that template, which is very convenient.

After creating the project we get four generated files. To avoid touching the files the template provides, we can add a few new files of our own and write our code there.

2. Header file definition

The header file is the most critical part: it is the interface through which others access our code, so it should be as lean as possible, containing only the class declaration; everything else goes into the .cpp file. Also, apart from system headers, do not include other self-written headers in this public header, otherwise users will fail to find those included files when they build against it.

//MyTranscoder.h
#pragma once
#include <stdlib.h>
#include <stdint.h>
#include <string>

#ifdef MYTRANSCODER_EXPORTS
#define MYTRANSCODER_API __declspec(dllexport)
#else
#define MYTRANSCODER_API __declspec(dllimport)
#endif

class MyTranscoderImpl;

class MYTRANSCODER_API NoCopyable
{
protected:
    NoCopyable() = default;
    virtual ~NoCopyable() = default;
    NoCopyable(NoCopyable const& other) = delete;
    NoCopyable& operator=(NoCopyable const& other) = delete;
    NoCopyable(NoCopyable&& other) = delete;
    NoCopyable& operator=(NoCopyable&& other) = delete;
};

class MYTRANSCODER_API MyTranscoder : public NoCopyable {
public:
    MyTranscoder();

    ~MyTranscoder() override;

    bool transCode();

private:
    MyTranscoderImpl* fImpl;
};

There is another key point here. We are compiling a dynamic library (DLL); for the import library (.lib) to expose our classes, every public class must carry the export macro MYTRANSCODER_API. In addition, in the project settings, under Properties > C/C++ > Preprocessor > Preprocessor Definitions, add the MYTRANSCODER_EXPORTS macro so that MYTRANSCODER_API expands to __declspec(dllexport) when building the DLL and to __declspec(dllimport) in consumer projects.
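Note that the Visual Studio DLL template usually adds a <PROJECTNAME>_EXPORTS definition for you; since we use our own macro name here, it has to be added by hand. As a purely illustrative sanity check (my own addition, not part of the original project), you can make the DLL build fail loudly if the definition is missing:

//MyTranscoder.cpp (DLL project only) - hypothetical compile-time guard
#ifndef MYTRANSCODER_EXPORTS
#error "Define MYTRANSCODER_EXPORTS under Properties > C/C++ > Preprocessor"
#endif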

To keep the header concise, the MyTranscoder class here is simply an interface; the actual work is done by a separate MyTranscoderImpl class (the pimpl idiom).

It does not matter what you include in MyTranscoderImpl.h: users never see it, since the interface header is all they deal with. Here we pull in all the FFmpeg headers we need.

//MyTranscoderImpl.h
#pragma once
#include <string>
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/avutil.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
}
#include <thread>
#include <mutex>
#include <condition_variable>
#include "SCOPEG.h"
#include <queue>
//Add additional header files related to the current class declaration here

class MyTranscoderImpl {
public:
    MyTranscoderImpl();

    ~MyTranscoderImpl();

    bool formatConver();
private:
    void decodeThread1();
    void encodeThread1();
    //Decode parameter package structure
    struct DecodeParamPacket {
        std::string inputFile;
        AVFormatContext* inputFormatContext;
        AVStream* audioStream;
        int videoStreamIndex;
        int audioStreamIndex;
        AVStream* videoStream;
    };
    //Encoding parameter package structure
    struct EncodeParamPacket {
        std::string outputFileName;
        std::string format;
        AVStream* audioStream;
        AVStream* videoStream;
    };
    DecodeParamPacket fDecodeParam;
    EncodeParamPacket fEncodeParam;
    std::queue<AVFrame*> fFrameQueue; //frame queue
    std::mutex fMtx; // Mutex, ensuring mutual exclusion of thread access
    std::condition_variable fCv; // Condition variable, used for communication between threads
    int fRet = 0;
    bool fEnd = false;//global end flag
};

3. The corresponding implementation (.cpp) files

First, MyTranscoder.cpp, which corresponds to MyTranscoder.h, only forwards the interface calls to the implementation class, so it stays very short.

//MyTranscoder.cpp
#include "pch.h"
#include "MyTranscoderImpl.h"
#include "../include/MyTranscoder.h"


MyTranscoder::MyTranscoder() {
    fImpl = new MyTranscoderImpl();
}

MyTranscoder::~MyTranscoder() {
    delete fImpl;
}

bool MyTranscoder::transCode() {
    return fImpl->formatConver();
}



Then there is MyTranscoderImpl.cpp, where we put all the decoding and encoding functions we wrote before.

#include "pch.h"
#include "MyTranscoderImpl.h"
//Add additional header files related to the current class implementation here


MyTranscoderImpl::MyTranscoderImpl() {

}

MyTranscoderImpl::~MyTranscoderImpl() {

}


bool MyTranscoderImpl::formatConver()
{
    std::string inputFileName, outputFileName, format;
    std::cout << "Please enter the input file name (with suffix):";
    std::cin >> inputFileName;
    std::cout << "Please enter the output format (avi, mp4, wmv, mkv, flv...):";
    std::cin >> format;
    std::cout << "Please enter the output file name (with suffix):";
    std::cin >> outputFileName;
    /*inputFileName = "cartoonTrim.mp4";
    format = "avi";
    outputFileName = "Multithreading.avi";*/
    fDecodeParam.inputFile = inputFileName;
    fEncodeParam.outputFileName = outputFileName;
    fEncodeParam.format = format;

    avformat_network_init(); // Initialize the network library
    AVStream* audioStream = nullptr;
    AVFormatContext* inputFormatContext = nullptr;
    AVStream* videoStream = nullptr;
    //Open input file
    if (avformat_open_input(&inputFormatContext, fDecodeParam.inputFile.c_str(), nullptr, nullptr) != 0) {
        std::cout << "Unable to open input file" << std::endl;
        return false;
    }
    ON_SCOPE_EXIT{ avformat_close_input(&inputFormatContext); };
    // Get stream information
    if (avformat_find_stream_info(inputFormatContext, nullptr) < 0) {
        std::cout << "Unable to obtain input file stream information" << std::endl;
        return false;
    }
    //Find video stream and audio stream index
    int videoStreamIndex = -1;
    int audioStreamIndex = -1;
    for (int i = 0; i < inputFormatContext->nb_streams; i++) {
        if (inputFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            videoStreamIndex = i;
        }
        else if (inputFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            audioStreamIndex = i;
        }
    }
    if (videoStreamIndex == -1 || audioStreamIndex == -1) {
        std::cout << "Video or audio stream not found" << std::endl;
        return false;
    }
    // Get the audio stream and share the input audio parameters to the audio encoder
    audioStream = inputFormatContext->streams[audioStreamIndex];
    videoStream = inputFormatContext->streams[videoStreamIndex];

    fDecodeParam.inputFormatContext = inputFormatContext;
    fDecodeParam.audioStream = audioStream;
    fDecodeParam.videoStreamIndex = videoStreamIndex;
    fDecodeParam.audioStreamIndex = audioStreamIndex;
    fDecodeParam.videoStream = videoStream;

    fEncodeParam.audioStream = audioStream;
    fEncodeParam.videoStream = videoStream;
    //Decoding thread
    std::thread decodeThr(&MyTranscoderImpl::decodeThread1, this);
    //encoding thread
    std::thread encodeThr(&MyTranscoderImpl::encodeThread1, this);
    decodeThr.join();
    encodeThr.join();
    return true;
}
//Decoding thread
void MyTranscoderImpl::decodeThread1() {
    AVCodecContext* videoCodecContext = nullptr;
    AVCodecContext* audioCodecContext = nullptr;
    // allocate frame object
    AVFrame* videoFrame = av_frame_alloc();
    AVFrame* audioFrame = av_frame_alloc();
    AVPacket* inputPacket = av_packet_alloc();
    ON_SCOPE_EXIT{ av_frame_free(&videoFrame); };
    ON_SCOPE_EXIT{ av_frame_free(&audioFrame); };
    ON_SCOPE_EXIT{ av_packet_free(&inputPacket); };
    if (!videoFrame || !audioFrame || !inputPacket) {
        std::cout << "Failed to allocate frame object" << std::endl;
        return;
    }

    // Get the video decoder
    const AVCodec* videoCodec = avcodec_find_decoder(fDecodeParam.videoStream->codecpar->codec_id);
    if (!videoCodec) {
        std::cout << "Video decoder not found" << std::endl;
        return;
    }

    //Create and open the video decoder context
    videoCodecContext = avcodec_alloc_context3(videoCodec);
    if (!videoCodecContext) {
        std::cout << "Failed to create video decoder context" << std::endl;
        return;
    }
    ON_SCOPE_EXIT{ avcodec_free_context(&videoCodecContext); };
    //Video stream parameters to fill in the context context
    avcodec_parameters_to_context(videoCodecContext, fDecodeParam.videoStream->codecpar);
    if (avcodec_open2(videoCodecContext, videoCodec, nullptr) < 0) {
        std::cout << "Failed to open video decoder" << std::endl;
        return;
    }

    // Get the audio decoder
    const AVCodec* audioCodec = avcodec_find_decoder(fDecodeParam.audioStream->codecpar->codec_id);
    if (!audioCodec) {
        std::cout << "Audio decoder not found" << std::endl;
        return;
    }
    //Create and open the audio decoder context
    audioCodecContext = avcodec_alloc_context3(audioCodec);
    if (!audioCodecContext) {
        std::cout << "Failed to create audio decoder context" << std::endl;
        return;
    }
    ON_SCOPE_EXIT{ avcodec_free_context(&audioCodecContext); };

    //Audio stream parameters fill context
    avcodec_parameters_to_context(audioCodecContext, fDecodeParam.audioStream->codecpar);
    if (avcodec_open2(audioCodecContext, audioCodec, nullptr) < 0) {
        std::cout << "Failed to open audio encoder" << std::endl;
        return;
    }
    //Print input information
    av_dump_format(fDecodeParam.inputFormatContext, 0, fDecodeParam.inputFile.c_str(), 0);

    //decoding
    while (av_read_frame(fDecodeParam.inputFormatContext, inputPacket) >= 0) {
        if (inputPacket->stream_index == fDecodeParam.videoStreamIndex) {
            fRet = avcodec_send_packet(videoCodecContext, inputPacket);
            if (fRet < 0) {
                break;
            }
            while (fRet >= 0) {
                fRet = avcodec_receive_frame(videoCodecContext, videoFrame);
                if (fRet == AVERROR(EAGAIN) || fRet == AVERROR_EOF) {
                    break;
                }
                else if (fRet < 0) {
                    std::cout << "Video decoding ret exception" << std::endl;
                    return;
                }
                //Clone into a new AVFrame_ so each queued frame has its own buffer rather than sharing the same address
                videoFrame->quality = 1;//Flag marking this as a video frame
                AVFrame* videoFrame_ = av_frame_clone(videoFrame);
                //If the frame queue already holds 50 or more frames, wait to be woken up
                std::unique_lock<std::mutex> lock2(fMtx);
                while (fFrameQueue.size() >= 50)
                    fCv.wait(lock2);
                //Push the frame into the queue
                fFrameQueue.push(videoFrame_);
                //After pushing into the queue, wake up the encoding thread
                fCv.notify_one();
                break;
            }
            av_packet_unref(inputPacket);
        }
        else if (inputPacket->stream_index == fDecodeParam.audioStreamIndex) {
            //Audio stream processing
            fRet = avcodec_send_packet(audioCodecContext, inputPacket);
            if (fRet < 0) {
                break;
            }
            while (fRet >= 0) {
                fRet = avcodec_receive_frame(audioCodecContext, audioFrame);
                if (fRet == AVERROR(EAGAIN) || fRet == AVERROR_EOF) {
                    break;
                }
                else if (fRet < 0) {
                    std::cout << "Audio decoding ret exception" << std::endl;
                    return;
                }
                //Clone into a new AVFrame_ so each queued frame has its own buffer
                AVFrame* audioFrame_ = av_frame_clone(audioFrame);
                //If the frame queue already holds 50 or more frames, wait to be woken up
                std::unique_lock<std::mutex> lock2(fMtx);
                while (fFrameQueue.size() >= 50)
                    fCv.wait(lock2);
                fFrameQueue.push(audioFrame_);
                //Wake up the encoding thread
                fCv.notify_one();
                break;
            }
            av_packet_unref(inputPacket);
        }
    }
    //After decoding is completed, wake up for the last time
    fCv.notify_one();
    //Set the end global variable to notify the encoding thread to end
    fMtx.lock();
    fEnd = true;
    fMtx.unlock();
}
//encoding thread
void MyTranscoderImpl::encodeThread1()
{
    AVFormatContext* outputFormatContext = nullptr;
    SwsContext* swsContext = nullptr;
    AVCodecID videoCodecId;
    AVCodecID audioCodecId;
    AVPacket* videoOutputPacket = av_packet_alloc();
    AVPacket* audioOutputPacket = av_packet_alloc();
    ON_SCOPE_EXIT{ av_packet_free(&videoOutputPacket); };
    ON_SCOPE_EXIT{ av_packet_free(&audioOutputPacket); };
    if (!videoOutputPacket || !audioOutputPacket) {
        std::cout << "Failed to allocate frame object" << std::endl;
        return;
    }
    { // Codec control
        if (fEncodeParam.format == "avi")
        {
            videoCodecId = AV_CODEC_ID_MPEG2VIDEO;
            audioCodecId = AV_CODEC_ID_PCM_S16LE;
        }
        else if (fEncodeParam.format == "mp4")
        {
            videoCodecId = AV_CODEC_ID_H264;
            audioCodecId = AV_CODEC_ID_AAC;
        }
        else if (fEncodeParam.format == "wmv")
        {
            videoCodecId = AV_CODEC_ID_MSMPEG4V3;
            audioCodecId = AV_CODEC_ID_WMAV2;
        }
        else if (fEncodeParam.format == "mkv")
        {
            videoCodecId = AV_CODEC_ID_H264;
            audioCodecId = AV_CODEC_ID_MP3;
        }
        else if (fEncodeParam.format == "flv")
        {
            videoCodecId = AV_CODEC_ID_H264;
            audioCodecId = AV_CODEC_ID_AAC;
        }
        else {
            std::cout << "Conversion to this format is not supported" << std::endl;
            return;
        }
    }
    //Create a context for the output file
    avformat_alloc_output_context2(&outputFormatContext, nullptr, nullptr, fEncodeParam.outputFileName.c_str());
    if (!outputFormatContext) {
        std::cout << "Failed to create context for output file" << std::endl;
        return;
    }
    ON_SCOPE_EXIT{ avformat_free_context(outputFormatContext); };

    //Add video stream to output context
    AVStream* outVideoStream = avformat_new_stream(outputFormatContext, nullptr);
    if (!outVideoStream) {
        std::cout << "Failed to add video stream to output file" << std::endl;
        return;
    }
    outVideoStream->id = outputFormatContext->nb_streams - 1;
    avcodec_parameters_copy(outVideoStream->codecpar, fEncodeParam.videoStream->codecpar);
    outVideoStream->codecpar->codec_tag = 0;

    //Set video encoder
    const AVCodec* outVideoCodec = avcodec_find_encoder(videoCodecId);
    if (!outVideoCodec) {
        std::cout << "Failed to set video encoder" << std::endl;
        return;
    }
    AVCodecContext* outVideoCodecContext = avcodec_alloc_context3(outVideoCodec);
    if (!outVideoCodecContext) {
        std::cout << "Failed to set video encoder context" << std::endl;
        return;
    }
    ON_SCOPE_EXIT{ avcodec_free_context(&outVideoCodecContext); };
    //Video encoder parameter settings
    {
        //avcodec_parameters_to_context(outVideoCodecContext, outVideoStream->codecpar);
        outVideoCodecContext->codec_id = videoCodecId;
        outVideoCodecContext->time_base.den = 25;
        outVideoCodecContext->time_base.num = 1;
        outVideoCodecContext->gop_size = 13;
        outVideoCodecContext->bit_rate = 8000000;
        outVideoCodecContext->refs = 0;
        outVideoCodecContext->max_b_frames = 10;
        outVideoCodecContext->width = 1920;
        outVideoCodecContext->height = 1080;
        outVideoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    }
    //Copy parameters from the output context to the output stream
    avcodec_parameters_from_context(outVideoStream->codecpar, outVideoCodecContext);


    //Open video encoder
    if (avcodec_open2(outVideoCodecContext, outVideoCodec, nullptr) < 0) {
        std::cout << "Unable to open video encoder" << std::endl;
        return;
    }

    //Add audio stream to output file
    AVStream* outAudioStream = avformat_new_stream(outputFormatContext, nullptr);
    if (!outAudioStream) {
        std::cout << "Failed to add audio stream to output file" << std::endl;
        return;
    }

    outAudioStream->id = outputFormatContext->nb_streams - 1;
    //Copy the output audio stream parameters
    avcodec_parameters_copy(outAudioStream->codecpar, fEncodeParam.audioStream->codecpar);
    outAudioStream->time_base.den = 11025;
    outAudioStream->time_base.num = 256;
    outAudioStream->codecpar->bit_rate = 320018;
    outAudioStream->codecpar->profile = 1;
    outAudioStream->codecpar->sample_rate = 44100;
    outAudioStream->codecpar->frame_size = 1024;
    av_channel_layout_default(&outAudioStream->codecpar->ch_layout, 2);
    //outAudioStream->codecpar->ch_layout.nb_channels = 2;
    outAudioStream->codecpar->codec_tag = 0;
    //outAudioStream->codecpar->ch_layout.order = AV_CHANNEL_ORDER_NATIVE;
    //outAudioStream->codecpar->ch_layout.u.mask = 0x03;
    //outAudioStream->codecpar->channels = 1;

    //Set audio encoder
    const AVCodec* outAudioCodec = avcodec_find_encoder(audioCodecId);
    if (!outAudioCodec) {
        std::cout << "Failed to set audio encoder" << std::endl;
        return;
    }
    AVCodecContext* outAudioCodecContext = avcodec_alloc_context3(outAudioCodec);
    if (!outAudioCodecContext) {
        std::cout << "Failed to set audio encoder context" << std::endl;
        return;
    }
    ON_SCOPE_EXIT{ avcodec_free_context(&outAudioCodecContext); };
    //Audio encoder parameters
    avcodec_parameters_to_context(outAudioCodecContext, outAudioStream->codecpar);
    outAudioCodecContext->codec_id = audioCodecId;
    outAudioCodecContext->time_base = fEncodeParam.audioStream->time_base;
    //outAudioCodecContext->time_base.den = 51111100;
    //outAudioCodecContext->time_base.num = 1;
    //outAudioCodecContext->sample_rate = 43110;
    outAudioCodecContext->sample_fmt = AV_SAMPLE_FMT_S16;
    //av_channel_layout_default(&outAudioCodecContext->ch_layout, 2);
    avcodec_parameters_from_context(outAudioStream->codecpar, outAudioCodecContext);
    if (fEncodeParam.format == "flv")
    {
        outAudioCodecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
        av_channel_layout_default(&outAudioCodecContext->ch_layout, 3);

    }
    //Open the audio encoder
    if (avcodec_open2(outAudioCodecContext, outAudioCodec, nullptr) < 0) {
        std::cout << "Unable to open audio encoder" << std::endl;
        return;
    }

    //Open output file
    if (!(outputFormatContext->oformat->flags & AVFMT_NOFILE)) {
        if (avio_open(&outputFormatContext->pb, fEncodeParam.outputFileName.c_str(), AVIO_FLAG_WRITE) < 0) {
            std::cout << "Unable to open output file" << std::endl;
            return;
        }
    }

    //Write output file header
    if (avformat_write_header(outputFormatContext, nullptr) < 0) {
        std::cout << "Unable to write output file header" << std::endl;
        return;
    }
    //Print out relevant information
    av_dump_format(outputFormatContext, 0, fEncodeParam.outputFileName.c_str(), 1);
    int nVideoCount = 0;
    int nAudioCount = 0;
    bool queueSizeLess = false;
    bool queueIsEmpty = false;
    //Start encoding
    while (1)
    {
        //Waiting for decoding to wake up
        std::unique_lock<std::mutex> lock1(fMtx);
        while (fFrameQueue.empty())
            fCv.wait(lock1);
        AVFrame* frame = fFrameQueue.front();
        fFrameQueue.pop();
        lock1.unlock();
        //coding
        {
            if (frame->quality == 1) {
                // Encode video frames
                frame->pts = (int64_t)(40 * (nVideoCount) / av_q2d(outVideoCodecContext->time_base) / 1000.0);//Time
                nVideoCount++;
                fRet = avcodec_send_frame(outVideoCodecContext, frame);
                if (fRet < 0) {
                    break;
                }
                while (fRet >= 0) {
                    fRet = avcodec_receive_packet(outVideoCodecContext, videoOutputPacket);
                    if (fRet == AVERROR(EAGAIN) || fRet == AVERROR_EOF) {
                        break;
                    }
                    else if (fRet < 0) {
                        std::cout << "Video encoding ret exception" << std::endl;
                        return;
                    }

                    av_packet_rescale_ts(videoOutputPacket, outVideoCodecContext->time_base, outVideoStream->time_base);
                    videoOutputPacket->stream_index = outVideoStream->index;

                    //Write video frames to output file
                    fRet = av_interleaved_write_frame(outputFormatContext, videoOutputPacket);
                    if (fRet < 0) {
                        break;
                    }
                }
            }
            else {
                // Encode audio frames
                //Frame->pts = (int64_t)( (nAudioCount) / av_q2d(outAudioCodecContext->time_base) / 44100.0);//Time
                frame->pts = nAudioCount * 1024;
                nAudioCount++;
                fRet = avcodec_send_frame(outAudioCodecContext, frame);
                if (fRet < 0) {
                    break;
                }

                while (fRet >= 0) {
                    fRet = avcodec_receive_packet(outAudioCodecContext, audioOutputPacket);
                    if (fRet == AVERROR(EAGAIN) || fRet == AVERROR_EOF) {
                        break;
                    }
                    else if (fRet < 0) {
                        std::cout << "Audio encoding ret exception" << std::endl;
                        return;
                    }

                    av_packet_rescale_ts(audioOutputPacket, outAudioCodecContext->time_base, outAudioStream->time_base);
                    audioOutputPacket->stream_index = outAudioStream->index;

                    //Write audio frames to the output file
                    fRet = av_interleaved_write_frame(outputFormatContext, audioOutputPacket);
                    if (fRet < 0) {
                        break;
                    }
                }
            }
            //Release the cloned AVFrame
            av_frame_free(&frame);
            //If the queue now holds fewer than 50 frames, wake the decoding thread so it can continue
            fMtx.lock();
            queueSizeLess = fFrameQueue.size() < 50;
            queueIsEmpty = fFrameQueue.empty();
            fMtx.unlock();
            if (queueSizeLess)
                fCv.notify_one();
        }
        //End the encoding thread once decoding has finished and the queue has been drained
        if (fEnd && queueIsEmpty) { break; }
    }
    //Write the output file trailer
    av_write_trailer(outputFormatContext);
    //Close the output file handle opened with avio_open above
    if (!(outputFormatContext->oformat->flags & AVFMT_NOFILE))
        avio_closep(&outputFormatContext->pb);
}

4. Build the library

Right-click the project name and click Build, and that's it. The output directory will then contain the generated files (typically the .dll, .lib, .exp, and .pdb).

5. Using the library from another project

To build against our code, others only need the header file and the import library (.lib); at runtime they also need the .dll. Create a new project to simulate a consumer. The header part is simple: copy the .h file over and include it.

#include <iostream>
#include <ctime> //for clock() and CLOCKS_PER_SEC
#include "../include/MyTranscoder.h"

int main()
{
    
    clock_t start, end;
    start = clock();
    MyTranscoder transCoder;
    if (!transCoder.transCode()) {
        std::cout << "Failed to convert!" << std::endl;
        return -1;
    }
    std::cout << "Conversion complete!" << std::endl;
    end = clock();
    std::cout << "time = " << double(end - start) / CLOCKS_PER_SEC << "s" << std::endl;
    return 0;
}

The library file needs a little configuration in the project settings:

Properties > Linker > General > Additional Library Directories: add the path to the folder containing the .lib file generated above.

Properties > Linker > Input > Additional Dependencies: add MyTranscoder.lib.
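Alternatively (a small convenience, not required), MSVC can pull the import library in directly from source with a pragma, which replaces the Additional Dependencies entry; the library directory still has to be set so the linker can find the file:

//Anywhere in the consumer project, e.g. at the top of main.cpp
#pragma comment(lib, "MyTranscoder.lib")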

With that, everything should link and run; just remember to place MyTranscoder.dll next to the executable before running it. I am a beginner, so this is simply a record of my learning process.