Android 13 – Media Framework (13) – OpenMax (1)

In this section we will learn about the Android OpenMax framework. After understanding the framework, we will go back to ACodec, connect MediaCodec – ACodec – OpenMax, and look at how components are created and controlled and how buffers flow.
This article is a personal study note. If there are any errors, please point them out.

I have divided the Android OpenMax framework into three parts to study:

  1. media.codec service: a HIDL service on the vendor side, used to query the platform's codec capabilities and to create and manage codec components;
  2. OpenMax IL: the OpenMax standard interfaces; the underlying codec components must implement these interfaces;
  3. OMXNodeInstance: wraps the calls into the OpenMax IL layer and exposes them to the upper-layer ACodec;

In this section, let’s first understand the relevant code paths:

  1. hardware/interfaces/media/omx/1.0: this directory defines the HIDL service interfaces provided by media.codec. The ones we come into contact with most are IOmx.hal, IOmxStore.hal, and IOmxNode.hal;
  2. frameworks/av/services/mediacodec: this directory contains the media.codec service implementation. After compilation it produces android.hardware.media.omx@1.0-service, which is located in the /vendor/bin/hw directory on the device;
  3. frameworks/av/media/libstagefright/omx: this directory contains the Bn-side (implementation) code of the media.codec service, plus some utilities such as OMXUtils.cpp;
  4. frameworks/native/headers/media_plugin/media/openmax: this directory contains the OpenMax standard interfaces. The underlying OMX components need to implement these standard interfaces, and the upper-layer ACodec also calls into them according to the same interfaces;
  5. frameworks/av/media/libmedia/omx
    frameworks/av/media/libmedia/omx/1.0:
    The above two directories contain wrappers around the HIDL calls. There are two kinds of wrappers: classes whose names start with LW (Legacy Wrapper) and classes whose names start with TW (Treble Wrapper).

Next, let's see how these files are used.

As a HIDL service, media.codec must first have an interface definition. Checking the hardware/interfaces/media/omx/1.0 directory, we can find that the OpenMax-related interface names all start with a capital I followed by Omx (O uppercase, mx lowercase).

Then look at the frameworks/av/media/libstagefright/omx/1.0 directory. Under this path you can see Omx.cpp and OmxStore.cpp; these two are the native implementation of media.codec. But we don't seem to see an implementation of IOmxNode? Don't worry, let's keep reading.

Once the service-related files are implemented, a process is needed to start the service. The relevant code is under frameworks/av/services/mediacodec. By reading main_codecservice.cpp we can easily see that this process provides two services, IOmx and IOmxStore (the specific code will not be expanded here). So the IOmxNode mentioned above is not a service itself but something provided by the service; the next question is where that content is implemented.
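
To make the startup step concrete, below is a heavily trimmed sketch of what main_codecservice.cpp roughly does; the real file also sets up minijail/seccomp, system properties, vendor extensions, and error handling, and the constructor arguments are simplified here, so treat it as an outline rather than the exact code:

// Trimmed sketch of main_codecservice.cpp: register the two HIDL services.
using namespace ::android::hardware::media::omx::V1_0;

int main() {
    // media.codec is a vendor process, so it serves clients over vndbinder.
    android::ProcessState::initWithDriver("/dev/vndbinder");
    ::android::hardware::configureRpcThreadpool(64, false /* callerWillJoin */);

    // IOmx: lets clients allocate IOmxNode instances (one per component).
    sp<IOmx> omx = new implementation::Omx();
    omx->registerAsService();

    // IOmxStore: lets clients query the platform's codec list and capabilities.
    sp<IOmxStore> omxStore = new implementation::OmxStore(omx);  // args simplified
    omxStore->registerAsService();

    // Block here and keep serving incoming HIDL calls.
    ::android::hardware::joinRpcThreadpool();
}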

After the service is started, we need to obtain and call the service. Here we need to look at the ACodec code:

    sp<CodecObserver> observer = new CodecObserver(notify);
    sp<IOMX> omx;
    sp<IOMXNode> omxNode;
    status_t err = NAME_NOT_FOUND;
    OMXClient client;
    if (client.connect(owner.c_str()) != OK) {
        return false;
    }
    omx = client.interface();
    int prevPriority = androidGetThreadPriority(tid);
    err = omx->allocateNode(componentName.c_str(), observer, &omxNode);

Here we can see that ACodec does not obtain the IOmx service directly; instead it uses OMXClient to wrap the service-acquisition process and then calls its interface() method to return the obtained service proxy. One thing to note, however, is that the type of the returned proxy is IOMX (all three letters capitalized), not the IOmx mentioned before. What is going on inside?

status_t OMXClient::connect(const char* name) {
    using namespace ::android::hardware::media::omx::V1_0;
    if (name == nullptr) {
        name = "default";
    }
    sp<IOmx> tOmx = IOmx::getService(name);
    if (tOmx.get() == nullptr) {
        ALOGE("Cannot obtain IOmx service.");
        return NO_INIT;
    }
    if (!tOmx->isRemote()) {
        ALOGE("IOmx service running in passthrough mode.");
        return NO_INIT;
    }
    mOMX = new utils::LWOmx(tOmx);
    ALOGI("IOmx service obtained");
    return OK;
}

From OMXClient::connect we can see that the service proxy obtained internally is still of type IOmx, but the proxy is then wrapped once more: IOmx is a Treble-type object, while LWOmx is a Legacy-type object.

We all know that calling methods on a Treble object is cumbersome: to get the return value of a call, you have to construct a lambda callback to receive it. A Legacy object, on the other hand, matches our usual calling habits. The purpose of wrapping IOmx in LWOmx is therefore to hide the HIDL plumbing and simplify use.
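
To see the difference concretely, here is a hedged sketch using listNodes as the example (simplified from the conversion code in WOmx.cpp; types abbreviated and the result-copying logic omitted; tOmx stands for the raw HIDL proxy and omx for the wrapped LWOmx):

// Calling the Treble (HIDL) proxy directly: the result comes back through a
// lambda that is passed into the call itself.
status_t fnStatus;
tOmx->listNodes([&fnStatus](Status status,
                            hidl_vec<IOmx::ComponentInfo> const& nodeList) {
    fnStatus = toStatusT(status);
    // ... copy nodeList out to wherever the caller needs it ...
});

// Calling through the Legacy wrapper: a plain return code plus an
// out-parameter; the lambda plumbing above is hidden inside LWOmx.
List<IOMX::ComponentInfo> list;
status_t err = omx->listNodes(&list);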

WOmx.h is located in frameworks/av/media/libmedia/include/media/omx/1.0. In it you can see that LWOmx inherits from IOMX, and looking at IOMX.h you can find that its method names match those provided by the IOmx service. So our conjecture is verified: IOMX is a wrapper around calls to the IOmx proxy.
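
A trimmed view of the declaration (methods abbreviated; see WOmx.h for the full list):

// Rough shape of LWOmx in WOmx.h: a legacy IOMX facade over the HIDL proxy.
struct LWOmx : public IOMX {
    sp<IOmx> mBase;                    // the Treble proxy returned by getService()
    LWOmx(sp<IOmx> const& base);

    // Same method names as the IOmx HIDL interface, but with the legacy
    // status_t / out-parameter signatures that ACodec expects:
    status_t listNodes(List<ComponentInfo>* list) override;
    status_t allocateNode(
            char const* name,
            sp<IOMXObserver> const& observer,
            sp<IOMXNode>* omxNode) override;
    // ... remaining methods omitted ...
};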

Similarly, after calling the IOmx service to obtain an IOmxNode object, that object must also be wrapped into an LW type for subsequent use:

status_t LWOmx::allocateNode(
        char const* name,
        sp<IOMXObserver> const& observer,
        sp<IOMXNode>* omxNode) {
    status_t fnStatus;
    status_t transStatus = toStatusT(mBase->allocateNode(
            name, new TWOmxObserver(observer),
            [&fnStatus, omxNode](Status status, sp<IOmxNode> const& node) {
                fnStatus = toStatusT(status);
                *omxNode = new LWOmxNode(node);
            }));
    return transStatus == NO_ERROR ? fnStatus : transStatus;
}

Everything above describes the mediaserver process using the service proxy of the media.codec process. Is there any situation where the call goes in the reverse direction? Of course there is.

Still looking at LWOmx::allocateNode: we pass in a CodecObserver object to receive OMX callbacks, but CodecObserver inherits from BnOMXObserver, which creates a problem: CodecObserver cannot be passed to the media.codec process through HIDL calls. So before making the call, LWOmx::allocateNode wraps the CodecObserver in a TWOmxObserver so that the object can be transmitted over HIDL.

struct TWOmxObserver : public IOmxObserver {
    sp<IOMXObserver> mBase;
    TWOmxObserver(sp<IOMXObserver> const& base);
    Return<void> onMessages(const hidl_vec<Message>& tMessages) override;
};

TWOmxObserver inherits from the IOmxObserver HIDL interface, so it can be transmitted over HIDL; this is exactly the role of a TW (Treble Wrapper).
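
Its onMessages implementation (in WOmxObserver.cpp) essentially just translates the HIDL messages back into legacy omx_message objects and forwards them to the wrapped observer; roughly (helper names simplified):

// Sketch of TWOmxObserver::onMessages: convert the HIDL Message vector to
// legacy omx_message structs, then hand them to the wrapped legacy observer
// (CodecObserver in ACodec's case).
Return<void> TWOmxObserver::onMessages(const hidl_vec<Message>& tMessages) {
    std::list<omx_message> lMessages;
    if (convertTo(&lMessages, tMessages)) {   // HIDL -> legacy conversion helper
        mBase->onMessages(lMessages);         // back into the legacy world
    }
    return Return<void>();
}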

Look at frameworks/av/media/libmedia/include/media/omx/1.0/WOmxObserver.h; it also contains LWOmxObserver. We already mentioned its role above: on the media.codec side it wraps the observer passed over from the mediaserver process (the TWOmxObserver, seen locally as an IOmxObserver proxy) to simplify the HIDL calls.
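
For reference, LWOmxObserver is roughly the mirror image of the TWOmxObserver struct shown earlier (trimmed from WOmxObserver.h):

// Rough shape of LWOmxObserver: used inside the media.codec process to make
// the remote observer look like a plain legacy IOMXObserver again.
struct LWOmxObserver : public BnOMXObserver {
    sp<IOmxObserver> mBase;            // HIDL proxy pointing back to mediaserver
    LWOmxObserver(sp<IOmxObserver> const& base);

    // Converts legacy omx_message objects to HIDL Messages and forwards them
    // through mBase->onMessages(), i.e. across the process boundary.
    void onMessages(std::list<omx_message> const& lMessages) override;
};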

In this section we gave a brief introduction to the files and classes related to Android OpenMax. With this understanding, we can skip over some of the middle layers when tracing the code, and if questions about these classes come up later, we can refer back to this section.