Building recording software for the Web (H5) and Electron

Background

Modern computers and mobile devices handle many everyday tasks, and audio recording is one of them. Whether it is capturing meetings, taking voice memos, or composing music, recording is a function we reach for constantly. This article shows how to use Electron, WebRTC, and React to build an efficient, cross-platform recording application that makes these tasks easy.

(Demo animation: rAudio.gif)

Tools

  • electron
  • react
  • antd
  • webrtc

Introduction to Electron and React

Electron is a development framework, based on Chromium and Node.js, for building cross-platform desktop applications. Combined with React, complex user interfaces and interactions become straightforward to implement, which both satisfies user needs and improves development efficiency. Applying Electron and React to recording software therefore gives users a better experience while keeping development convenient and efficient for the team.
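How the two sides fit together is easiest to see in the Electron entry point: the main process creates a window and loads the React UI, either from a dev server or from the built bundle. The following is only a minimal sketch; the file names, paths, and dev-server port are assumptions, not the project's actual configuration.

// main.ts - a minimal Electron entry point (a sketch; paths and port are assumptions)
import { app, BrowserWindow } from "electron";
import path from "path";

function createWindow() {
    const win = new BrowserWindow({
        width: 1000,
        height: 700,
        webPreferences: {
            // Expose a small, safe API to the React renderer through a preload script
            preload: path.join(__dirname, "preload.js"),
        },
    });

    if (process.env.NODE_ENV === "development") {
        // During development, load the React dev server
        win.loadURL("http://localhost:3000");
    } else {
        // In production, load the built React bundle
        win.loadFile(path.join(__dirname, "../build/index.html"));
    }
}

app.whenReady().then(createWindow);
app.on("window-all-closed", () => {
    if (process.platform !== "darwin") app.quit();
});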

Introduction to webrtc

WebRTC is an open standard for real-time communication between browsers. It provides audio, video, and data transmission capabilities and suits a wide range of scenarios. In recording software, WebRTC gives us real-time recording and makes online sharing and collaboration possible: users can share recordings through web links and play back or comment on them online in real time, which greatly improves flexibility and usability.
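Since the rest of this article relies on two browser APIs, getUserMedia for capturing the microphone and MediaRecorder for encoding it, a quick feature check before showing the recording UI is a cheap safeguard. A minimal sketch:

// ts
// Feature detection for the recording APIs used in this article (a sketch)
function canRecordAudio(): boolean {
    const hasGetUserMedia =
        typeof navigator !== "undefined" && !!navigator.mediaDevices?.getUserMedia;
    const hasMediaRecorder = typeof MediaRecorder !== "undefined";
    return hasGetUserMedia && hasMediaRecorder;
}

if (!canRecordAudio()) {
    console.warn("This browser does not support getUserMedia/MediaRecorder audio recording.");
}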

Implementation

To implement the recording function with WebRTC and React, we can follow these steps:

Step 1: Set up WebRTC audio stream

  1. Use navigator.mediaDevices.getUserMedia, the WebRTC API for obtaining audio and video streams.
  2. Call getUserMedia to request the user's permission, obtain the audio stream, and keep it in the React component.
  3. Bind the audio stream to an HTML5 audio or video element (or a waveform component) to preview the recording in real time; a preview sketch follows the code below.
// html
<div
    className={`${styles.recordAudio} ${
        window.isElectron ? styles.electron : styles.web
    }`}
>
    <div className="timer">
        <Timer
            seconds={timer.seconds}
            minutes={timer.minutes}
            hours={timer.hours}
        />
    </div>
    <Wavesurfer ref={wavesurferRef} />
</div>
// js
const wavesurferRef = useRef<any>(); // Waveform (sound wave) component instance
const mediaStream = useRef<MediaStream>(); // Media stream object
const mediaRecorder = useRef<MediaRecorder>(); // Media recorder object
const recordedChunks = useRef<Blob[]>([]); // Recorded audio data chunks
const audioTrack = useRef<any>(); // Audio track object
const isSave = useRef(false); // Whether to export the file once recording stops
const [isPause, setIsPause] = useState(false); // Whether recording is paused
const [isRecording, setIsRecording] = useState(false); // Whether recording is in progress
const [isMute, setIsMute] = useState(false); // Whether the microphone is muted
// `timer` used below is assumed to be a stopwatch hook instance (e.g. react-timer-hook's useStopwatch) driving the Timer display

function startRecording() {
    navigator.mediaDevices
        .getUserMedia({ audio: true })
        .then((stream) => {
            mediaStream.current = stream;
            audioTrack.current = stream.getAudioTracks()[0];
            audioTrack.current.enabled = true; // Enable the audio track
            mediaRecorder.current = new MediaRecorder(stream);
            mediaRecorder.current.start();
            setIsRecording(true);
            wavesurferRef.current.play();
            timer.start();
            console.log("Start recording...");
        })
        .catch((error) => {
            console.error("Unable to obtain microphone permission:", error);
        });
}
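Item 3 of this step mentions previewing the stream in real time. The code above visualizes it with a Wavesurfer component; if you prefer a plain HTML5 audio element instead, a minimal sketch could look like this (previewRef and attachPreview are illustrative names, not part of the project):

// ts
// Live preview through an HTML5 <audio> element (a sketch; names are illustrative)
const previewRef = useRef<HTMLAudioElement>(null);

function attachPreview(stream: MediaStream) {
    if (previewRef.current) {
        previewRef.current.srcObject = stream; // Bind the live stream directly to the element
        previewRef.current.muted = true; // Mute local playback to avoid feedback while recording
        previewRef.current.play();
    }
}

// In JSX: <audio ref={previewRef} />
// Call attachPreview(stream) inside the getUserMedia .then() callback above.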

Step 2: Recording Control

  1. Create React components, including buttons to start, pause, and stop recording.
<div className="recorderTools">
    <Button
        shape="circle"
        icon={<BsTrash />}
        className="toolbarIcon resetBtn"
        title="Delete"
        disabled={!isRecording}
        onClick={stopRecording}
    />
    {isRecording ? (
        <Button
            danger
            type="primary"
            shape="circle"
            icon={<BsCheckLg />}
            className="toolbarIcon stopBtn"
            title="Save"
            disabled={!isRecording}
            onClick={saveRecording}
        />
    ) : (
        <Button
            danger
            type="primary"
            shape="circle"
            icon={<BsRecordFill />}
            className="toolbarIcon playBtn"
            title="Start"
            onClick={startRecording}
        />
    )}

    {isPause ? (
        <Button
            type="primary"
            shape="circle"
            icon={<BsPlayFill />}
            className="toolbarIcon resumeBtn"
            title="Continue"
            disabled={!isRecording}
            onClick={resumeRecording}
        />
    ) : (
        <Button
            type="primary"
            shape="circle"
            icon={<BsPauseFill />}
            className="toolbarIcon pauseBtn"
            title="Pause"
            disabled={!isRecording}
            onClick={pauseRecording}
        />
    )}
</div>
  2. Use the MediaRecorder interface, the browser API for recording audio and video from a stream.
  3. In the click handler of the start-recording button, create a new MediaRecorder instance with the audio stream as input.
  4. Define settings such as the container format and audio encoding for recordings (see the sketch after the code below).
  5. Bind ondataavailable and onstop event listeners to handle audio data during recording and to run export logic when recording ends.
  6. In the click handler of the stop-recording button, call the stop method to stop recording, which triggers the onstop event.
mediaRecorder.current = new MediaRecorder(stream);
mediaRecorder.current.addEventListener("dataavailable", (e) => {
    if (e.data.size > 0) {
        recordedChunks.current.push(e.data);
    }
});
mediaRecorder.current.addEventListener("stop", () => {
    isSave.current && exportRecording();
});
// Mute
function muteRecording() {
    if (audioTrack.current) {
        audioTrack.current.enabled = false; // Disable the audio track
        setIsMute(true);
        console.log("Recording has been muted");
    }
}
// Unmute
function unmuteRecording() {
    if (audioTrack.current) {
        audioTrack.current.enabled = true; // Enable the audio track
        setIsMute(false);
        console.log("Recording has been unmuted");
    }
}
// Resume recording
function resumeRecording() {
    if (isPause && mediaRecorder.current.state === "paused") {
        mediaRecorder.current.resume();
        setIsPause(false);
        wavesurferRef.current.play();
        timer.start();
        console.log("Resume recording...");
    }
}
// Pause recording
function pauseRecording() {
    if (!isPause && mediaRecorder.current.state === "recording") {
        mediaRecorder.current.pause();
        setIsPause(true);
        wavesurferRef.current.pause();
        timer.pause();
        console.log("Recording has been paused");
    }
}
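Item 4 above talks about defining the container format and audio encoding. The sample code relies on the browser default; being explicit about it could look like the sketch below (the 128 kbps bitrate is only an example value):

// ts
// Creating a MediaRecorder with an explicit container/codec (a sketch)
function createRecorder(stream: MediaStream): MediaRecorder {
    const preferredType = "audio/webm;codecs=opus";
    const options: MediaRecorderOptions = MediaRecorder.isTypeSupported(preferredType)
        ? { mimeType: preferredType, audioBitsPerSecond: 128000 } // example bitrate
        : {}; // fall back to the browser default
    return new MediaRecorder(stream, options);
}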

Step 3: Save and export the recording file

  1. In the onstop event, store the recorded data in the React component.
  2. Add save and export buttons for saving recordings to a file or exporting them to other formats.
  3. In the click handler of the save button, convert the recorded data into a Blob object and use URL.createObjectURL to generate a URL for the file.
  4. Bind that URL to the href of an <a> tag with the download attribute set, so that users can download the recording file.
// Stop recording and reset the UI state
function stopRecording() {
    if (isRecording) {
        mediaRecorder.current.stop();
        mediaStream.current?.getTracks().forEach((track) => track.stop());
        setIsRecording(false);
        timer.reset(null, false);
        wavesurferRef.current.reset();
        recordedChunks.current = [];
        console.log("Recording completed!");
    }
}
// Save: stop recording and flag the data for export
function saveRecording() {
    stopRecording();
    isSave.current = true;
}

// Export the recorded audio data as a Blob and download it
function exportRecording() {
    if (recordedChunks.current.length > 0) {
        const blob = new Blob(recordedChunks.current, { type: "audio/webm" });
        const url = URL.createObjectURL(blob);
        if (window.electronAPI) {
            window.electronAPI.sendRaDownloadRecord(url);
        } else {
            const link = document.createElement("a");
            link.href = url;
            link.download = `pear-rec_${+new Date()}.webm`;
            link.click();
            recordedChunks.current = [];
            isSave.current = false;
        }
    }
}
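Inside Electron, window.electronAPI.sendRaDownloadRecord hands the blob URL to the main process, whose side is not shown in this article. One possible shape, purely as a sketch (the channel name, file layout, and use of webContents.downloadURL are assumptions):

// preload.ts - expose a narrow API to the renderer (a sketch; names are assumptions)
import { contextBridge, ipcRenderer } from "electron";

contextBridge.exposeInMainWorld("electronAPI", {
    sendRaDownloadRecord: (url: string) => ipcRenderer.send("ra:download-record", url),
});

// main.ts - turn the blob URL into a regular download
import { ipcMain, BrowserWindow } from "electron";

ipcMain.on("ra:download-record", (event, url: string) => {
    const win = BrowserWindow.fromWebContents(event.sender);
    // downloadURL triggers the session's will-download event, where a save path or
    // dialog can be wired up. If blob: URLs cannot be downloaded in your Electron
    // version, send the raw ArrayBuffer over IPC and write it with fs instead.
    win?.webContents.downloadURL(url);
});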

Note that WebRTC and React only provide the building blocks of recording, such as obtaining audio streams, recording, and encoding; the interface design and control logic of the recording software still have to be developed to fit your actual needs. To ensure compatibility across different browsers and platforms, some browser-specific handling and adaptation may also be required, as in the sketch below.
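One concrete example of such adaptation: Safari generally does not produce audio/webm, so rather than hard-coding a container it is safer to pick the first type the current browser reports as supported (the candidate list below is only a common starting point):

// ts
// Pick the first recording MIME type the current browser supports (a sketch)
function pickSupportedMimeType(): string | undefined {
    const candidates = [
        "audio/webm;codecs=opus",
        "audio/webm",
        "audio/mp4", // Safari typically records to MP4/AAC
    ];
    return candidates.find((type) => MediaRecorder.isTypeSupported(type));
}

const mimeType = pickSupportedMimeType();
// Use the same type for both the recorder and the exported Blob so they stay consistent:
// const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});
// const blob = new Blob(recordedChunks.current, { type: mimeType ?? "audio/webm" });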

Summary

By combining Electron, WebRTC, and React, we can build an innovative recording application that integrates real-time recording with online sharing, giving users a more flexible, higher-quality recording experience. At the same time, developers benefit from the efficiency and convenience that Electron and React bring. We believe such recording software can become a strong solution for users' recording needs and drive further innovation in the field of recording.

Q&A

  • Q: Is there source code?

Of course. The address is: github.com/027xiguapi/…. If you are interested, feel free to discuss it together; forks and stars are also welcome.