Vue3 + Spring Boot 3 large file breakpoint resume upload

Preface:

1. Requirement: files larger than 10 MB must support resumable (breakpoint resume) upload and show a real-time progress bar.

2. There are currently two common ways to implement resumable upload; the main difference is whether the file is split on the front end or the back end. This article splits the file on the front end: the front end calls the file's slice() function, which cuts the file into Blob chunks that are passed to the backend.

3. The implementation mainly relies on two values kept in the database, the total number of shards and the index of the shard uploaded so far, which together determine where the upload resumes.

4. Related references:

Resume uploading after breakpoint: https://www.bilibili.com/video/BV1sv411p7Ee/
The concept of resumed downloading: https://blog.csdn.net/yjxkq99/article/details/128942133

5. The shard files uploaded for one file are stored in a dedicated folder. Resumable upload means that if the upload is interrupted after part of the file has been transferred, uploading the same file again continues from the last uploaded shard instead of starting over.

6. This article presents the implementation idea and flow chart, the final effect, and the problems encountered during development.

Ideas:

1. Step one: the file verification interface:

The front end computes an MD5 fingerprint from the file information and sends it to the backend together with the file stream, file name, file size, and business parameters. The backend derives the file type, the shard size, the total number of shards, the byte range of every shard that still needs to be uploaded (collected in a shard list), whether the last shard has already been uploaded, and the file ID. It saves this information in the database and returns it from this interface so the front end can decide what to upload next (a sketch of the request/response object follows below).
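For reference, here is a minimal sketch of what the FileVerify request/response object could look like; the fields are inferred from how the object is used in the backend code later in the article, and the use of Lombok is an assumption made for brevity:

import lombok.Data;
import org.springframework.web.multipart.MultipartFile;

import java.math.BigDecimal;
import java.util.List;

@Data
public class FileVerify {
    private MultipartFile file;         // the whole file (verify) or one shard (upload)
    private String fileId;
    private String fileName;
    private Long fileSize;              // file size in bytes
    private Long lastModified;          // last-modified timestamp sent by the front end
    private String chunkMd5;            // MD5 fingerprint calculated by the front end
    private Boolean chunkFlag;
    private String businessId;
    private String busMode;
    private BigDecimal shardTotal;      // total number of shards
    private BigDecimal shardIndex;      // index of the shard uploaded so far
    private BigDecimal shardSize;       // size of each shard in bytes
    private List<ChunkShard> shardList; // shards that still need to be uploaded
    private Boolean endFlag;            // true once the last shard has been uploaded

    /** Byte range of one shard, consumed by the front-end slice() call. */
    @Data
    public static class ChunkShard {
        private BigDecimal shardTotal;
        private BigDecimal shardIndex;
        private BigDecimal shardStart;  // inclusive start offset
        private BigDecimal shardEnd;    // exclusive end offset
    }
}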

2. Step two: the breakpoint resume (shard) upload interface:

The front end traverses the shard list returned by the verification interface and calls the shard upload interface once per shard, each time sending the Blob cut out by the slice() function together with the file ID and shard index. The byte range of each shard is calculated by the backend (accurate to the byte). After each upload the backend returns a flag indicating whether this was the last shard of the file (a sketch of the two endpoints follows below).
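The two interfaces could be exposed roughly as follows; the request paths, the FileService interface name, and the omitted response wrapper are assumptions here, and the real controller lives in the Gitee repository linked at the end of the article:

import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/file")
@RequiredArgsConstructor
public class FileController {

    private final FileService fileService;

    /** Step 1: verify the file and return the shard ranges that still need uploading. */
    @PostMapping("/verify")
    public FileVerify verifyFile(FileVerify fileVerify) {
        return fileService.verifyFile(fileVerify);
    }

    /** Step 2: upload one shard; endFlag is true after the last shard has been stored. */
    @PostMapping("/breakpointUpload")
    public FileVerify breakpointUpload(FileVerify fileVerify) {
        return fileService.breakpointUpload(fileVerify);
    }
}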

Large file breakpoint resume transfer flow chart:

Database design:

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for sys_file
-- ----------------------------
DROP TABLE IF EXISTS `sys_file`;
CREATE TABLE `sys_file` (
  `file_id` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL COMMENT 'Primary key ID',
  `business_id` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'Business primary key ID',
  `bus_mode` varchar(30) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT 'SYSTEM' COMMENT 'Business module type',
  `server_file_name` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'File name in the object storage server',
  `file_name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'file name',
  `file_type` varchar(30) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'file type',
  `file_url` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'file path',
  `file_size` decimal(30, 0) NULL DEFAULT NULL COMMENT 'File size (number of bytes)',
  `source_type` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT 'MANUAL' COMMENT 'source type',
  `chunk_flag` tinyint NULL DEFAULT 0 COMMENT 'Fragmentation flag: fragmentation-1; no fragmentation-0;',
  `chunk_folder` varchar(30) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'Cut folder',
  `chunk_md5` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'Fragmented MD5 encryption',
  `shard_total` decimal(10, 0) NULL DEFAULT NULL COMMENT 'Total number of shards',
  `shard_index` decimal(10, 0) NULL DEFAULT 0 COMMENT 'Current shard number',
  `shard_size` decimal(30, 0) NULL DEFAULT NULL COMMENT 'Size of each shard',
  PRIMARY KEY (`file_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_bin COMMENT = 'Attachment table' ROW_FORMAT = Dynamic;

SET FOREIGN_KEY_CHECKS = 1;
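The backend later reads and writes this table through a MyBatis-Plus FileInfo entity that is not listed in the article; below is a minimal sketch of how it might map the columns above. The field names are inferred from the getters and setters used in the service code, while the ID strategy and the use of Lombok are assumptions:

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

import java.math.BigDecimal;

@Data
@TableName("sys_file")
public class FileInfo {
    @TableId(value = "file_id", type = IdType.ASSIGN_UUID)
    private String fileId;
    private String businessId;
    private String busMode;
    private String serverFileName;  // object name in the storage server
    private String fileName;
    private String fileType;
    private String fileUrl;
    private BigDecimal fileSize;    // bytes
    private String sourceType;
    private Boolean chunkFlag;      // true = uploaded in shards
    private String chunkFolder;     // folder that holds the shard objects
    private String chunkMd5;        // MD5 fingerprint of the file
    private BigDecimal shardTotal;  // total number of shards
    private BigDecimal shardIndex;  // index of the shard uploaded so far
    private BigDecimal shardSize;   // size of each shard in bytes
}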

Front-end core code implementation:

Vue3 file upload component:
<el-form
    label-position="top"
    :inline="true"
    :model="fileForm"
    label-width="200px"
>
  <el-form-item label="File information">
    <el-upload
        class="upload-demo"
        :on-change="onChange"
        :on-remove="onRemove"
        :auto-upload="false"
    >
      <el-button size="small" type="primary">Select file</el-button>
    </el-upload>
  </el-form-item>
  <el-form-item>
    <el-button type="danger" size="large" @click="onUpload" round>Upload files</el-button>
  </el-form-item>
</el-form>
How to upload files:
 onUpload() {
      // When uploading files, carry other parameters to submit the content.
      // First request the verification file interface to verify whether the file has been uploaded and which shard it has been uploaded to.
      let realFile = this.fileForm.file.raw;

      let fileType = realFile.name.substring(realFile.name.lastIndexOf(".") + 1, realFile.name.length);
      let filePwd = realFile.name + realFile.size + fileType + realFile.lastModified;
      let filePwdMd5 = CryptoJS.MD5(filePwd).toString();

      let verifyData = new FormData();
      verifyData.append('file', realFile);
      verifyData.append('fileName', realFile.name);
      verifyData.append('fileSize', realFile.size);
      verifyData.append('lastModified', realFile.lastModified);
      verifyData.append('chunkMd5', filePwdMd5);
      //Business related fields
      verifyData.append('businessId', '1111111');
      verifyData.append('busMode', 'DEV_MODEL');

      // Verify the file and return the number of file fragments, file size, file ID, and interception information required for file fragmentation
      verifyFile(verifyData).then(res => {
        this.fileInfo = res.data.data;
        //The last fragment identifier
        if (this.fileInfo.endFlag) {
          // progress bar
          this.Progress.progress = 100;
        } else {
          // Progress bar: percentage of shards already uploaded
          this.Progress.progress = (this.fileInfo.shardIndex / this.fileInfo.shardTotal) * 100;
        }
        this.uploadShardFile(realFile);
      });
    },
The breakpoint resume upload method:
 uploadShardFile(realFile) {
      // Intercept the file stream according to the return value of the verification interface, use a for loop to repeatedly call the breakpoint resume file interface
      const asyncUpload = async () => {
        for (let i = 0; i < this.fileInfo.shardList.length; i++) {
          let item = this.fileInfo.shardList[i]
          let shardFile = realFile.slice(item.shardStart, item.shardEnd);
          let data = new FormData();
          data.append('fileId', this.fileInfo.fileId);
          data.append('file', shardFile);
          data.append('shardIndex', item.shardIndex);
          const res = await breakpointUpload(data)
          let uploadData = res.data.data;
          console.log(uploadData)
          //The last fragment identifier
          if (uploadData.endFlag) {
            //Progress bar calculation
            this.Progress.progress = 100;
          } else {
            // Progress bar: percentage of shards already uploaded
            this.Progress.progress = Math.round((uploadData.shardIndex / uploadData.shardTotal) * 100);
          }
          console.log((uploadData.shardIndex / uploadData.shardTotal) * 100)
          console.log(uploadData.shardIndex)
        }
      }
      asyncUpload()

      this.getTableDate();
    },

Backend core code implementation:

File verification interface:
 public FileVerify verifyFile(FileVerify fileVerify) {
        //When verifying the file, the file information will be saved in the database.
        log.info("fileVerify-->{}", fileVerify);
        LambdaQueryWrapper<FileInfo> queryWrapper = Wrappers.lambdaQuery(new FileInfo());
        queryWrapper.eq(StrUtil.isNotBlank(fileVerify.getChunkMd5()), FileInfo::getChunkMd5, fileVerify.getChunkMd5());
        queryWrapper.eq(StrUtil.isNotBlank(fileVerify.getFileId()), FileInfo::getFileId, fileVerify.getFileId());

        FileInfo existFile = fileMapper.selectOne(queryWrapper);
        // When editing a file, both its file ID and its MD5 are passed in; if no matching file is found in that case, an exception is thrown.
        Assert.isFalse(StrUtil.isNotBlank(fileVerify.getChunkMd5()) && StrUtil.isNotBlank(fileVerify.getFileId()) && Objects.isNull(existFile), "Both the file ID and the file MD5 were provided, but no matching file was found.");
        if (Objects.isNull(existFile)) {
            //Save the file information into the database and return the file ID and MD5
            existFile = getFileInfoByVerifyFile(fileVerify);
            fileMapper.insert(existFile);
        }
        fileVerify.setFileId(existFile.getFileId());
        fileVerify.setChunkMd5(existFile.getChunkMd5());
        fileVerify.setChunkFlag(existFile.getChunkFlag());
        fileVerify.setShardTotal(existFile.getShardTotal());
        fileVerify.setShardSize(existFile.getShardSize());
        fileVerify.setShardIndex(existFile.getShardIndex());

        if (existFile.getShardIndex().equals(existFile.getShardTotal())) {
            fileVerify.setChunkFlag(Boolean.TRUE);
        }

        fileVerify.setFile(null);

        // Generate a list of file fragment values
        List<FileVerify.ChunkShard> shardList = getShardList(fileVerify);
        fileVerify.setShardList(shardList);
        fileVerify.setEndFlag(CollUtil.isEmpty(shardList));

        return fileVerify;
    }

    /**
     * Construct the start value and end value of the fragments that need to be intercepted from the file, which are used in the front-end for loop to call the fragment upload interface.
     */
    private List<FileVerify.ChunkShard> getShardList(FileVerify fileVerify) {
        BigDecimal shardTotal = fileVerify.getShardTotal();
        List<FileVerify.ChunkShard> shardList = CollUtil.newArrayList();

        for (int i = fileVerify.getShardIndex().intValue(); i < shardTotal.intValue(); i++) {
            FileVerify.ChunkShard chunkShard = new FileVerify.ChunkShard();
            chunkShard.setShardTotal(fileVerify.getShardTotal());
            chunkShard.setShardIndex(BigDecimal.valueOf(i));
            chunkShard.setShardStart(BigDecimal.valueOf(i).multiply(fileVerify.getShardSize()));
            chunkShard.setShardEnd(BigDecimal.valueOf(i).multiply(fileVerify.getShardSize()).add(fileVerify.getShardSize()));
            if (i == shardTotal.intValue() - 1) {
                //If it is the last fragment, then intercept the size of the file
                chunkShard.setShardEnd(BigDecimal.valueOf(fileVerify.getFileSize()));
            }
            shardList.add(chunkShard);
        }

        return shardList;
    }

    // Each 10MB is used as a shard
    private int shardSizeInt = 10 * 1024 * 1024;

    /**
     * Encapsulate the parameter information required by the verification file interface
     */
    private FileInfo getFileInfoByVerifyFile(FileVerify fileVerify) {
        FileInfo fileInfo = new FileInfo();
        fileInfo.setBusinessId(fileVerify.getBusinessId());
        fileInfo.setBusMode(fileVerify.getBusMode());
        fileInfo.setFileName(fileVerify.getFileName());
        fileInfo.setFileType(FileNameUtil.getSuffix(fileVerify.getFileName()));
        fileInfo.setFileSize(BigDecimal.valueOf(fileVerify.getFileSize()));

        fileInfo.setServerFileName(UUID.randomUUID().toString(true).toUpperCase() + "." + fileInfo.getFileType());

        fileInfo.setChunkMd5(fileVerify.getChunkMd5());
        fileInfo.setChunkFlag(Boolean.TRUE);
        fileInfo.setChunkFolder(LocalDate.now().format(DateTimeFormatter.ofPattern("yyyy/MM/dd")) + "/" + RandomUtil.randomString(6).toUpperCase());

        fileInfo.setShardIndex(BigDecimal.ZERO);
        // Upload in 10MB shards
        // Total shards = file size in bytes divided by 10*1024*1024, rounded up with no decimal places
        BigDecimal shardSize = BigDecimal.valueOf(shardSizeInt);
        fileInfo.setShardSize(shardSize);
        BigDecimal shardTotal = fileInfo.getFileSize().divide(shardSize, 0, RoundingMode.UP);
        fileInfo.setShardTotal(shardTotal);

        return fileInfo;
    }
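To make the shard arithmetic concrete, here is a small self-contained example (the 25 MB file size is invented for illustration) that reproduces the calculation done in getFileInfoByVerifyFile and getShardList:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class ShardMathExample {
    public static void main(String[] args) {
        BigDecimal fileSize = BigDecimal.valueOf(25L * 1024 * 1024);  // 26,214,400 bytes (hypothetical)
        BigDecimal shardSize = BigDecimal.valueOf(10 * 1024 * 1024);  // 10,485,760 bytes per shard

        // Round up with no decimal places: 2.5 -> 3 shards
        BigDecimal shardTotal = fileSize.divide(shardSize, 0, RoundingMode.UP);

        for (int i = 0; i < shardTotal.intValue(); i++) {
            BigDecimal start = BigDecimal.valueOf(i).multiply(shardSize);
            BigDecimal end = (i == shardTotal.intValue() - 1)
                    ? fileSize                   // the last shard is clamped to the file size
                    : start.add(shardSize);
            // Prints: shard 0: [0, 10485760), shard 1: [10485760, 20971520), shard 2: [20971520, 26214400)
            System.out.println("shard " + i + ": [" + start + ", " + end + ")");
        }
    }
}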
Breakpoint resume upload interface:
 public FileVerify breakpointUpload(FileVerify fileVerify) {
        Assert.notBlank(fileVerify.getFileId(), "The file ID cannot be blank.");
        FileInfo existFile = fileMapper.selectById(fileVerify.getFileId());
        Assert.notNull(existFile, "This file information was not found");
        fileVerify.setShardTotal(existFile.getShardTotal());

        // upload files
        MultipartFile file = fileVerify.getFile();
        // Customize the name of the shard and the upload path
        String shardFileName = getShardFileName(existFile, fileVerify.getShardIndex());
        MinioUtils.me().upLoaderShardFile(file, shardFileName);
        log.info("The name of the successfully uploaded shard file is ->{}", shardFileName);
        //Update file information
        FileInfo fileInfo = new FileInfo();
        fileInfo.setFileId(fileVerify.getFileId());
        //Update the uploaded file fragment subscript
        fileInfo.setShardIndex(fileVerify.getShardIndex());
        fileMapper.updateById(fileInfo);
        log.error("Update file fragmentation information to database-->{}", JSONUtil.toJsonStr(fileInfo));
        fileVerify.setFile(null);
        // If the uploaded shard index equals (shard total - 1), the last shard has been uploaded successfully and all shard files can be merged.
        if (fileVerify.getShardIndex().equals(existFile.getShardTotal().subtract(BigDecimal.ONE))) {
            fileVerify.setEndFlag(Boolean.TRUE);
            //Start merging fragmented files
            log.error("All fragmented files here have been uploaded, and the files are being merged...");
            mergeShardFile(existFile);
        }
        return fileVerify;
    }

    /**
     * Merge multiple fragmented files.
     */
    private void mergeShardFile(FileInfo existFile) {
        // Get all the fragmented file streams, assemble them into one stream, and write them to the remote file.
        BigDecimal shardTotal = existFile.getShardTotal();
        byte[] allFileByte = new byte[existFile.getFileSize().intValue()];
        int startLength = 0;
        for (int i = 0; i < shardTotal.intValue(); i++) {
            existFile.setShardIndex(BigDecimal.valueOf(i));
            // Build the names of all shard files in the file server
            // Get the file stream from the file server by file name and merge it into one file stream.
            String fileName = getShardFileName(existFile, BigDecimal.valueOf(i));
            byte[] fileByte = MinioUtils.me().getFileStream(fileName);
            /**
             * System.arraycopy(src, srcPos, dest, destPos, length)
             * Parameters:
             * src: source byte array
             * srcPos: starting position in the source array (0 is valid)
             * dest: destination byte array (where the copied bytes are stored)
             * destPos: starting position in the destination array (0 is valid)
             * length: number of bytes to copy
             */
            System.arraycopy(fileByte, 0, allFileByte, startLength, fileByte.length);
            startLength = startLength + fileByte.length;
        }
        InputStream inputStream = new ByteArrayInputStream(allFileByte);
        // Then upload the file stream to the minio server
        //Specify the stored file name
        String finalFileName = getFinalFileName(existFile);
        // Upload files
        MinioUtils.me().upLoaderFileByByte(inputStream, finalFileName);
        log.info("The name of the successfully uploaded fragmented file is ->{}", finalFileName);
        //Update file information
        FileInfo fileInfo = new FileInfo();
        fileInfo.setFileId(existFile.getFileId());
        // Record the final path of the merged file
        fileInfo.setFileUrl(finalFileName);
        log.error("Update successful data-->{}", JSONUtil.toJsonStr(fileInfo));
        fileMapper.updateById(fileInfo);
    }

    /**
     * Generate the final required file name + path
     */
    private String getFinalFileName(FileInfo existFile) {
        return existFile.getChunkFolder() + "/" + existFile.getServerFileName();
    }

    /**
     * Generate file fragment name + path
     */
    private String getShardFileName(FileInfo fileInfo, BigDecimal shardIndex) {
        if (Objects.isNull(shardIndex)) {
            return fileInfo.getChunkFolder() + "/" + fileInfo.getServerFileName();
        }
        return fileInfo.getChunkFolder() + "/" + fileInfo.getServerFileName() + "." + shardIndex;
    }
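
The MinioUtils helper used above is not listed in the article; below is a minimal sketch of what its three methods could look like, assuming the MinIO Java SDK 8.x with a placeholder endpoint, bucket, and credentials (the real implementation is in the Gitee repository linked at the end):

import io.minio.GetObjectArgs;
import io.minio.MinioClient;
import io.minio.PutObjectArgs;
import org.springframework.web.multipart.MultipartFile;

import java.io.InputStream;

public class MinioUtils {

    private static final MinioUtils INSTANCE = new MinioUtils();
    private static final String BUCKET = "file-bucket";            // placeholder bucket name

    private final MinioClient client = MinioClient.builder()
            .endpoint("http://127.0.0.1:9000")                     // placeholder endpoint
            .credentials("minioadmin", "minioadmin")               // placeholder credentials
            .build();

    public static MinioUtils me() {
        return INSTANCE;
    }

    /** Upload one shard of a multipart file under the given object name. */
    public void upLoaderShardFile(MultipartFile file, String objectName) {
        try (InputStream in = file.getInputStream()) {
            client.putObject(PutObjectArgs.builder()
                    .bucket(BUCKET)
                    .object(objectName)
                    .stream(in, file.getSize(), -1)
                    .build());
        } catch (Exception e) {
            throw new RuntimeException("Failed to upload shard " + objectName, e);
        }
    }

    /** Read a stored shard back as a byte array so it can be merged. */
    public byte[] getFileStream(String objectName) {
        try (InputStream in = client.getObject(GetObjectArgs.builder()
                .bucket(BUCKET)
                .object(objectName)
                .build())) {
            return in.readAllBytes();
        } catch (Exception e) {
            throw new RuntimeException("Failed to read shard " + objectName, e);
        }
    }

    /** Upload the merged file stream under its final object name. */
    public void upLoaderFileByByte(InputStream inputStream, String objectName) {
        try (InputStream in = inputStream) {
            client.putObject(PutObjectArgs.builder()
                    .bucket(BUCKET)
                    .object(objectName)
                    // object size unknown up front, so stream in 10 MB parts
                    .stream(in, -1, 10L * 1024 * 1024)
                    .build());
        } catch (Exception e) {
            throw new RuntimeException("Failed to upload merged file " + objectName, e);
        }
    }
}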

Problems encountered during the process:

1. Should the file MD5 be calculated on the front end or the back end? It needs to be calculated on the front end, so that the verification interface can recognize the same file across upload attempts.

2. What is a reasonable shard size? Examples found online use 5 MB to 10 MB; here it is set to 10 MB.

3. When the front end loops over the shard list and calls the upload interface, the requests can complete in an inconsistent order: a later shard may finish first, and the database update then records the wrong shard index. The front end therefore uses async + await to block on each response, ensuring the shard upload interface is called strictly in the order of the shard list.

4. Should the shard calculation be done on the front end or the back end? It is done on the back end with the BigDecimal type, rounding up with no decimal places.

The final effect:

Project source code address:

Front-end: hulunbuir-front/src/views/pages/FilePageView.vue – Gitee.com

Backend: hulunbuir-study/src/main/java/com/hulunbuir/study/infra/FileServiceImpl.java – Gitee.com