Spring Boot implements large file fragment upload

Add dependencies in Maven

The following dependencies need to be added to the project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
 
<dependency>
    <groupId>commons-fileupload</groupId>
    <artifactId>commons-fileupload</artifactId>
    <version>1.4</version>
</dependency>

Create a controller class

Create a controller class to handle upload requests. In this class, the following operations need to be implemented:

  • Receive each uploaded chunk
  • Save each chunk to a temporary part file on disk
  • When all chunks have been uploaded, combine the parts into one complete file

Here is a sample code:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class FileUploadController {

    private static final String UPLOAD_DIRECTORY = "/tmp/uploads";

    @PostMapping("/upload")
    public ResponseEntity<String> upload(@RequestParam("file") MultipartFile file,
                                         @RequestParam("fileName") String fileName,
                                         @RequestParam("chunkNumber") int chunkNumber,
                                         @RequestParam("totalChunks") int totalChunks) throws IOException {

        File uploadDirectory = new File(UPLOAD_DIRECTORY);
        if (!uploadDirectory.exists()) {
            uploadDirectory.mkdirs();
        }

        // Save the current chunk as a temporary ".partN" file
        File destFile = new File(UPLOAD_DIRECTORY + File.separator + fileName + ".part" + chunkNumber);
        FileUtils.copyInputStreamToFile(file.getInputStream(), destFile);

        // When the last chunk arrives, append all parts to the target file in order
        if (chunkNumber == totalChunks) {
            String targetFilePath = UPLOAD_DIRECTORY + File.separator + fileName;
            try (FileOutputStream fos = new FileOutputStream(targetFilePath, true)) {
                for (int i = 1; i <= totalChunks; i++) {
                    File partFile = new File(UPLOAD_DIRECTORY + File.separator + fileName + ".part" + i);
                    FileUtils.copyFile(partFile, fos);
                    partFile.delete();
                }
            }
        }

        return ResponseEntity.ok("Upload successful");
    }
}
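The merge step in the controller can be exercised locally without running a server: write a few `.partN` files and append them in chunk order, which is exactly what the merge branch above does. A minimal standalone sketch (file names and chunk contents are made up for the demo):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;

public class MergeDemo {

    // Append each ".partN" file to the target in chunk order, deleting
    // parts as we go -- the same steps as the merge branch in the controller.
    static void merge(File dir, String fileName, int totalChunks) throws IOException {
        File target = new File(dir, fileName);
        try (FileOutputStream fos = new FileOutputStream(target, true)) {
            for (int i = 1; i <= totalChunks; i++) {
                File part = new File(dir, fileName + ".part" + i);
                fos.write(Files.readAllBytes(part.toPath()));
                part.delete();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("uploads").toFile();
        String[] chunks = {"hel", "lo ", "world"};
        for (int i = 0; i < chunks.length; i++) {
            Files.write(new File(dir, "demo.txt.part" + (i + 1)).toPath(),
                        chunks[i].getBytes());
        }
        merge(dir, "demo.txt", chunks.length);
        System.out.println(new String(Files.readAllBytes(new File(dir, "demo.txt").toPath())));
        // prints "hello world"
    }
}
```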

Front-end implementation

On the front end, use JavaScript to split the file into chunks and send each chunk to the back end. Upload the chunks in order, since the controller merges as soon as it sees the last chunk number. Here is sample code:

async function uploadFile(file) {
    const chunkSize = 1024 * 1024; // size of each chunk (1MB)
    const totalChunks = Math.ceil(file.size / chunkSize); // total number of chunks

    let currentChunk = 1;
    let startByte = 0;

    // Split the file into chunks and upload them one at a time;
    // awaiting each request keeps the chunks in order, so the server
    // can safely merge as soon as the last chunk arrives
    while (startByte < file.size) {
        const endByte = Math.min(startByte + chunkSize, file.size);
        const chunk = file.slice(startByte, endByte);

        const formData = new FormData();
        formData.append('file', chunk);
        formData.append('fileName', file.name);
        formData.append('chunkNumber', currentChunk);
        formData.append('totalChunks', totalChunks);

        await axios.post('/upload', formData);

        startByte += chunkSize;
        currentChunk++;
    }
}
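The chunk arithmetic used above (ceiling division for `totalChunks`, `Math.min` for the final chunk's end) can be sanity-checked in isolation. A small sketch in Java, with an assumed 2.5 MB file size:

```java
public class ChunkMath {
    public static void main(String[] args) {
        long fileSize = 2_500_000;          // hypothetical 2.5 MB file
        long chunkSize = 1024 * 1024;       // 1 MB per chunk
        // Ceiling division: the last chunk may be smaller than chunkSize
        long totalChunks = (fileSize + chunkSize - 1) / chunkSize;
        System.out.println(totalChunks);    // prints 3
        for (int i = 0; i < totalChunks; i++) {
            long start = i * chunkSize;
            long end = Math.min(start + chunkSize, fileSize); // clamp final chunk
            System.out.println(start + "-" + end);
        }
    }
}
```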

Through the above steps, you can use Spring Boot to upload large files in chunks.

The second approach:

package cn.js.Controller;

import org.apache.commons.io.FileUtils;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.servlet.http.HttpServletRequest;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

/**
 * @description:
 * 1. Front-end implementation:
 *
 * Use JavaScript to implement the upload on the front end. Before sending, cut the file into
 * several small chunks (each between 1MB and 10MB in size) and send them to the backend one
 * at a time.
 *
 * 2. Backend implementation:
 *
 * In the backend controller, receive each uploaded chunk and store it as a temporary file.
 * When all chunks have been uploaded, merge the temporary files into a complete file.
 *
 * The specific implementation is as follows:
 **/


/**
 * @description:
 * The code below defines an endpoint "/file/upload" for receiving uploaded chunks. Each received
 * chunk is written to a temporary file whose name is prefixed with the upload task's guid. When
 * the last chunk arrives, the mergeFile() method merges all chunks into a complete file.
 *
 * In mergeFile(), we first collect all temporary files prefixed with the guid and sort them by
 * chunk index. We then append the contents of each file to the destination file in order, delete
 * each temporary file as it is consumed, and finally log that the merge is complete.
 *
 * Note that, to prevent file name conflicts, a unique guid should be generated for each upload
 * task and used as a prefix for the file names. In addition, if an upload task is never
 * completed, its temporary files need to be cleaned up periodically.
 **/
@RestController
@RequestMapping("/file")
public class FileUploadController {

    private static final String UPLOAD_PATH = "D:/upload/";

    @PostMapping("/upload")
    public String upload(HttpServletRequest request) throws Exception {
        String fileName = request.getHeader("fileName");
        String guid = request.getHeader("guid");
        int chunkIndex = Integer.parseInt(request.getHeader("chunkIndex"));   // chunk index, starting from 0
        int totalChunks = Integer.parseInt(request.getHeader("totalChunks")); // total number of chunks

        // Make sure the upload directory exists
        new File(UPLOAD_PATH).mkdirs();

        // Save the current chunk as a temporary file prefixed with the guid
        File chunkFile = new File(UPLOAD_PATH + guid + ".part" + chunkIndex);
        try (InputStream is = request.getInputStream();
             FileOutputStream fos = new FileOutputStream(chunkFile)) {
            byte[] buf = new byte[1024];
            int len;
            while ((len = is.read(buf)) != -1) {
                fos.write(buf, 0, len);
            }
        }

        // If this is the last chunk, merge all chunks into the complete file
        if (chunkIndex == totalChunks - 1) {
            mergeFile(fileName, guid);
        }

        return "success";
    }

    private void mergeFile(String fileName, String guid) throws Exception {
        String ext = fileName.substring(fileName.lastIndexOf("."));
        // Name the final file by guid to avoid conflicts between uploads
        File newFile = new File(UPLOAD_PATH + guid + ext);

        // Collect all temporary chunk files for this guid and sort them by
        // chunk index (numeric sort, so ".part10" sorts after ".part2")
        String prefix = guid + ".part";
        List<File> files = Arrays.asList(
                new File(UPLOAD_PATH).listFiles((dir, name) -> name.startsWith(prefix)));
        Collections.sort(files, Comparator.comparingInt(
                (File f) -> Integer.parseInt(f.getName().substring(prefix.length()))));

        try (FileOutputStream fos = new FileOutputStream(newFile)) {
            byte[] buf = new byte[1024];
            int len;
            for (File f : files) {
                try (FileInputStream fis = new FileInputStream(f)) {
                    while ((len = fis.read(buf)) != -1) {
                        fos.write(buf, 0, len);
                    }
                }
                f.delete();
            }
        }

        System.out.println("File merge completed: " + newFile.getAbsolutePath());
    }
}
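The comment above mentions two housekeeping concerns: generating a unique guid per upload task and periodically removing temporary files left behind by abandoned uploads. A sketch of both (the helper names and the 24-hour cutoff are illustrative, not part of the original code):

```java
import java.io.File;
import java.nio.file.Files;
import java.util.UUID;

public class UploadHousekeeping {

    // Generate a unique prefix for one upload task
    static String newGuid() {
        return UUID.randomUUID().toString();
    }

    // Delete temporary ".part" files whose last modification is older
    // than maxAgeMillis; returns how many files were removed
    static int cleanStaleParts(File dir, long maxAgeMillis) {
        int removed = 0;
        long cutoff = System.currentTimeMillis() - maxAgeMillis;
        File[] parts = dir.listFiles((d, name) -> name.contains(".part"));
        if (parts == null) return 0;
        for (File f : parts) {
            if (f.lastModified() < cutoff && f.delete()) {
                removed++;
            }
        }
        return removed;
    }

    public static void main(String[] args) throws Exception {
        File dir = Files.createTempDirectory("upload").toFile();
        File stale = new File(dir, newGuid() + ".part0");
        stale.createNewFile();
        // Pretend this chunk was written 48 hours ago
        stale.setLastModified(System.currentTimeMillis() - 48L * 3600 * 1000);
        System.out.println(cleanStaleParts(dir, 24L * 3600 * 1000)); // prints 1
    }
}
```

A cleanup method like this would typically run on a schedule (for example via Spring's `@Scheduled`) rather than being called by hand.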