Spring Boot Java - Azure Blob Storage - Status code 400 - InvalidBlobOrBlock - The specified blob or block content is invalid

w46czmvw posted on 2023-11-17 in Spring

I need to upload large zip files to Azure Blob Storage. Each archive contains XML files, and each XML file is roughly 200-300 MB. I upload the file in chunks from React JS; on the server side I use Spring Boot to receive the chunks, and I want to process all the chunks together. But at blockBlobClient.stageBlock() I get a 400 Bad Request somewhere, with the message InvalidBlobOrBlock - The specified blob or block content is invalid. Here is the code I wrote; please help me resolve this.

**Edit:** It seems that staging block 11 in blockBlobClient.stageBlock() is what returns the 400.
React code: chunk processing and upload

const uploadInsights = async () => {
    setLoading(true);
    const chunkSize = 2 * 1024 * 1024;
    const totalChunks = Math.ceil(file.size / chunkSize);
    let uploadedChunks = 0;
    for (let chunkNumber = 0; chunkNumber < totalChunks; chunkNumber++) {
      const start = chunkNumber * chunkSize;
      const end =
        chunkNumber === totalChunks - 1
          ? file.size
          : (chunkNumber + 1) * chunkSize;
      const chunk = file.slice(start, end);
      console.log('chunk no', chunkNumber, 'chunk', chunk);
      const formData = new FormData();
      formData.append('filename', file.name);
      formData.append('fileChunk', chunk);
      // @ts-ignore
      formData.append('chunkNumber', chunkNumber);
      // @ts-ignore
      formData.append('totalChunks', totalChunks);

      try {
        let response = await analyticsService.uploadFeatureAnalyticsInsights(
          formData,
        );
        if (response?.data?.success) {
          uploadedChunks++;
          const newProgress = (uploadedChunks / totalChunks) * 100;
          setProgress(newProgress);
        } else {
          console.log(
            `Error uploading chunk ${chunkNumber + 1}: ${
              response?.data?.status
            }`,
          );
        }
      } catch (error: any) {
        console.log(
          `Error uploading chunk ${chunkNumber + 1}: ${
            error?.response?.status
          }`,
        );
      }
    }
    setLoading(false);
    setProgress(0);
  };


**Java/Spring Boot**

uploadInsights(...) is the method that handles the file chunks.

import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.specialized.BlockBlobClient;
import com.app.exception.general.AppException;

import lombok.NonNull;
import lombok.RequiredArgsConstructor;
import lombok.extern.log4j.Log4j2;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

@Log4j2
@Service
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class FeatureAnalyticsService {

    private static final Map<String, List<String>> chunkIds = new ConcurrentHashMap<>();

    private static final Map<String, List<byte[]>> fileChunks = new ConcurrentHashMap<>();

    private static final Map<String, List<ByteArrayInputStream>> chunkStreams =
            new ConcurrentHashMap<>();

    private final BlobServiceClient blobServiceClient;

    private static byte @NonNull [] getChunkBytes(@NonNull MultipartFile fileChunk)
            throws IOException {
        return fileChunk.getInputStream().readAllBytes();
    }

    private static void updateChunkIds(String filename, String base64ChunkId) {
        List<String> chunkNumbers;
        if (chunkIds.containsKey(filename)) {
            chunkNumbers = chunkIds.get(filename);
        } else {
            chunkNumbers = new ArrayList<>();
        }
        chunkNumbers.add(base64ChunkId);
        chunkIds.put(filename, chunkNumbers);
    }

    private static void updateChunkStream(String filename, ByteArrayInputStream chunk) {
        List<ByteArrayInputStream> chunks;
        if (chunkStreams.containsKey(filename)) {
            chunks = chunkStreams.get(filename);
        } else {
            chunks = new ArrayList<>();
        }
        chunks.add(chunk);
        chunkStreams.put(filename, chunks);
    }

    private static void updateChunks(String filename, byte[] chunkBytes) {
        List<byte[]> chunks;
        if (fileChunks.containsKey(filename)) {
            chunks = fileChunks.get(filename);
        } else {
            chunks = new ArrayList<>();
        }
        chunks.add(chunkBytes);
        fileChunks.put(filename, chunks);
    }

    public boolean uploadInsights(
            String filename, @NonNull MultipartFile fileChunk, int chunkNumber, int totalChunks) {
        BlobClient blobClient = getBlobClient(filename);
        String base64ChunkId =
                Base64.getEncoder().encodeToString(String.valueOf(chunkNumber).getBytes());
        updateChunkIds(filename, base64ChunkId);
        try {
            byte[] chunkBytes = getChunkBytes(fileChunk);
            updateChunks(filename, chunkBytes);
            ByteArrayInputStream chunk = new ByteArrayInputStream(chunkBytes);
            updateChunkStream(filename, chunk);
        } catch (IOException e) {
            throw new AppException(e.getMessage());
        }
        if (chunkNumber == totalChunks - 1) {
            List<String> chunkNumbers = chunkIds.get(filename);
            List<byte[]> chunkBytes = fileChunks.get(filename);
            List<ByteArrayInputStream> chunks = chunkStreams.get(filename);
            BlockBlobClient blockBlobClient = blobClient.getBlockBlobClient();
            try {
                for (int i = 0; i < chunkNumbers.size(); i++) {
                    log.info("Chunk Number {}, Chunk Size {}", i, chunkBytes.get(i).length);
                    blockBlobClient.stageBlock(
                            chunkNumbers.get(i), chunks.get(i), chunkBytes.get(i).length); // error area
                }
                blockBlobClient.commitBlockList(chunkNumbers);
                chunkIds.remove(filename);
                fileChunks.remove(filename);
                chunkStreams.remove(filename);
            } catch (Exception e) {
                log.error(e.getMessage());
                chunkIds.remove(filename);
                fileChunks.remove(filename);
                chunkStreams.remove(filename);
                return false;
            }
        }
        return true;
    }

    private @NonNull BlobClient getBlobClient(String filename) {
        return blobServiceClient.createBlobContainerIfNotExists("uploads").getBlobClient(filename);
    }
}

Server-side error:

Chunk Number 0, Chunk Size 2097152
Chunk Number 1, Chunk Size 2097152
Chunk Number 2, Chunk Size 2097152
Chunk Number 3, Chunk Size 2097152
Chunk Number 4, Chunk Size 2097152
Chunk Number 5, Chunk Size 2097152
Chunk Number 6, Chunk Size 2097152
Chunk Number 7, Chunk Size 2097152
Chunk Number 8, Chunk Size 2097152
Chunk Number 9, Chunk Size 2097152
Chunk Number 10, Chunk Size 2097152
Status code 400, "<?xml version="1.0" encoding="utf-8"?><Error><Code>InvalidBlobOrBlock</Code><Message>The specified blob or block content is invalid.
RequestId:30fc912d-801e-0014-0245-12c1d4000000
Time:2023-11-08T13:16:08.4879527Z</Message></Error>"

**Answer (gzszwxb4):**

I'm not sure whether this is an Azure limitation. I increased the chunk size so that fewer than 11 blocks are staged, and that solved the problem.
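The behavior above is consistent with a documented Azure requirement: for a given blob, every block ID passed to Put Block must decode to the same length. Encoding String.valueOf(chunkNumber) gives one-byte IDs for chunks 0-9 but a two-byte ID for chunk 10, so the eleventh stageBlock call is rejected with InvalidBlobOrBlock. Instead of capping the block count, you can zero-pad the chunk number before Base64-encoding it. A minimal sketch (class and method names here are illustrative, not part of the original code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BlockIds {

    // All block IDs within one blob must decode to the same byte length,
    // so pad the chunk number to a fixed width before Base64-encoding.
    static String blockId(int chunkNumber) {
        String padded = String.format("%06d", chunkNumber);
        return Base64.getEncoder()
                .encodeToString(padded.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // "000009" and "000010" have the same length, so their encoded
        // IDs do too; staging block 11 no longer differs from the rest.
        System.out.println(blockId(9));
        System.out.println(blockId(10));
    }
}
```

With equal-length IDs, the original 2 MB chunk size works for any number of blocks up to Azure's per-blob limit, so there is no need to keep the staged-block count below 11.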
