How to read large zip files in PySpark

wfypjpf4  posted 2022-11-28  in Spark

I have n .zip files on S3 that I want to process and extract some data from. Each zip file contains a single JSON file. Spark can read .gz files natively, but I haven't found any way to read the data inside a .zip file. Can anyone help me figure out how to process large zip files on Spark with Python? I came across options such as newAPIHadoopFile, but had no luck with them and found no way to use them from PySpark. Note that the zip files are > 1 GB, and some are 20 GB.
Here is the code I'm using:

import zipfile
import io

file_name = "s3 file path for zip file"

def zip_extract(x):
    # x is a (path, bytes) pair from binaryFiles; open the archive fully in memory
    in_memory_data = io.BytesIO(x[1])
    file_obj = zipfile.ZipFile(in_memory_data, "r")
    files = file_obj.namelist()
    # Return {inner file name: raw bytes} for every file in the archive
    return dict(zip(files, [file_obj.open(file).read() for file in files]))

zips = sc.binaryFiles(file_name)
files_data = zips.map(zip_extract)

But it fails with the error below. The instance I'm using is r4.2xlarge.

Exit code: 52
Stack trace: ExitCodeException exitCode=52: 
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0
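
For what it's worth, when the archives are small enough to fit in a single task's memory, the RDD of {filename: bytes} dicts built above can be flattened into JSON records along these lines. This is only a rough sketch: it assumes each inner file is newline-delimited JSON, and json.loads is just one way to parse it.

import json

# files_data is the RDD of {filename: bytes} dicts built by zip_extract above;
# the newline-delimited-JSON assumption is only for illustration
json_lines = files_data.flatMap(
    lambda d: [line
               for content in d.values()
               for line in content.decode("utf-8").splitlines()
               if line.strip()]
)
records = json_lines.map(json.loads)
print(records.take(5))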

des4xlb01#

I read the contents of the zip file in chunks and processed those chunks with Spark. That worked well for me and let me read zip files larger than 10 GB. Here is a sample snippet:

max_data_length = 10000                # lines per chunk

# zip_file is a local path to (or file object for) the downloaded archive
z = zipfile.ZipFile(zip_file)
data = []
counter = 1                            # number of chunks processed so far
with z.open(z.infolist()[0]) as f:
    line_counter = 0
    for line in f:
        # Append file contents to list
        data.append(line)
        line_counter = line_counter + 1
        # Once the chunk reaches max_data_length lines, hand it to Spark,
        # then reset the counters and the buffer
        if not line_counter % max_data_length:
            # Spark processing like:
            df_rdd = spark.sparkContext.parallelize(data)

            # Reset counters and data list
            counter = counter + 1
            line_counter = 0
            data = []

# Don't forget the final partial chunk left over after the loop
if data:
    df_rdd = spark.sparkContext.parallelize(data)
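
Building on the snippet above, here is a minimal end-to-end sketch of the same chunked approach: download the archive from S3 to the driver's local disk (zipfile needs a seekable file), then turn each chunk of lines into a DataFrame. The bucket, key, and local path below are placeholders, and the spark.read.json call assumes the inner file is newline-delimited JSON.

import zipfile

import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

bucket = "my-bucket"                 # placeholder bucket name
key = "path/to/archive.zip"          # placeholder object key
local_path = "/tmp/archive.zip"      # download target on the driver

# zipfile needs random access, so pull the object down to local disk first
boto3.client("s3").download_file(bucket, key, local_path)

max_data_length = 100000
dataframes = []

with zipfile.ZipFile(local_path) as z:
    with z.open(z.infolist()[0]) as f:
        chunk = []
        for line in f:
            chunk.append(line.decode("utf-8"))
            if len(chunk) == max_data_length:
                # Each chunk of JSON lines becomes its own DataFrame
                dataframes.append(spark.read.json(spark.sparkContext.parallelize(chunk)))
                chunk = []
        # Final partial chunk
        if chunk:
            dataframes.append(spark.read.json(spark.sparkContext.parallelize(chunk)))

If the per-chunk DataFrames share a schema, they can be merged with functools.reduce(DataFrame.union, dataframes), or, better for very large archives, written out incrementally instead of being kept in a list.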
