JSONDecodeError when trying to read and format multiple JSON files from a directory in Python

kuarbcqp · posted 2023-03-04 in Python

I'm trying to read and format multiple JSON files from a directory using Python. I created a function load_json_to_dataframe to load and format the JSON data into a pandas DataFrame, and a function read_json_files to read each file into a DataFrame and append it to a list. However, whenever I run the code I get a JSONDecodeError.
Here is the code I'm using:

import os
import pandas as pd
import json

def load_json_to_dataframe(json_file_path):
    with open(json_file_path, 'r') as json_file:
        doc = json.load(json_file)
        return pd.json_normalize(doc)

def read_json_files(folder_path):
    dataframes = []
    json_files = os.listdir(folder_path)
    for json_file in json_files:
        if json_file.endswith('.json'):
            df = load_json_to_dataframe(os.path.join(folder_path, json_file))
            dataframes.append(df)
    return pd.concat(dataframes, ignore_index=True)

folder_path = 'path/to/json/files'
combined_dataframe = read_json_files(folder_path)

This is the error message I get:

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

I'm not sure what's causing this error or how to fix it. Can anyone help me figure out what I'm doing wrong? Thanks in advance.
Here is a sample of my data: https://drive.google.com/file/d/1h2J-e0cF9IbbWVO8ugrXMGdQTn-dGtsA/view?usp=sharing
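
A quick way to narrow this down is to parse each file on its own and report the name of any file that fails. This is a minimal sketch reusing the placeholder path from above:

import os
import json

folder_path = 'path/to/json/files'

# Try to decode every .json file individually and print the
# name of each file that json.load cannot parse
for name in os.listdir(folder_path):
    if name.endswith('.json'):
        try:
            with open(os.path.join(folder_path, name), 'r') as f:
                json.load(f)
        except json.JSONDecodeError as e:
            print(f'{name}: {e}')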

    • Update: one of the files had a different format than the rest and could not be read correctly, so I removed it. Now I get a different error:
---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
Cell In[1], line 20
     17     return pd.concat(dataframes, ignore_index=True)
     19 folder_path = 'C:/Users/gusta/Desktop/business/Emprendimiento'
---> 20 combined_dataframe = read_json_files(folder_path)

Cell In[1], line 17, in read_json_files(folder_path)
     15         df = load_json_to_dataframe(os.path.join(folder_path, json_file))
     16         dataframes.append(df)
---> 17 return pd.concat(dataframes, ignore_index=True)

File c:\Users\gusta\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\util\_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
    325 if len(args) > num_allow_args:
    326     warnings.warn(
    327         msg.format(arguments=_format_argument_list(allow_args)),
    328         FutureWarning,
    329         stacklevel=find_stack_level(),
    330     )
--> 331 return func(*args, **kwargs)

File c:\Users\gusta\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\reshape\concat.py:381, in concat(objs, axis, join, ignore_index, keys, levels, names, verify_integrity, sort, copy)
    159 """
    160 Concatenate pandas objects along a particular axis.
    161 
...
    186 return self._blknos

File c:\Users\gusta\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_libs\internals.pyx:718, in pandas._libs.internals.BlockManager._rebuild_blknos_and_blklocs()

MemoryError: Unable to allocate 966. KiB for an array with shape (123696,) and data type int64

rekjcdws · Answer 1

In the end I found the solution with the code below. The main problems were:
1. I wasn't specifying the correct layer of the .json files, so loading each one in full consumed too much memory.
2. Because of that, I made the code work on one product category at a time; that way it works.
So I solved it by specifying the correct layer to extract from the .json files instead of loading each whole file and filtering later (illustrated below), and I modified the code to process one product category at a time to improve memory usage.
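
For context, a minimal sketch of what "specifying the layer" looks like with pandas: if the records are nested under a top-level key, passing record_path to json_normalize extracts only that list instead of materializing the whole document. The file name and the 'results' key below are hypothetical, not necessarily the structure of these files:

import json
import pandas as pd

with open('example.json', 'r') as f:  # hypothetical file name
    doc = json.load(f)

# Flatten only the nested list under the (assumed) "results" key,
# rather than normalizing the entire document
records = pd.json_normalize(doc, record_path='results')

The full solution code: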

import pandas as pd
import json
import os

# Function to load a JSON file into a Pandas DataFrame
def load_json_to_dataframe(json_file_path):
    with open(json_file_path, 'r') as json_file:
        # Load JSON file into a Python dictionary
        doc = json.load(json_file)
        # Extract the creation time of the file
        file_creation_time = os.path.getctime(json_file_path)
        # Convert the creation time to a datetime object
        file_creation_time = pd.to_datetime(file_creation_time, unit='s')
        # Flatten the JSON document into a tabular DataFrame
        df = pd.json_normalize(doc, meta=['id', 'title', 'condition', 'permalink',
                                          'category_id', 'domain_id', 'thumbnail',
                                          'currency_id', 'price', 'sold_quantity',
                                          'available_quantity', ['seller', 'id'],
                                          ['seller', 'nickname'], ['seller', 'permalink'],
                                          ['address', 'state_name'], ['address', 'city_name']])
        # Keep only the columns of interest
        df = df[['id', 'title', 'condition', 'permalink', 'category_id', 'domain_id',
                 'thumbnail', 'currency_id', 'price', 'sold_quantity', 'available_quantity',
                 'seller.id', 'seller.nickname', 'seller.permalink', 'address.state_name',
                 'address.city_name']]
        # Append the file creation time as the last column
        df['file_creation_time'] = file_creation_time
        return df

# Function to read multiple JSON files into a single Pandas DataFrame
def read_json_files(folder_path, categories=None, batch_size=1000):
    if categories is None:
        # If no categories are specified, read all files that end in '.json'
        json_files = [f for f in os.listdir(folder_path) if f.endswith('.json')]
    else:
        # If categories are specified, read only files that correspond to those categories
        json_files = [f for f in os.listdir(folder_path) if f.endswith('.json') and any(category in f for category in categories)]
    # Split the list of files into batches of a given size
    batches = [json_files[i:i+batch_size] for i in range(0, len(json_files), batch_size)]
    # Read each batch of files into a list of DataFrames
    dfs = []
    for batch in batches:
        batch_dfs = [load_json_to_dataframe(os.path.join(folder_path, f)) for f in batch]
        dfs.append(pd.concat(batch_dfs, ignore_index=True))
    # Concatenate all DataFrames into a single DataFrame
    return pd.concat(dfs, ignore_index=True)

# Specify the categories of files to read and the folder path
categories = ['MLC4922.json']
folder_path = 'C:/path/to/folder/files'

# Read the JSON files into a single DataFrame
combined_dataframe = read_json_files(folder_path, categories)
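
If more than one category needs to be processed, a memory-friendly pattern is to handle each category separately and write its result to disk before moving on, so only one category's data is held in memory at a time. A sketch under that assumption; the second category name and the CSV output are hypothetical:

# Process each category on its own and persist it to disk
all_categories = ['MLC4922.json', 'MLC1055.json']  # 'MLC1055.json' is hypothetical
for category in all_categories:
    category_df = read_json_files(folder_path, categories=[category])
    # Write the category to CSV, then drop it to free memory
    category_df.to_csv(category.replace('.json', '') + '.csv', index=False)
    del category_df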
