I have the following Python script to view 10 rows of JSON data in normalized form:
import pandas as pd
from openpyxl.workbook import Workbook
import csv
from pathlib import Path
from pandas.io.json import json_normalize
import json
from datetime import datetime
from datetime import date
from datetime import timedelta
import psycopg2
from psycopg2 import OperationalError
# Import our files
import pg # Various functions to interface with the Postgres servers
from db_creds import * # The DB server and user creds
#try:
# Connect to an existing database
connection = pg.create_connection(sourceDB_setting[3], sourceDB_setting[5], sourceDB_setting[6], sourceDB_setting[1], sourceDB_setting[2])
#Create a cursor to perform database operations
cursor = connection.cursor()
cursor.execute("SELECT application_data, id FROM ssap_applications LIMIT 10;")
results = cursor.fetchall()
for row in results:
    jrec, app_id = row
    # Process each row here
    #print(jrec)
    jrec = json.loads(jrec)
    normal_json = pd.json_normalize(jrec)
    print(normal_json)
    # save to csv
    normal_json.to_csv('App_data2.csv', index=False, encoding='utf-8')
cursor.close()
This is the output I get on my PyCharm screen: Output. I want to export these 10 records to a CSV file, but so far with this code, normal_json.to_csv('App_data2.csv', index=False, encoding='utf-8'), I have only been able to export one record, so I would like to know how to fix my script so that it exports the 10 records, or all of the records?
1 Answer
You are exporting all 10 records, but you are rewriting the file each time, so each record overwrites the previous one. The to_csv method takes a mode argument, exactly like open, which lets you specify append mode.
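Inside your loop that change would look roughly like this (a sketch only; note that a plain append writes the header row once per record, which the fuller examples below take care of):

# inside the for loop: append to the CSV instead of overwriting it
normal_json.to_csv('App_data2.csv', mode='a', index=False, encoding='utf-8')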
I can already predict the next question: but if I run it a few times in a row, the file just keeps getting longer and longer. One option is to erase the file before you start:
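For example, a minimal sketch (it reuses results, pd and json from your script, truncates the file once with pathlib before the loop, and writes the header row only while the file is still empty):

from pathlib import Path

out_file = Path('App_data2.csv')
out_file.write_text('')  # erase anything left over from a previous run

for row in results:
    jrec, app_id = row
    normal_json = pd.json_normalize(json.loads(jrec))
    # append each record; write the header only while the file is still empty
    normal_json.to_csv(out_file, mode='a', index=False, encoding='utf-8',
                       header=not out_file.stat().st_size)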
Another option is to open the file yourself and pass the open file handle to pandas:
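Something along these lines (again just a sketch; opening the file in 'w' mode truncates it at the start, so repeated runs do not keep growing it, and newline='' is the recommended way to hand an open text file to to_csv):

# open the file once and pass the handle to pandas; each record is appended
# to that handle, and the header is written only for the first record
with open('App_data2.csv', 'w', encoding='utf-8', newline='') as f:
    for i, row in enumerate(results):
        jrec, app_id = row
        normal_json = pd.json_normalize(json.loads(jrec))
        normal_json.to_csv(f, index=False, header=(i == 0))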