I am trying to convert a Pandas DataFrame into nested JSON, and the .to_json() function does not give me enough flexibility.
Here are a few data points from the DataFrame (in CSV format, comma-separated):
,ID,Location,Country,Latitude,Longitude,timestamp,tide
0,1,BREST,FRA,48.383,-4.495,1807-01-01,6905.0
1,1,BREST,FRA,48.383,-4.495,1807-02-01,6931.0
2,1,BREST,FRA,48.383,-4.495,1807-03-01,6896.0
3,1,BREST,FRA,48.383,-4.495,1807-04-01,6953.0
4,1,BREST,FRA,48.383,-4.495,1807-05-01,7043.0
2508,7,CUXHAVEN 2,DEU,53.867,8.717,1843-01-01,7093.0
2509,7,CUXHAVEN 2,DEU,53.867,8.717,1843-02-01,6688.0
2510,7,CUXHAVEN 2,DEU,53.867,8.717,1843-03-01,6493.0
2511,7,CUXHAVEN 2,DEU,53.867,8.717,1843-04-01,6723.0
2512,7,CUXHAVEN 2,DEU,53.867,8.717,1843-05-01,6533.0
4525,9,MAASSLUIS,NLD,51.918,4.25,1848-02-01,6880.0
4526,9,MAASSLUIS,NLD,51.918,4.25,1848-03-01,6700.0
4527,9,MAASSLUIS,NLD,51.918,4.25,1848-04-01,6775.0
4528,9,MAASSLUIS,NLD,51.918,4.25,1848-05-01,6580.0
4529,9,MAASSLUIS,NLD,51.918,4.25,1848-06-01,6685.0
6540,8,WISMAR 2,DEU,53.898999999999994,11.458,1848-07-01,6957.0
6541,8,WISMAR 2,DEU,53.898999999999994,11.458,1848-08-01,6944.0
6542,8,WISMAR 2,DEU,53.898999999999994,11.458,1848-09-01,7084.0
6543,8,WISMAR 2,DEU,53.898999999999994,11.458,1848-10-01,6898.0
6544,8,WISMAR 2,DEU,53.898999999999994,11.458,1848-11-01,6859.0
8538,10,SAN FRANCISCO,USA,37.806999999999995,-122.465,1854-07-01,6909.0
8539,10,SAN FRANCISCO,USA,37.806999999999995,-122.465,1854-08-01,6940.0
8540,10,SAN FRANCISCO,USA,37.806999999999995,-122.465,1854-09-01,6961.0
8541,10,SAN FRANCISCO,USA,37.806999999999995,-122.465,1854-10-01,6952.0
8542,10,SAN FRANCISCO,USA,37.806999999999995,-122.465,1854-11-01,6952.0
There is a lot of repeated information, and I would like to end up with JSON like this instead:
[
  {
    "ID": 1,
    "Location": "BREST",
    "Latitude": 48.383,
    "Longitude": -4.495,
    "Country": "FRA",
    "Tide-Data": {
      "1807-02-01": 6931,
      "1807-03-01": 6896,
      "1807-04-01": 6953,
      "1807-05-01": 7043
    }
  },
  {
    "ID": 5,
    "Location": "HOLYHEAD",
    "Latitude": 53.31399999999999,
    "Longitude": -4.62,
    "Country": "GBR",
    "Tide-Data": {
      "1807-02-01": 6931,
      "1807-03-01": 6896,
      "1807-04-01": 6953,
      "1807-05-01": 7043
    }
  }
]
How can I achieve this?
Code to reproduce the DataFrame:
# imports needed below
import json
import pandas as pd

# input json
json_str = '[{"ID":1,"Location":"BREST","Country":"FRA","Latitude":48.383,"Longitude":-4.495,"timestamp":"1807-01-01","tide":6905},{"ID":1,"Location":"BREST","Country":"FRA","Latitude":48.383,"Longitude":-4.495,"timestamp":"1807-02-01","tide":6931},{"ID":1,"Location":"BREST","Country":"DEU","Latitude":48.383,"Longitude":-4.495,"timestamp":"1807-03-01","tide":6896},{"ID":7,"Location":"CUXHAVEN 2","Country":"DEU","Latitude":53.867,"Longitude":-8.717,"timestamp":"1843-01-01","tide":7093},{"ID":7,"Location":"CUXHAVEN 2","Country":"DEU","Latitude":53.867,"Longitude":-8.717,"timestamp":"1843-02-01","tide":6688},{"ID":7,"Location":"CUXHAVEN 2","Country":"DEU","Latitude":53.867,"Longitude":-8.717,"timestamp":"1843-03-01","tide":6493}]'
# load json object
data_list = json.loads(json_str)
# create dataframe
df = pd.json_normalize(data_list)
2 Answers
6ojccjat 1#
Update:

Result (formatted):

Old answer:
You can do this with the groupby(), apply() and to_json() methods; a sketch of that combination and the output it produces follow below.
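A minimal sketch of one way to combine groupby(), apply() and to_json(), using the column names from the question and the df built above; it is an illustration of the idea, not necessarily the original answer's exact code:

import json

# group on the station-level columns, collapse each group's timestamp/tide
# pairs into a dict, then serialize the result as a list of JSON records
cols = ['ID', 'Location', 'Country', 'Latitude', 'Longitude']
nested = (
    df.groupby(cols)
      .apply(lambda g: dict(zip(g['timestamp'], g['tide'])))
      .reset_index()
      .rename(columns={0: 'Tide-Data'})
)
j = nested.to_json(orient='records')
# pretty-print for inspection
print(json.dumps(json.loads(j), indent=2))

Each record then carries the station columns once, with Tide-Data holding the timestamp-to-tide mapping, as in the desired JSON above.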
PS: if you don't care about the identifiers, you can write the result straight to a JSON file; see the sketch below.
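A sketch of the same pipeline writing directly to disk; the file name 'tide.json' is only an example:

# same pipeline as above, but let to_json() write straight to a file
(
    df.groupby(['ID', 'Location', 'Country', 'Latitude', 'Longitude'])
      .apply(lambda g: dict(zip(g['timestamp'], g['tide'])))
      .reset_index()
      .rename(columns={0: 'Tide-Data'})
      .to_json('tide.json', orient='records')
)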
ibps3vxo 2#
groupby.apply forces a data operation on every group just to build the nested structure, which makes it very slow. A simple for-loop approach that builds the nested structure with itertuples and a list comprehension and then serializes it with json.dumps is much faster (see the sketch after this paragraph). It is especially useful when the groups are relatively small, because groupby.apply is particularly slow for such groups.¹
Note that MaxU's groupby.apply approach should be changed slightly (the lambda passed to apply needs to be a little different) to produce the expected output. For the given input, both approaches then produce the same nested output.
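A minimal sketch of that loop approach, again assuming the column names from the question and the df built above; the exact code of the original answer may differ:

import json
from collections import defaultdict

# single pass over the rows: map each station's metadata to its readings
tide_by_station = defaultdict(dict)
for row in df.itertuples(index=False):
    key = (row.ID, row.Location, row.Country, row.Latitude, row.Longitude)
    tide_by_station[key][row.timestamp] = row.tide

# list comprehension that builds one nested record per station;
# cast the numpy integer ID to a plain int so json.dumps can serialize it
records = [
    {
        'ID': int(ID),
        'Location': location,
        'Country': country,
        'Latitude': latitude,
        'Longitude': longitude,
        'Tide-Data': tides,
    }
    for (ID, location, country, latitude, longitude), tides in tide_by_station.items()
]
j = json.dumps(records)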
¹ Benchmark results: on a frame with 100k rows, the loop approach is roughly 50 times faster than the groupby.apply approach when each group is relatively small. When the groups are large, the difference is much smaller, but the loop implementation is still faster than groupby.apply; a sketch of how such a comparison can be run follows.
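A hedged sketch of setting up such a timing comparison with timeit; the synthetic frame below (100k rows, about 5 rows per group) is an assumed setup, not the original benchmark:

import json
import timeit
from collections import defaultdict

import numpy as np
import pandas as pd

# synthetic frame: 100k rows split into many small groups (assumed setup)
n = 100_000
big = pd.DataFrame({
    'ID': np.arange(n) // 5,
    'Location': 'LOC',
    'Country': 'XXX',
    'Latitude': 1.0,
    'Longitude': 2.0,
    'timestamp': np.arange(n).astype(str),
    'tide': 6900.0,
})
cols = ['ID', 'Location', 'Country', 'Latitude', 'Longitude']

def loop_json(frame):
    # build the nested structure in one pass, then serialize with json.dumps
    nested = defaultdict(dict)
    for row in frame.itertuples(index=False):
        key = (int(row.ID), row.Location, row.Country, row.Latitude, row.Longitude)
        nested[key][row.timestamp] = row.tide
    return json.dumps(
        [dict(zip(cols, key), **{'Tide-Data': tides}) for key, tides in nested.items()]
    )

def groupby_json(frame):
    # groupby.apply variant for comparison
    return (frame.groupby(cols)
                 .apply(lambda g: dict(zip(g['timestamp'], g['tide'])))
                 .reset_index()
                 .rename(columns={0: 'Tide-Data'})
                 .to_json(orient='records'))

print('loop:   ', timeit.timeit(lambda: loop_json(big), number=3))
print('groupby:', timeit.timeit(lambda: groupby_json(big), number=3))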