I created a DataFrame in PySpark as follows:
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.getOrCreate()

data_1 = [
    ("rule1", "", "1", "2", "3", "4"),
    ("rule2", "1", "3", "5", "6", "4"),
    ("rule3", "", "0", "1", "2", "5"),
    ("rule4", "0", "1", "3", "6", "2"),
]
schema = StructType(
    [
        StructField("_c0", StringType(), True),
        StructField("para1", StringType(), True),
        StructField("para2", StringType(), True),
        StructField("para3", StringType(), True),
        StructField("para4", StringType(), True),
        StructField("para5", StringType(), True),
    ]
)
df = spark.createDataFrame(data=data_1, schema=schema)
This gives:
+-----+-----+-----+-----+-----+-----+
|_c0 |para1|para2|para3|para4|para5|
+-----+-----+-----+-----+-----+-----+
|rule1| |1 |2 |3 |4 |
|rule2|1 |3 |5 |6 |4 |
|rule3| |0 |1 |2 |5 |
|rule4|0 |1 |3 |6 |2 |
+-----+-----+-----+-----+-----+-----+
I want to convert it into a dictionary like this:
dict = {'rule1': {'para2': '1', 'para3': '2', 'para4': '3', 'para5': '4'},
        'rule2': {'para1': '1', 'para2': '3', 'para3': '5', 'para4': '6', 'para5': '4'}, ...}
Columns with an empty string "" value should not appear in the final dictionary; for example, 'para1' is absent from the inner dictionary for 'rule1', while all the other columns are present.
I tried this as an initial attempt, but it does not give what I want:
dict1 = df.rdd.map(lambda row: row.asDict()).collect()
final_dict = {d['_c0']: d[col] for d in dict1 for col in df.columns}
# Returns {'rule1': '4', 'rule2': '4', 'rule3': '5', 'rule4': '2'}
Each rule key gets overwritten on every pass over the columns, so only the last column's value survives.
1 Answer
You can try the following nested dictionary comprehension:
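A minimal sketch of such a comprehension, assuming the df defined above; it reuses the collect()-based approach from the question and simply skips any column whose value is an empty string (the names rows and final_dict are illustrative):

# Bring the rows to the driver as plain Python dicts
rows = df.rdd.map(lambda row: row.asDict()).collect()

# Outer comprehension keys on the rule name (_c0);
# inner comprehension keeps only non-empty parameter columns
final_dict = {
    d["_c0"]: {col: d[col] for col in df.columns if col != "_c0" and d[col] != ""}
    for d in rows
}

print(final_dict)
# {'rule1': {'para2': '1', 'para3': '2', 'para4': '3', 'para5': '4'},
#  'rule2': {'para1': '1', 'para2': '3', 'para3': '5', 'para4': '6', 'para5': '4'},
#  'rule3': {'para2': '0', 'para3': '1', 'para4': '2', 'para5': '5'},
#  'rule4': {'para1': '0', 'para2': '1', 'para3': '3', 'para4': '6', 'para5': '2'}}

Note that this collects the whole DataFrame to the driver, which is fine for a small rule table like this one but should be avoided for very large DataFrames.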