Removing selected JSON objects from a JSON array using PySpark

vawmfj5a asked on 2021-07-13 in Spark

I want to remove several JSON objects from a JSON array. My source JSON has the format shown below. I have a Python list of the device IDs that should be kept in the array; all other objects should be removed. For example, the source JSON contains three dev_id values: 100010100, 200020200 and 300030300.
Given the Python list device_id_list = [200020200, 300030300], the final JSON array should contain only two objects; the object with dev_id = 100010100 is dropped, as shown in the output JSON.
I tried an approach that is probably not optimal: reading the JSON as plain text rather than as JSON, as shown below.

df = spark.read.text("path\\iot-sensor.json")
df:pyspark.sql.dataframe.DataFrame
value:string

I wrote a UDF that drops the objects whose dev_id is not present in device_id_list. It removes the entries with unwanted dev_id values and returns the JSON as a string.
I want this string DataFrame df2 converted back to JSON using the same schema as the source (df2: pyspark.sql.dataframe.DataFrame = [iot_station: array] (source schema)), because the source and output JSON must have identical schemas. If there is a better solution, please share it.
UDF:

import json

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def drop_dev_id(jsonResponse, dict_keys):
    try:
        data = json.loads(jsonResponse)
        i = 0
        n = len(data['iot_station'])
        # Walk the array in place, popping entries whose dev_id is not wanted
        while i < n:
            if data['iot_station'][i]["dev_id"] not in dict_keys:
                data['iot_station'].pop(i)
                n -= 1
            else:
                i += 1
        # Return a JSON string, not the dict itself: the UDF is declared
        # StringType, and str(dict) would yield a Python repr, not valid JSON.
        return json.dumps(data)

    except Exception as e:
        print('Exception --> ' + str(e))

def drop_dev_id_udf(dict_keys):
    return udf(lambda row: drop_dev_id(row, dict_keys), StringType())
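As an aside, the pop-inside-a-while loop mutates the list while indexing it, which is easy to get wrong. The same filtering reads more simply as a list comprehension; a minimal pure-Python sketch of the equivalent logic:

```python
import json

def drop_dev_id(json_response, keep_ids):
    """Keep only the iot_station entries whose dev_id is in keep_ids."""
    data = json.loads(json_response)
    data["iot_station"] = [
        dev for dev in data["iot_station"] if dev["dev_id"] in keep_ids
    ]
    # Serialize back to a JSON string for the StringType UDF
    return json.dumps(data)
```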

df2 = df.select('value',drop_dev_id_udf(dict_keys)('value')).select('<lambda>(value)')
df2:pyspark.sql.dataframe.DataFrame
<lambda>(value):string

Source JSON:

{
  "iot_station": [
    {
      "dev_id": 100010100,
      "device1": dev_val1,
      "device2": "dev_val2",
      "device3": dev_val3,
      "device4": "dev_val4",
      "stationid": [
        {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
        }
      ],
      "geospat": {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
      }
    },
    {
      "dev_id": 200020200,      
      "device1": dev_val1,
      "device2": "dev_val2",
      "device3": dev_val3,
      "device4": "dev_val4",
      "stationid": [
        {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
        }
      ],
      "geospat": {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
    }
    },
    {
      "dev_id": 300030300,      
      "device1": dev_val1,
      "device2": "dev_val2",
      "device3": dev_val3,
      "device4": "dev_val4",
      "stationid": [
        {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
        }
      ],
      "geospat": {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
    }
    }
    ]
}

Output JSON:

{
  "iot_station": [
    {
      "dev_id": 200020200,      
      "device1": dev_val1,
      "device2": "dev_val2",
      "device3": dev_val3,
      "device4": "dev_val4",
      "stationid": [
        {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
        }
      ],
      "geospat": {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
    }
    },
    {
      "dev_id": 300030300,      
      "device1": dev_val1,
      "device2": "dev_val2",
      "device3": dev_val3,
      "device4": "dev_val4",
      "stationid": [
        {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
        }
      ],
      "geospat": {
          "id": id_val,
          "idrs": idrs_val,
          "idrq": "idrq_val",
          "idrx": "idrx_val"
    }
    }
    ]
}

swvgeqrz1#

You don't need a UDF to achieve this. Just load the file as regular JSON instead of text, and use the `filter` higher-order function on the array column `iot_station`:

from pyspark.sql import functions as F

df = spark.read.json("path/iot-sensor.json", multiLine=True)

device_id_list = [str(i) for i in [200020200, 300030300]]

df1 = df.withColumn(
    "iot_station",
    F.expr(f"""
        filter(
            iot_station, 
            x -> x.dev_id in ({','.join(device_id_list)})
        )
    """)
)

# check filtered json
df1.select(F.col("iot_station").getItem("dev_id").alias("dev_id")).show(truncate=False)
# +----------------------+
# |dev_id                |
# +----------------------+
# |[200020200, 300030300]|
# +----------------------+
