I need to generate an aggregation that filters on a date field with the following format:
executionDate: 2022-08-22T22:15:55.383+00:00
The value of this field always ends in "+00:00".
I tried the following aggregation, but it does not work:
'executionDate': {
'$lte': datetime.strptime("2022-08-22T22:15:55.383+00:00", "%Y-%m-%dT%H:%M:%S.%f"[:-3]+"00:00"),
'$gte': datetime.strptime("2022-08-22T22:15:55.383+00:00", "%Y-%m-%dT%H:%M:%S.%f"[:-3]+"00:00")
}
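For reference, a minimal sketch of parsing the timestamp string itself (the snippet above slices the format string, `"%Y-%m-%dT%H:%M:%S.%f"[:-3]`, which drops the `.%f` directive and makes `strptime` fail; using the `%z` directive for the offset avoids that):

```python
from datetime import datetime

ts = "2022-08-22T22:15:55.383+00:00"
# %z matches the UTC offset; since Python 3.7 it also accepts the
# colon-separated "+00:00" form used by this field.
dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")
print(dt)  # timezone-aware datetime for 2022-08-22 22:15:55.383 UTC
```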
With this pipeline:
{'$match': {'$and': [{'executionDate': {'$gte': datetime.strptime("2022-09-22T22:17:55.383+00:00", "%Y-%m-%dT%H:%M:%S.%f+00:00") }}, ....
I get this error:
pyspark.sql.utils.IllegalArgumentException: requirement failed: Invalid Aggregation map Map(uri -> mongodb://localhost:27017, database -> entity, collection -> status, pipeline -> [{'$match': {'$and': [{'executeDate': {'$gte': datetime.datetime(2022, 9, 22, 22, 17, 55, 383000)}}]):%njava.lang.IllegalArgumentException: requirement failed: Invalid pipeline option: [{'$match': {'$and': [{'executeDate': {'$gte': datetime.datetime(2022, 9, 22, 22, 17, 55, 383000)}}, It should be a list of pipeline stages (Documents) or a single pipeline stage (Document)
1 Answer
It works if we add '$date' in JSON form and convert the date to isoformat!
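A minimal sketch of what that looks like, assuming the pipeline is passed to the connector as a plain document (the field and connector option names are taken from the question): the Spark MongoDB connector rejects Python `datetime` objects inside the pipeline, so the date is written as an extended-JSON `{'$date': ...}` literal instead.

```python
from datetime import datetime

ts = "2022-09-22T22:17:55.383+00:00"
dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

# Extended-JSON date literal: the pipeline stays a plain, serializable
# document instead of embedding a datetime.datetime object.
pipeline = [
    {'$match': {'$and': [
        {'executionDate': {'$gte': {'$date': dt.isoformat()}}},
    ]}},
]
print(pipeline)
```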