How do I read table data from PostgreSQL over JDBC in Azure Databricks?

avwztpqn  posted on 2021-05-27 in Spark

I'm trying to use PySpark to read a PostgreSQL table that lives in an Azure cloud subscription, but I get the error below. I know that a format should also be specified when using the load function, but since this PostgreSQL instance sits in a different Azure subscription, I have no direct access to the PostgreSQL database itself; in that case, how can the schema be inferred? Or is there a better way to read the data from Databricks?

df = spark.read.option("url", "jdbc:postgresql://{hostname}:5432&user={username}&password={xxxxx}&sslmode=require").option("dbtable", {tablename}).load()

Error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
/databricks/spark/python/pyspark/sql/utils.py in deco(*a,**kw)
     62         try:
---> 63             return f(*a,**kw)
     64         except py4j.protocol.Py4JJavaError as e:

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:

Py4JJavaError: An error occurred while calling o1169.load.
: org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:211)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:211)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:210)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:421)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:311)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:297)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:203)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
    at py4j.Gateway.invoke(Gateway.java:295)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:251)
    at java.lang.Thread.run(Thread.java:748)

During handling of the above exception, another exception occurred:

yrdbyhpb1#

The error occurs because the code implicitly assumes the format is parquet: when no format is specified, spark.read falls back to the default data source, and the JDBC-related options you defined are simply ignored by it.
In other words, the structured query does not load the data over JDBC at all.
That is what this part of the error says (almost literally):
org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
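The Parquet default comes from the spark.sql.sources.default configuration, which is parquet out of the box. A one-line check, assuming nobody has overridden it on the cluster:

spark.conf.get("spark.sql.sources.default")  # 'parquet' -- the source used when no format() is given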
If you want to read from a JDBC data source, you should include format("jdbc") in the query:

spark.read.format("jdbc")
