sqlite: populate a SQL database from a Dask DataFrame and dump it to a file

Asked by li9yvcax on 2023-01-13, in SQLite

The error and use case are reproduced in this colab.
I have several large tables that I read and analyze via Dask DataFrames. Once the analysis is done, I want to push them into a local database (in this case, a SQLite engine via the sqlalchemy package).
Here is some dummy data:

import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame([{"i": i, "s": str(i) * 2} for i in range(4)])

ddf = dd.from_pandas(df, npartitions=2)

from dask.utils import tmpfile
from sqlalchemy import create_engine

with tmpfile(
    dir="/outputs/",
    extension="db",
) as f:
    print(f)

    db = f"sqlite:///{f}"

    ddf.to_sql("test_table", db)

    engine = create_engine(
        db,
        echo=False,
    )

    print(dir(engine))
    result = engine.execute("SELECT * FROM test_table").fetchall()

print(result)

However, tmpfile is temporary and nothing is stored on my local drive. I want to dump the database to my local drive, but I can't find any tmpfile argument that makes it persist as a file, nor can I find how to dump my engine.
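One way to "dump" a database that only lives behind an open connection is SQLite's backup API, exposed in the standard library as `sqlite3.Connection.backup`. A minimal sketch (the file name `dump.db` and the table are made up for illustration):

```python
import os
import sqlite3

# Build a throwaway in-memory database, standing in for a temporary one.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (i INTEGER)")
src.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])
src.commit()

# Copy every page of the source database into a file on disk.
dst = sqlite3.connect("dump.db")
src.backup(dst)

print(dst.execute("SELECT * FROM t").fetchall())  # [(1,), (2,)]

src.close()
dst.close()
os.remove("dump.db")  # cleanup for this demo
```

This copies the whole database atomically, so it also works if the source connection is still in use.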

***Update*** If a regular file is used instead, I run into the following error:

return self.dbapi.connect(*cargs, **cparams)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
(Background on this error at: https://sqlalche.me/e/14/e3q8)

Here is the code:

with open(
    "/outputs/hello.db", "wb"
) as f:
    print(f)

    db = f"sqlite:///{f}"

    ddf.to_sql("test_table", db, if_exists="replace")

    engine = create_engine(
        db,
        echo=False,
    )

    print(dir(engine))
    result = engine.execute("SELECT * FROM test_table").fetchall()

print(result)
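The error above comes from the URL, not from Dask: interpolating the open file object `f` into the f-string embeds its repr (something like `<_io.BufferedWriter name='/outputs/hello.db'>`) rather than the path, so SQLite is asked to open a nonsense file. A minimal sketch of the difference, using the stdlib `sqlite3` module and a made-up local path:

```python
import os
import sqlite3

path = "hello_demo.db"  # hypothetical local path

# Interpolating an open file handle yields its repr, not its path:
with open(path, "wb") as f:
    bad_url = f"sqlite:///{f}"
print(bad_url)  # e.g. sqlite:///<_io.BufferedWriter name='hello_demo.db'>

# Pass the path string instead; SQLite creates/opens the file itself,
# so no open() call (and no context manager) is needed:
good_url = f"sqlite:///{path}"

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (i INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
print(conn.execute("SELECT * FROM t").fetchall())  # [(1,)]
conn.close()
os.remove(path)  # cleanup for this demo
```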

Answer #1, by 44u64gxh:

If you want to save to a regular file, there is no need for a context manager:

import dask.dataframe as dd
import pandas as pd

df = pd.DataFrame([{"i": i, "s": str(i) * 2} for i in range(4)])
ddf = dd.from_pandas(df, npartitions=2)

OUT_FILE = "test.db"
db = f"sqlite:///{OUT_FILE}"

ddf.to_sql("test_table", db)

To verify that the file was saved, run:

from sqlalchemy import create_engine

engine = create_engine(
    db,
    echo=False,
)

result = engine.execute("SELECT * FROM test_table").fetchall()

print(result)
# [(0, 0, '00'), (1, 1, '11'), (2, 2, '22'), (3, 3, '33')]
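Independently of SQLAlchemy, the saved file can also be read back with `pandas.read_sql` over a plain `sqlite3` connection, which confirms the data really lives on disk. A self-contained sketch that recreates a table shaped like the answer's `test_table` (the file name `test_check.db` is made up for this demo):

```python
import os
import sqlite3

import pandas as pd

path = "test_check.db"  # hypothetical stand-in for the answer's test.db
conn = sqlite3.connect(path)
# "index" is a keyword in SQLite, so it must be quoted as a column name.
conn.execute('CREATE TABLE test_table ("index" INTEGER, i INTEGER, s TEXT)')
conn.executemany(
    "INSERT INTO test_table VALUES (?, ?, ?)",
    [(i, i, str(i) * 2) for i in range(4)],
)
conn.commit()

# Read the table back through pandas over the same DBAPI connection.
out = pd.read_sql("SELECT * FROM test_table", conn)
print(out.shape)  # (4, 3)
conn.close()
os.remove(path)  # cleanup for this demo
```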
