pandas: Scraping a table into a DataFrame using BeautifulSoup

mefy6pfw  posted on 2023-06-20  in Other

I'm trying to extract data from a coin catalogue. From one of the pages I need to scrape this data into a DataFrame.
So far I have this code:

import bs4 as bs
import urllib.request
import pandas as pd

source = urllib.request.urlopen('http://www.gcoins.net/en/catalog/view/45518').read()
soup = bs.BeautifulSoup(source,'lxml')

table = soup.find('table', attrs={'class':'subs noBorders evenRows'})
table_rows = table.find_all('tr')

for tr in table_rows:
    td = tr.find_all('td')
    row = [tr.text for tr in td]
    print(row)                    # I need to save this data instead of printing it

It produces the following output:

[]
['', '', '1882', '', '108,000', 'UNC', '—']
[' ', '', '1883', '', '786,000', 'UNC', '~ $3.99']
[' ', " \n\n\n\n\t\t\t\t\t\t\t$('subGraph55337').on('click', function(event) {\n\t\t\t\t\t\t\t\tLightview.show({\n\t\t\t\t\t\t\t\t\thref : '/en/catalog/ajax/subgraph?id=55337',\n\t\t\t\t\t\t\t\t\trel : 'ajax',\n\t\t\t\t\t\t\t\t\toptions : {\n\t\t\t\t\t\t\t\t\t\tautosize : true,\n\t\t\t\t\t\t\t\t\t\ttopclose : true,\n\t\t\t\t\t\t\t\t\t\tajax : {\n\t\t\t\t\t\t\t\t\t\t\tevalScripts : true\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t} \n\t\t\t\t\t\t\t\t});\n\t\t\t\t\t\t\t\tevent.stop();\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t});\n\t\t\t\t\t\t", '1884', '', '4,604,000', 'UNC', '~ $2.08–$4.47']
[' ', '', '1885', '', '1,314,000', 'UNC', '~ $3.20']
['', '', '1886', '', '444,000', 'UNC', '—']
[' ', '', '1888', '', '413,000', 'UNC', '~ $2.88']
[' ', '', '1889', '', '568,000', 'UNC', '~ $2.56']
[' ', " \n\n\n\n\t\t\t\t\t\t\t$('subGraph55342').on('click', function(event) {\n\t\t\t\t\t\t\t\tLightview.show({\n\t\t\t\t\t\t\t\t\thref : '/en/catalog/ajax/subgraph?id=55342',\n\t\t\t\t\t\t\t\t\trel : 'ajax',\n\t\t\t\t\t\t\t\t\toptions : {\n\t\t\t\t\t\t\t\t\t\tautosize : true,\n\t\t\t\t\t\t\t\t\t\ttopclose : true,\n\t\t\t\t\t\t\t\t\t\tajax : {\n\t\t\t\t\t\t\t\t\t\t\tevalScripts : true\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t} \n\t\t\t\t\t\t\t\t});\n\t\t\t\t\t\t\t\tevent.stop();\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t});\n\t\t\t\t\t\t", '1890', '', '2,137,000', 'UNC', '~ $1.28–$4.79']
['', '', '1891', '', '605,000', 'UNC', '—']
[' ', '', '1892', '', '205,000', 'UNC', '~ $4.47']
[' ', '', '1893', '', '754,000', 'UNC', '~ $4.79']
[' ', '', '1894', '', '532,000', 'UNC', '~ $3.20']
[' ', '', '1895', '', '423,000', 'UNC', '~ $2.40']
['', '', '1896', '', '174,000', 'UNC', '—']

But when I try to save it into a DataFrame and export it to Excel, it contains only the last row:

0
0         
1         
2     1896
3         
4  174,000
5      UNC
6        —
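
The usual cause of this symptom is building the DataFrame from the loop's last row variable instead of from a list that accumulates every row. The export code isn't shown in the question, but a likely reconstruction looks like this sketch:

# Assumed reconstruction of the failing export - `row` is overwritten on
# every iteration, so only the final row survives the loop.
for tr in table_rows:
    td = tr.find_all('td')
    row = [d.text for d in td]

df = pd.DataFrame(row)            # holds only the last row
df.to_excel('coins.xlsx')         # hypothetical output path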

k4ymrczo1#

Pandas already has a built-in method to convert tables on the web into DataFrames:

table = soup.find_all('table')
df = pd.read_html(str(table))[0]
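
A fuller sketch of this approach, combined with the fetch code from the question (narrowed to the one pricing table; the [0] picks the first table that read_html finds):

import bs4 as bs
import urllib.request
import pandas as pd

source = urllib.request.urlopen('http://www.gcoins.net/en/catalog/view/45518').read()
soup = bs.BeautifulSoup(source, 'lxml')

# Restrict the search to the pricing table, then let pandas parse it.
table = soup.find('table', attrs={'class': 'subs noBorders evenRows'})
df = pd.read_html(str(table))[0]
print(df.head())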

lb3vh1jj2#

Try this:

l = []
for tr in table_rows:
    td = tr.find_all('td')
    row = [tr.text for tr in td]
    l.append(row)
pd.DataFrame(l, columns=["A", "B", ...])
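
A concrete version of that idea, with the header row skipped; the column names here are only guesses based on the seven cells visible in the question's output:

l = []
for tr in table_rows:
    td = tr.find_all('td')
    if not td:                    # the header row has no <td> cells - skip it
        continue
    l.append([d.text.strip() for d in td])

# Hypothetical column names - rename to whatever the table actually contains.
df = pd.DataFrame(l, columns=["Img", "Graph", "Year", "Note", "Mintage", "Quality", "Price"])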

fkvaft9z3#

Try:

import pandas as pd
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "html.parser")   # html: the page source fetched in the question
table = soup.find('table', attrs={'class':'subs noBorders evenRows'})
table_rows = table.find_all('tr')

res = []
for tr in table_rows:
    td = tr.find_all('td')
    row = [tr.text.strip() for tr in td if tr.text.strip()]
    if row:
        res.append(row)

df = pd.DataFrame(res, columns=["Year", "Mintage", "Quality", "Price"])
print(df)

Output:

Year  Mintage Quality    Price
0  1882  108,000     UNC        —
1  1883  786,000     UNC  ~ $4.03
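
Since the original goal was to export to Excel, the resulting frame can then be written out; a minimal follow-up (to_excel needs an engine such as openpyxl installed, and the filename is just an example):

df.to_excel('coins.xlsx', index=False)   # index=False drops the 0,1,2,... row labels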

ogsagwnx4#

There is no need for BeautifulSoup at all. If you just want to extract HTML tables into DataFrames, simply use

dfs = pd.read_html(url)

where url is the actual website URL (i.e. 'http://www.gcoins.net/en/catalog/view/45518').
The pandas function automatically parses the page and returns a list of DataFrame objects created from the tables in the HTML.
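
Because the page can contain several tables, you would normally index into that list or narrow the search. A sketch using the attrs filter, assuming the class attribute matches exactly as shown in the question:

import pandas as pd

url = 'http://www.gcoins.net/en/catalog/view/45518'

# attrs filters the tables by their HTML attributes before parsing.
dfs = pd.read_html(url, attrs={'class': 'subs noBorders evenRows'})
df = dfs[0]
print(df.head())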


okxuctiv5#

Just a heads-up... this part of Rakesh's code means that only HTML rows containing text end up in the DataFrame, because a row is not appended if it is an empty list:

if row:
    res.append(row)

That was a problem for my use case, where I wanted to compare the row indices of the HTML table and the DataFrame later on. I simply had to change it to:

res.append(row)

Also, if a cell within a row is empty, it is not included, which then misaligns the columns. So I changed

row = [tr.text.strip() for tr in td if tr.text.strip()]
to
row = [d.text.strip() for d in td]

Other than that, though, it worked well for me. Thanks :)
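
Putting both of those changes together, a sketch of the loop that keeps every row and every cell, so the DataFrame's row indices line up with the HTML table:

res = []
for tr in table_rows:
    td = tr.find_all('td')
    res.append([d.text.strip() for d in td])   # keep empty cells and empty rows

# Without explicit column names, shorter rows are simply padded with NaN.
df = pd.DataFrame(res)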


vom3gejh6#

Since pandas has a built-in parser and a method for converting tables on the web into DataFrames, you can also feed a BeautifulSoup table element's prettify() output into pandas' read_html method to get the DataFrame(s) from that element:

table_elem = soup.find('table')
df = pd.read_html(table_elem.prettify())[0]
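
One caveat worth knowing: pandas 2.1+ emits a FutureWarning when read_html is given a literal HTML string and recommends a file-like object instead, so on recent versions the call becomes:

from io import StringIO

df = pd.read_html(StringIO(table_elem.prettify()))[0]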
