pandas - How to split data from one website's tables into different outputs using Python

egdjgwm8 posted on 2022-12-17 in Python

I'm working on a fun project collecting wave data in New Jersey -- I want to scrape this site every day for the upcoming year and look at the overall trends.
My first step is setting up the scrape. Right now I'm getting an output that includes what looks like two different tables. Looking at the website, it seems like they may always be in the same tags.
Is there a way to split this output? I'd like to make two different scripts -- one for "tide data" and the other for "wave size" -- but it seems I can't split them apart. (I'm also a total beginner.)
Ideally I'd have two different scripts that I'd automatically push into different tabs of a Google Sheet -- though I think I can handle that part once I get there.

import requests
import pandas as pd
from bs4 import BeautifulSoup

id_list = [
    '/Belmar-Surf-Report/3683',
    '/Manasquan-Surf-Report/386/',
    #     '/Ocean-Grove-Surf-Report/7945/',
    #     '/Asbury-Park-Surf-Report/857/',
    #     '/Avon-Surf-Report/4050/',
    #     '/Bay-Head-Surf-Report/4951/',
    #     '/Belmar-Surf-Report/3683/',
    #     '/Boardwalk-Surf-Report/9183/',
    #     '/Bradley-Beach-Surf-Report/7944/',
    #     '/Casino-Surf-Report/9175/',
    #     '/Deal-Surf-Report/822/',
    #     '/Dog-Park-Surf-Report/9174/',
    #     '/Jenkinsons-Surf-Report/4053/',
    #     '/Long-Branch-Surf-Report/7946/',
    #     '/Long-Branch-Surf-Report/7947/',
    #     '/Manasquan-Surf-Report/386/',
    #     '/Monmouth-Beach-Surf-Report/4055/',
    #     '/Ocean-Grove-Surf-Report/7945/',
    #     '/Point-Pleasant-Surf-Report/7942/',
    #     '/Sea-Girt-Surf-Report/7943/',
    #     '/Spring-Lake-Surf-Report/7941/',
    #     '/The-Cove-Surf-Report/385/',
    #     '/Belmar-Surf-Report/3683/',
    #     '/Avon-Surf-Report/4050/',
    #     '/Deal-Surf-Report/822/',
    #     '/North-Street-Surf-Report/4946/',
    #     '/Margate-Pier-Surf-Report/4054/',
    #     '/Ocean-City-NJ-Surf-Report/391/',
    #     '/7th-St-Surf-Report/7918/',
    #     '/Brigantine-Surf-Report/4747/',
    #     '/Brigantine-Seawall-Surf-Report/4942/',
    #     '/Crystals-Surf-Report/4943/',
    #     '/Longport-32nd-St-Surf-Report/1158/',
    #     '/Margate-Pier-Surf-Report/4054/',
    #     '/North-Street-Surf-Report/4946/',
    #     '/Ocean-City-NJ-Surf-Report/391/',
    #     '/South-Carolina-Ave-Surf-Report/4944/',
    #     '/St-James-Surf-Report/7917/',
    #     '/States-Avenue-Surf-Report/390/',
    #     '/Ventnor-Pier-Surf-Report/4945/',
    #     '/14th-Street-Surf-Report/9055/',
    #     '/18th-St-Surf-Report/9056/',
    #     '/30th-St-Surf-Report/9057/',
    #     '/56th-St-Surf-Report/9059/',
    #     '/Diamond-Beach-Surf-Report/9061/',
    #     '/Strathmere-Surf-Report/7919/',
    #     '/The-Cove-Surf-Report/7921/',
    #     '/14th-Street-Surf-Report/9055/',
    #     '/18th-St-Surf-Report/9056/',
    #     '/30th-St-Surf-Report/9057/',
    #     '/56th-St-Surf-Report/9059/',
    #     '/Avalon-Surf-Report/821/',
    #     '/Diamond-Beach-Surf-Report/9061/',
    #     '/Nuns-Beach-Surf-Report/7948/',
    #     '/Poverty-Beach-Surf-Report/4056/',
    #     '/Sea-Isle-City-Surf-Report/1281/',
    #     '/Stockton-Surf-Report/393/',
    #     '/Stone-Harbor-Surf-Report/7920/',
    #     '/Strathmere-Surf-Report/7919/',
    #     '/The-Cove-Surf-Report/7921/',
    #     '/Wildwood-Surf-Report/392/'
]

headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'
}

for x in id_list:

    url = 'https://magicseaweed.com' + x

    r = requests.get(url, headers=headers)
    try:
        soup = BeautifulSoup(r.text, 'html.parser')
        dfs = pd.read_html(str(soup))
        for df in dfs:
            df['City'] = x
            # df.insert(3, "Source", [x], True)

            print(df)
            if df.shape[0] > 0:
                df.to_csv("njwaves3.csv", mode="a", index=False)
            print('____________')
    except Exception as e:
        print(e)

Here is a sample URL:
https://magicseaweed.com/Wildwood-Surf-Report/392/
This is the table data I want to split -- again, right now I'm getting both tables in a single output; I need one script that pulls all the wave data, and then a separate one that pulls the high/low tide data.

Is this possible? Any insight is much appreciated.
Update ---
I was actually able to scrape these tables quite easily using simple Google Sheets functions.
Examples are on the "Wave Data" and "Tide Data" tabs.
Looking at it from that angle, things work a little differently -- it seems what I really want to do is pull the first and second table from the URL (I think), as sketched below.
This is the ideal data output:
https://docs.google.com/spreadsheets/d/1mbst-uaRGHWG5ReoFfIsazx0kpY7kXKIBqsRswy1y1Q/edit#gid=1611362673
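
Here is a minimal sketch of that idea using only pd.read_html; the table indexes (0 and 1) and output file names are assumptions about how the page parses, not something confirmed by the site:

import requests
import pandas as pd

headers = {'User-Agent': 'Mozilla/5.0'}  # same idea as the headers dict above
url = 'https://magicseaweed.com/Wildwood-Surf-Report/392/'

dfs = pd.read_html(requests.get(url, headers=headers).text)  # list of parsed tables
dfs[0].to_csv('wave_data.csv', index=False)  # assumed: first parsed table = wave data
dfs[1].to_csv('tide_data.csv', index=False)  # assumed: second parsed table = tide data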

oaxa6hgo (answer 1)

Not sure if you still need a solution, or if the Google Sheets approach is enough for you, but here is the method I would suggest.

"I need one script that pulls all the wave data, and then a separate one that pulls the high/low tide data"

You can split these into two different functions:

def getWaveData(mSoup, endPt='UNKNOWN'):  # endPt fills the City column
    wRows = []
    for tb in mSoup.select('.table-forecast > tbody'):
        wDay = tb.select_one('tr.tbody-title > th > h6')
        if wDay: wDay = wDay.get_text(' ')

        for tr in tb.select('tr[data-timestamp]'):
            wDets = [endPt, wDay]
            for td in tr.select('td'):
                td_ot = td.get('data-original-title', '').strip() 
                td_t = td.get('title', '').strip().replace('&deg', '°') 
                td_c = td.get('class', [])

                if td_ot and 'row-title' not in td_c: wDet = td_ot
                elif td_t and 'row-title' not in td_c: wDet = td_t
                elif td.select('ul.rating'): 
                    wDet = ''.join([mark*len(
                        td.select(f'ul.rating > li.{liClass}')
                    ) for liClass, mark in zip([
                        'active', 'inactive', 'placeholder'
                    ], ['*', '.', '_'])]) 
                elif 'table-forecast-wind' in td_c and td.select('.text-left'):
                    wDet = td.select_one('.text-left').get_text(' ') 
                else: wDet = td.get_text(' ')

                wDets.append(wDet)
            if len(wDets) < 19: 
                wDets = wDets[:-5] + [None]*(19 - len(wDets)) + wDets[-5:]
            if len(wDets) > 18: 
                wRows.append(tuple([' '.join(
                    w for w in wd.split() if w # minimize whitespace
                ) if type(wd) == str else wd for wd in wDets[:19]]))
    return wRows

def getTidesData(mSoup, endPt='UNKNOWN'):  # endPt fills the City column
    tRows = []
    for tb in mSoup.select('.table-forecast > tbody'):
        tDay = tb.select_one('tr.tbody-title > th > h6')
        if tDay: tDay = tDay.get_text(' ') 

        rSel = 'tr[data-filter="tides"]'
        cSel = 'td + td:has(+td.text-right), td.text-right'
        for tr in tb.select(f'{rSel}:has(td:is({cSel}))'):
            tDets = [endPt, tDay] + [td.get_text(' ') for td in tr.select(cSel)]

            if len(tDets) < 14:
                tDets = tDets[:-4] + [None]*(14 - len(tDets)) + tDets[-4:]
            if len(tDets) > 13: 
                tRows.append(tuple([' '.join(
                    w for w in t.split() if w # minimize whitespace
                ) if type(t) == str else t for t in tDets[:14]])) 
    return tRows

(The endPt argument fills the City column.) If you want all of the data, this isn't something you can simply use pd.read_html for, because some of it is inside tag attributes, and the tides tables are actually a pair of tables nested inside the last row of the wave table.
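To try the two functions on a single page before running the full loop, a quick sketch might look like this (the endpoint chosen here and the simplified headers dict are just examples):

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}  # same idea as the question's headers
endpoint = '/Belmar-Surf-Report/3683/'
soup = BeautifulSoup(requests.get('https://magicseaweed.com' + endpoint, headers=headers).content, 'html.parser')

wave_rows = getWaveData(soup, endpoint)  # tuples including values read from tag attributes
tide_rows = getTidesData(soup, endpoint)
print(len(wave_rows), 'wave rows,', len(tide_rows), 'tide rows')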
Now you just need to initialize the CSV files (as well as id_list and headers):

import os  # for os.path.isfile below

logFp = 'magicseaweed_scrapeLogs.csv' #<-- set None to just print not save
wavesFp = 'waves.csv' #<-- path to csv with waves data
tidesFp = 'tides.csv' #<-- path to csv with tides data
waveCols = [
    'City', 'Date', 'Time', 'Breaking Wave', # ps/ss = primary/secondary swell
    'Swell Rating', 'ps_height', 'ps_length', 'ps_angle', 'ss1_height', 
    'ss1_length', 'ss1_angle', 'ss2_height', 'ss2_length', 'ss2_angle',
    'Wind Speed', 'Wind Direction', 'Weather', 'Temperature', 'Prob.'
]
tideCols = [
    'City', 'Date', 'High1_time', 'High1_ft', 'Low1_time', 'Low1_ft', 
    'High2_time', 'High2_ft', 'Low2_time', 'Low2_ft', 
    'First Light', 'Sunrise', 'Sunset', 'Last Light'
]

for f, h in [(wavesFp, waveCols), (tidesFp, tideCols)]:
    if not os.path.isfile(f):
        pd.DataFrame([tuple(h)]).to_csv(f, index=False, header=False)

# id_list = [....]
# headers = {....}
root_url, idLen, scrapeLog = 'https://magicseaweed.com', len(id_list), []

logFp/scrapeLog help keep track of which sites it failed to scrape; logFp is also the only CSV that gets overwritten. wavesFp and tidesFp are appended to, which is why they are initialized with the column headers (unless the files already exist): if you use .to_csv with mode='a', you should also pass header=False, unless you want the header row repeated with every batch of rows.
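As a tiny illustration of that append pattern (the file name and row values here are placeholders):

import pandas as pd

pd.DataFrame([('City', 'Date')]).to_csv('demo.csv', index=False, header=False)  # header row, written once
pd.DataFrame([('/Belmar-Surf-Report/3683', 'Saturday')]).to_csv('demo.csv', mode='a', index=False, header=False)  # appended rows, no repeated header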
After initializing, you can go ahead and loop through the links:

for i, x in enumerate(id_list[:5]):
    url = root_url + x
    print('', end=f'\r[{i+1} of {idLen}] scraping {url}')
    try:
        r = requests.get(url, headers=headers)
        r.raise_for_status()  # raises HTTPError on a bad status, which is logged by the except below
    
        soup = BeautifulSoup(r.content, 'html.parser')
        wList, tList = getWaveData(soup, x), getTidesData(soup, x)

        xMsg, xStatus = [], ('success' if wList and tList else 'missing data')
        for f, l in [(wavesFp, wList), (tidesFp, tList)]:
            pd.DataFrame(l).to_csv(f, mode='a', index=False, header=False)
            xMsg.append(f'added {len(l)} rows to {f}') 

        scrapeLog.append({'url':url, 'status':xStatus, 'msg':', '.join(xMsg)})
    except Exception as e:
        scrapeLog.append({'url': url, 'status': 'error', 'msg': f'{type(e)} - {e}'})

print('\n', pd.DataFrame(scrapeLog).to_csv(logFp, index=False))

Note that this code depends on the scraped pages having tables with the same columns as in waveCols and tideCols, in exactly that order.
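
Once the loop has run, the appended CSVs (and the log) can be read back with those header rows intact; a quick check might look like this, assuming the default paths above:

import pandas as pd

waves_df = pd.read_csv('waves.csv')  # columns come from waveCols
tides_df = pd.read_csv('tides.csv')  # columns come from tideCols
log_df = pd.read_csv('magicseaweed_scrapeLogs.csv')

print(waves_df.shape, tides_df.shape)
print(log_df[log_df['status'] != 'success'])  # endpoints that errored or returned missing data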

Spreadsheet with the CSV outputs I got.
