Scraping weather data from multiple pages with pandas

Asked by qco9c6ql on 2023-01-28

I am new to Python.
I want to scrape weather data from the site "http://www.estesparkweather.net/archive_reports.php?date=200901". I have to scrape every available attribute of the daily weather data from 2009-01-01 to 2018-10-28 and represent the scraped data as a pandas DataFrame object.
The DataFrame should satisfy the following requirements:

Expected column names (order does not matter):

 ['Average temperature (°F)', 'Average humidity (%)',
 'Average dewpoint (°F)', 'Average barometer (in)',
 'Average windspeed (mph)', 'Average gustspeed (mph)',
 'Average direction (°deg)', 'Rainfall for month (in)',
 'Rainfall for year (in)', 'Maximum rain per minute',
 'Maximum temperature (°F)', 'Minimum temperature (°F)',
 'Maximum humidity (%)', 'Minimum humidity (%)', 'Maximum pressure',
 'Minimum pressure', 'Maximum windspeed (mph)',
 'Maximum gust speed (mph)', 'Maximum heat index (°F)']

Each record in the DataFrame corresponds to the weather details of a given day.
The index column is in date-time format (yyyy-mm-dd).
I need to perform the necessary data cleaning and type-cast each attribute to the relevant data type.

After scraping, I need to save the DataFrame as a pickle file named "dataframe.pk".
Below is the code where I initially just tried to read one page with BeautifulSoup, but there is a separate page per month, and I don't know how to loop over the URLs from January 2009 to October 2018 and feed each page into the soup. Can anyone help?

import bs4
from bs4 import BeautifulSoup
import csv
import requests
import time
import pandas as pd
import urllib
import re
import pickle
import numpy as np

url = "http://www.estesparkweather.net/archive_reports.php?date=200901"
page = requests.get(url)
soup = BeautifulSoup(page.content, "html.parser")
type(soup)  # <class 'bs4.BeautifulSoup'>

# Get the title
title = soup.title
print(title)

# Print out the text
text = soup.get_text()
print(soup.text)

# Print the first 10 rows for sanity check
rows = soup.find_all('tr')
print(rows[:10])

vuktfyat #1

To read the data for the time range 2009-01-01 to 2018-10-28, you first have to understand the URL pattern:

http://www.estesparkweather.net/archive_reports.php?date=YYYYMM

Example:

http://www.estesparkweather.net/archive_reports.php?date=201008

So you need a nested loop that reads the data for each year/month combination.
Something like:

URL_TEMPLATE = 'http://www.estesparkweather.net/archive_reports.php?date={}{:02d}'
for year in range(2009, 2019):     # 2009 through 2018 inclusive
    for month in range(1, 13):     # 1 through 12; {:02d} zero-pads single-digit months
        url = URL_TEMPLATE.format(year, month)
        # TODO implement the actual scraping of a single page
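If it helps, here is a minimal sketch of what that TODO could look like, reusing the requests/BeautifulSoup pattern from the question; extracting the per-day values from the returned tables is covered in the other answers:

import requests
from bs4 import BeautifulSoup

def scrape_month(url):
    # Fetch one monthly archive page and return its <table> elements
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    return soup.find_all('table')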

3okqufwl #2

I just tried writing it from scratch based on your original problem statement, and it worked fine for me:

import re
import requests
import pandas as pd
from bs4 import BeautifulSoup
from datetime import datetime
from tqdm import tqdm

# One YYYYMM string per month from January 2009 through October 2018
range_date = pd.date_range(start='1/1/2009', end='11/01/2018', freq='M')
dates = [str(i)[:4] + str(i)[5:7] for i in range_date]

lst = []
index = []

for j in tqdm(range(len(dates))):
    url = "http://www.estesparkweather.net/archive_reports.php?date=" + dates[j]
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    table = soup.find_all('table')

    data_parse = [row.text.splitlines() for row in table]
    data_parse = data_parse[:-9]  # drop the trailing summary tables

    for k in range(len(data_parse)):
        data_parse[k] = data_parse[k][2:len(data_parse[k]):3]

    for l in range(len(data_parse)):
        # Pull the numeric part of each attribute, joining integer and
        # fractional digits with a dot
        str_l = ['.'.join(re.findall(r"\d+", str(data_parse[l][k].split()[:5])))
                 for k in range(len(data_parse[l]))]
        if str_l:  # skip empty tables
            lst.append(str_l)
            index.append(dates[j] + str_l[0])

# Keep only well-formed YYYYMMDD indices and rows with all 19 attributes
d1_index = [index[i] for i in range(len(index)) if len(index[i]) > 6]
data = [lst[i][1:] for i in range(len(lst)) if len(lst[i][1:]) == 19]

d2_index = [datetime.strptime(str(d1_index[i]), '%Y%m%d').strftime('%Y-%m-%d')
            for i in range(len(d1_index))]

desired_df = pd.DataFrame(data, index = d2_index)

This should be the DataFrame you need, and you can perform the remaining required operations on it.

You will need to import the required modules. This extracts data from 2009-01-01 through 2018-10-31, so you may need to drop the last three records to keep only the data up to 2018-10-28.
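As a sketch of that trimming step, assuming the yyyy-mm-dd string index built above, you could convert the index to datetime and slice by label instead of dropping rows by position:

# Assumes desired_df from above; keeps rows through 2018-10-28
desired_df.index = pd.to_datetime(desired_df.index)
desired_df = desired_df.sort_index().loc[:'2018-10-28']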


mwg9r5ms #3

Here is the one that worked for me:

import re
import pickle
import requests
import pandas as pd
from bs4 import BeautifulSoup
from datetime import datetime

# One YYYYMM string per month from January 2009 through October 2018
Dates_r = pd.date_range(start='01/01/2009', end='11/01/2018', freq='M')
dates = [str(i)[:4] + str(i)[5:7] for i in Dates_r]

df_list = []
index = []
for k in range(len(dates)):
    url = "http://www.estesparkweather.net/archive_reports.php?date=" + dates[k]
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    table = soup.find_all('table')
    raw_data = [row.text.splitlines() for row in table]
    raw_data = raw_data[:-9]  # drop the trailing summary tables
    for i in range(len(raw_data)):
        raw_data[i] = raw_data[i][2:len(raw_data[i]):3]
    for i in range(len(raw_data)):
        # Pull the numeric part of each attribute, joining integer and
        # fractional digits with a dot
        c = ['.'.join(re.findall(r"\d+", str(raw_data[i][j].split()[:5])))
             for j in range(len(raw_data[i]))]
        if len(c):
            df_list.append(c)
            index.append(dates[k] + c[0])

# Keep only well-formed YYYYMMDD indices and rows with all 19 attributes
f_index = [index[i] for i in range(len(index)) if len(index[i]) > 6]
data = [df_list[i][1:] for i in range(len(df_list)) if len(df_list[i][1:]) == 19]

final_index = [datetime.strptime(str(f_index[i]), '%Y%m%d').strftime('%Y-%m-%d')
               for i in range(len(f_index))]

columns = ['Average temperature (°F)', 'Average humidity (%)',
           'Average dewpoint (°F)', 'Average barometer (in)',
           'Average windspeed (mph)', 'Average gustspeed (mph)',
           'Average direction (°deg)', 'Rainfall for month (in)',
           'Rainfall for year (in)', 'Maximum rain per minute',
           'Maximum temperature (°F)', 'Minimum temperature (°F)',
           'Maximum humidity (%)', 'Minimum humidity (%)', 'Maximum pressure',
           'Minimum pressure', 'Maximum windspeed (mph)',
           'Maximum gust speed (mph)', 'Maximum heat index (°F)']

# Drop the last three records (Oct 29-31, 2018) to stop at 2018-10-28
final_index2 = final_index[:-3]
data2 = data[:-3]

desired_df = pd.DataFrame(data2, index=final_index2, columns=columns)

df = desired_df.apply(pd.to_numeric)  # type-cast every attribute to numeric
df.index = pd.to_datetime(df.index)   # datetime index (yyyy-mm-dd)

with open("dataframe.pk", "wb") as file:
    pickle.dump(df, file)
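To sanity-check the result, you can load the pickle back and inspect it (this just re-reads the "dataframe.pk" file written above):

with open("dataframe.pk", "rb") as file:
    check = pickle.load(file)
print(check.shape)                           # expect 19 columns
print(check.index.min(), check.index.max())  # expect 2009-01-01 .. 2018-10-28
print(check.dtypes.head())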
