This question already has answers here:
Eliminating % symbol when using a selenium scraper (python) (2 answers)
Closed 2 years ago.
Below is a selenium web scraper that loops through the different tabs on this site's page, clicks the "Export Data" button, downloads the data, adds a "yearid" column, and then loads the data into a MySQL table.
import sys
import pandas as pd
import os
import time
from datetime import datetime
from selenium import webdriver
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
from sqlalchemy import create_engine
button_text_to_url_type = {
    'dashboard': 8,
    'standard': 0,
    'advanced': 1,
    'batted_ball': 2,
    'win_probability': 3,
    'pitch_type': 4,
    'pitch_values': 7,
    'plate_discipline': 5,
    'value': 6
}
download_dir = os.getcwd()
profile = FirefoxProfile("C:/Users/PATHTOFIREFOX")
profile.set_preference("browser.helperApps.neverAsk.saveToDisk", 'text/csv')
profile.set_preference("browser.download.manager.showWhenStarting", False)
profile.set_preference("browser.download.dir", download_dir)
profile.set_preference("browser.download.folderList", 2)
driver = webdriver.Firefox(firefox_profile=profile)
today = datetime.today()
for button_text, url_type in button_text_to_url_type.items():
    default_filepath = os.path.join(download_dir, 'Fangraphs Leaderboard.csv')
    desired_filepath = os.path.join(download_dir,
                                    '{}_{}_{}_Leaderboard_{}.csv'.format(today.year, today.month, today.day,
                                                                         button_text))
    driver.get(
        "https://www.fangraphs.com/leaders.aspx?pos=all&stats=bat&lg=all&qual=0&type={}&season=2018&month=0&season1=2018&ind=0&team=&rost=&age=&filter=&players=".format(
            url_type))
    driver.find_element_by_link_text('Export Data').click()
    if os.path.isfile(default_filepath):
        os.rename(default_filepath, desired_filepath)
        print('Renamed file {} to {}'.format(default_filepath, desired_filepath))
    else:
        sys.exit('Error, unable to locate file at {}'.format(default_filepath))
    df = pd.read_csv(desired_filepath)
    df.str.replace('%', '')
    df["yearid"] = datetime.today().year
    df.to_csv(desired_filepath)
    engine = create_engine("mysql+pymysql://{user}:{pw}@localhost/{db}"
                           .format(user="walker",
                                   pw="password",
                                   db="data"))
    df.to_sql(con=engine, name='fg_test_hitting_{}'.format(button_text), if_exists='replace')
    time.sleep(10)
driver.quit()
The scraper works fine. However, some of the downloaded columns have a % symbol appended to the integer (e.g. 25%), which throws off the data types in MySQL. I have tried df.str.replace('%', '') and df.replace('%', ''), but neither worked. When scraping the data into a DataFrame, is there a way to change the columns that contain the % symbol so they only show the integer? Should I incorporate this into my loop, or add a line of code after the DataFrame has been built? Thanks in advance!
1 Answer
df.replace does not do an in-place replacement. It returns the modified DataFrame, so the call needs to be rewritten so that the returned DataFrame is assigned back to df instead of being discarded.
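A minimal sketch of that fix (the column name pct_col and the sample data are hypothetical; in the question's loop the same assignment would be applied to the DataFrame read from desired_filepath):

import pandas as pd

# hypothetical example data with a trailing % sign
df = pd.DataFrame({'pct_col': ['25%', '30%']})

# assign the result back; regex=True strips '%' as a substring,
# whereas the default only matches cells that are exactly '%'
df = df.replace('%', '', regex=True)

# optional: convert the cleaned strings to numbers so MySQL infers a numeric type
df['pct_col'] = pd.to_numeric(df['pct_col'])
print(df)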