Python: why can't I call this async function with await?

r3i60tvu · asked on 2023-03-06 · Python

This is a web-scraping problem I've run into and don't know how to solve.
I want to call the async function scrape_season, but I can't call it from my main file; it gives me the error:
error: "await" allowed only within async function

import os
from bs4 import BeautifulSoup
from playwright.async_api import async_playwright, TimeoutError as PlaywrightTimeout
import time 

SEASONS = list(range(2016,2023))
DATA_DIR = 'data'
STANDINGS_DIR = os.path.join(DATA_DIR, 'standings')
SCORES_DIR = os.path.join(DATA_DIR, 'scores')

async def get_html(url, selector, sleep=5, retries=3):
    html = None 
    for i in range(1, retries+1):
        time.sleep(sleep * i)

        try: 
            async with async_playwright() as p: 
                browser = await p.firefox.launch()
                page = await browser.new_page()
                await page.goto(url)
                print(await page.title())
                html = await page.inner_html(selector)
    
        except PlaywrightTimeout:
            print(f'Timeout error on {url}')
            continue

        else: 
            break
    return html

async def scrape_season(season):
    url = f'https://www.basketball-reference.com/leagues/NBA_{season}_games.html'
    html = await get_html(url, '#content .filter')

    soup = BeautifulSoup(html, 'html.parser')
    links = soup.find_all('a')
    href = [l['href'] for l in links]
    standings_pages = [f"https://basketball-reference.com{l}" for l in href]

    for url in standings_pages:
        save_path = os.path.join(STANDINGS_DIR, url.split("/")[-1])
        if os.path.exists(save_path):
            continue

        html = await get_html(url, '#all_schedule')
        with open(save_path, 'w+') as f:
            f.write(html)

for season in SEASONS:
    await(scrape_season(season))

pkmbmrz7 1#

The problem with this code is that it tries to await at the top level of the module, which isn't allowed: await is only valid inside an async function.
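A minimal sketch of the rule (the hello coroutine here is hypothetical, just to illustrate):

import asyncio

async def hello():
    return "hi"

# await hello()  # SyntaxError: 'await' outside function (top-level code)

async def main():
    print(await hello())  # fine: await appears inside an async def

asyncio.run(main())  # the entry point that starts the event loop and drives main()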
Most of async/await is ordinary library code that you could write yourself (asyncio is just one event-loop implementation); the interpreter itself only provides the coroutine syntax.
Still, to answer your question: replacing the for loop at the end of your code with the snippet below should work. Do read up on how asyncio works to understand why your code fails and why the code below (I hope; I haven't tested it) works: https://docs.python.org/3/library/asyncio.html

import asyncio

async def main():
    # build one coroutine per season and run them concurrently;
    # gather() takes awaitables as separate positional arguments, hence the * unpacking
    seasons = [scrape_season(season) for season in SEASONS]
    await asyncio.gather(*seasons)

asyncio.run(main())
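One caveat about running the seasons concurrently: get_html calls the blocking time.sleep, which freezes the whole event loop, so the gathered coroutines would mostly run one at a time anyway (using await asyncio.sleep(sleep * i) instead would restore real concurrency). If the site also throttles rapid requests, a plainly sequential entry point is a reasonable alternative; a sketch, untested:

import asyncio

async def main():
    # scrape the seasons one at a time, staying gentle on the server
    for season in SEASONS:
        await scrape_season(season)

asyncio.run(main())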
