Whose little eyes still aren't looking up here at the teacher?
Your favorite teacher Xiao Wai has recently made another big, big, big discovery. When it comes to crawlers, what's mostly covered online these days is requests and Scrapy, yet nobody ever brings up this gem: aiohttp.
So what exactly is it?
The official site introduces it as: Async HTTP client/server for asyncio and Python.
Its main features go like this:
◆ 1. Supports both the client side and HTTP servers;
◆ 2. Supports Server WebSockets and Client WebSockets without callback hell (see the sketch right after this list);
◆ 3. The web server comes with middleware, signals, and pluggable routing.
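That second feature is easy to show: a minimal, hedged sketch of an aiohttp WebSocket client written with plain async/await instead of nested callbacks (the echo server URL is just an assumed public test endpoint, not from the original post):

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        # ws_connect upgrades the HTTP connection to a WebSocket; no callbacks needed
        async with session.ws_connect('wss://echo.websocket.events') as ws:
            await ws.send_str('hello')
            async for msg in ws:  # iterate incoming messages as they arrive
                if msg.type == aiohttp.WSMsgType.TEXT:
                    print(msg.data)
                    break

asyncio.get_event_loop().run_until_complete(main())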
話不多說(shuō),看老師操作?。。?/strong>
Client example:
import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, 'http:///headers')
        print(html)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
{'headers':{'Accept':'*/*','Accept-Encoding':'gzip, deflate','Connection':'close','Host':'','User-Agent':'Python/3.6 aiohttp/3.2.1'}}
Server example:

from aiohttp import web

async def handle(request):
    name = request.match_info.get('name', 'Anonymous')
    text = 'Hello, ' + name
    return web.Response(text=text)

app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/{name}', handle)])

web.run_app(app)
======== Running on http://0.0.0.0:8080 ========
(Press CTRL+C to quit)
Flip through the Client Quickstart section of the official docs and it will feel very familiar; much of the usage mirrors requests.
async with aiohttp.ClientSession() as session:
    async with session.get('http:///get') as resp:
        print(resp.status)
        print(await resp.text())
First off, the official docs recommend using ClientSession to manage sessions. Isn't that just the session from requests? The usage is similar too: call session.get() to send a GET request, and the returned resp carries the data we need, with a requests-like interface: text() for the body as text and .json() to parse the body as JSON (both are coroutines in aiohttp, so you await them); headers and the rest work much the same way. See the Response object section of the official docs for more.
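For instance, here is a minimal sketch of those accessors (httpbin.org is an assumed test endpoint, not one from the original post):

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('https://httpbin.org/get') as resp:
            print(resp.status)                    # status code as an int, e.g. 200
            print(resp.headers['Content-Type'])   # response headers, dict-like access
            data = await resp.json()              # parse the body as JSON; note the await
            print(data)

asyncio.get_event_loop().run_until_complete(main())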
Since we already have requests, why bring up aiohttp at all? Here's the point: aiohttp is asynchronous. Python 3.5 added the async/await keywords, which make asynchronous code far more direct and readable than callback-style code. aiohttp is a library providing asynchronous web services, and asyncio lets a single thread run IO operations concurrently.
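A toy example of the syntax itself, nothing aiohttp-specific:

import asyncio

async def greet():
    print('waiting...')
    await asyncio.sleep(1)   # yields to the event loop instead of blocking the thread
    print('hello')

loop = asyncio.get_event_loop()
loop.run_until_complete(greet())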
A crawler written with requests is synchronous: it waits for the page to finish downloading before running the parsing and storage steps that follow, so a slow download blocks everything. Speeding the crawler up with multiprocessing or threading is one way around this.
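As a rough sketch of that threading route (the URLs are placeholders), threads hide the network wait even though each individual request still blocks its own thread:

import requests
from concurrent.futures import ThreadPoolExecutor

urls = ['https://example.com/a', 'https://example.com/b']  # placeholder URLs

def download(url):
    # each call blocks its own worker thread until the response arrives
    return url, requests.get(url, timeout=30).text

with ThreadPoolExecutor(max_workers=5) as pool:
    for url, html in pool.map(download, urls):
        print(url, len(html))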
The aiohttp we're using now is asynchronous. Simply put, there's no waiting around: you go ahead and download the page, and I don't sit there idle until you finish before moving on, I have other work to do. This greatly improves download throughput.
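A minimal sketch of that idea (placeholder URLs again): all requests are in flight at once, and while one response is on the wire the event loop switches to another coroutine.

import aiohttp
import asyncio

urls = ['https://example.com/a', 'https://example.com/b']  # placeholder URLs

async def fetch(session, url):
    async with session.get(url) as resp:
        return url, await resp.text()

async def main():
    async with aiohttp.ClientSession() as session:
        # schedule every fetch concurrently and collect the results
        results = await asyncio.gather(*(fetch(session, url) for url in urls))
        for url, html in results:
            print(url, len(html))

asyncio.get_event_loop().run_until_complete(main())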
By the way, Scrapy is also asynchronous, built on the event-driven Twisted framework. Never write blocking code under any circumstances. Blocking code includes:
◆ 1. Accessing files, databases, or the web;
◆ 2. Spawning new processes and handling their output, e.g. running shell commands;
◆ 3. Code that performs system-level operations, e.g. waiting on a system queue.
Note that you need to use asynchronous operations throughout your code. If you slip a synchronous operation in, the crawler won't raise an error, but speed can suffer, because that one call blocks the entire event loop.
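When a blocking call is unavoidable, one common escape hatch is to push it onto a thread pool with run_in_executor. A hedged sketch, with time.sleep standing in for any blocking work:

import asyncio
import time

def blocking_work():
    time.sleep(1)        # stand-in for any blocking call (file, DB, shell command...)
    return 'done'

async def main():
    loop = asyncio.get_event_loop()
    # the blocking function runs in the default thread pool, so the loop stays free
    result = await loop.run_in_executor(None, blocking_work)
    print(result)

asyncio.get_event_loop().run_until_complete(main())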
Here is a fuller example: an async crawler that downloads several pages concurrently and extracts each page's title.

import asyncio
import aiohttp
from bs4 import BeautifulSoup
import logging

class AsyncGrab(object):

    def __init__(self, url_list, max_threads):
        self.urls = url_list
        self.results = {}
        self.max_threads = max_threads

    def __parse_results(self, url, html):
        # pull the <title> text out of the downloaded HTML
        try:
            soup = BeautifulSoup(html, 'html.parser')
            title = soup.find('title').get_text()
        except Exception as e:
            raise e
        if title:
            self.results[url] = title

    async def get_body(self, url):
        async with aiohttp.ClientSession() as session:
            async with session.get(url, timeout=30) as response:
                assert response.status == 200
                html = await response.read()
                return response.url, html

    async def get_results(self, url):
        url, html = await self.get_body(url)
        self.__parse_results(url, html)
        return 'Completed'

    async def handle_tasks(self, task_id, work_queue):
        # each worker coroutine keeps pulling URLs until the queue is drained
        while not work_queue.empty():
            current_url = await work_queue.get()
            try:
                task_status = await self.get_results(current_url)
            except Exception:
                logging.exception('Error for {}'.format(current_url))

    def eventloop(self):
        q = asyncio.Queue()
        for url in self.urls:
            q.put_nowait(url)
        loop = asyncio.get_event_loop()
        # max_threads really means max concurrent worker coroutines, not OS threads
        tasks = [self.handle_tasks(task_id, q) for task_id in range(self.max_threads)]
        loop.run_until_complete(asyncio.wait(tasks))
        loop.close()

if __name__ == '__main__':
    async_example = AsyncGrab(['http://',
                               'https://www.',
                               'https://github.com/',
                               'https://zhangslob./',
                               'https://www.zhihu.com/'], 5)
    async_example.eventloop()
    print(async_example.results)
A crawler isn't just the downloading part; there's database work too, so here are two async libraries: aioredis and motor.
import asyncio
import aioredis

loop = asyncio.get_event_loop()

async def go():
    conn = await aioredis.create_connection(
        'redis://localhost', loop=loop)
    await conn.execute('set', 'my-key', 'value')
    val = await conn.execute('get', 'my-key')
    print(val)
    conn.close()
    await conn.wait_closed()

loop.run_until_complete(go())
# will print b'value' (replies come back as bytes by default)
Docs: aioredis
import asyncio
import pprint
import motor.motor_asyncio

client = motor.motor_asyncio.AsyncIOMotorClient('mongodb://localhost:27017')
db = client['test_database']
collection = db['test_collection']

async def do_insert():
    document = {'key': 'value'}
    result = await db.test_collection.insert_one(document)
    print('result %s' % repr(result.inserted_id))

async def do_find_one():
    document = await db.test_collection.find_one({'i': {'$lt': 1}})
    pprint.pprint(document)

loop = asyncio.get_event_loop()
loop.run_until_complete(do_insert())
loop.run_until_complete(do_find_one())
Docs: motor
Hmm... this post only covered using aiohttp as a Client. Students who are interested can go dig into its use as a Server, which is every bit as powerful.