How to Scrape the Baidu Index with a Python Crawler
Today we'll talk about how to scrape the Baidu Index with a Python crawler. Many people may not be familiar with this, so the walkthrough below summarizes the whole approach; hopefully you'll get something useful out of it.
The method is as follows:
import requests
import sys
import time

# Thumbnail API for a keyword (not used in this example).
word_url = 'http://index.baidu.com/api/SearchApi/thumbnail?area=0&word={}'

# Paste the Cookie string from a logged-in index.baidu.com browser session.
COOKIES = ''

headers = {
    'Accept': 'application/json, text/plain, */*',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Cache-Control': 'no-cache',
    'Cookie': COOKIES,
    'DNT': '1',
    'Host': 'index.baidu.com',
    'Pragma': 'no-cache',
    'Proxy-Connection': 'keep-alive',
    'Referer': 'http://index.baidu.com/v2/main/index.html',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.90 Safari/537.36',
    'X-Requested-With': 'XMLHttpRequest',
}

def decrypt(t, e):
    """Decode the encrypted index string e with the substitution key t.

    The first half of t is the cipher alphabet and the second half is the
    plain alphabet; each character of e maps to the character at the same
    position in the other half.
    """
    n = list(t)
    ln = len(n) // 2
    mapping = dict(zip(n[:ln], n[ln:]))
    return ''.join(mapping.get(ch, '') for ch in e)

def get_ptbk(uniqid):
    """Fetch the decryption key (ptbk) that matches a given uniqid."""
    url = 'http://index.baidu.com/Interface/ptbk?uniqid={}'
    resp = requests.get(url.format(uniqid), headers=headers)
    if resp.status_code != 200:
        print('Failed to fetch ptbk')
        sys.exit(1)
    return resp.json().get('data')

def get_index_data(keyword, start='2011-01-03', end='2019-08-05'):
    # The API expects a JSON-style word list, so the single quotes from
    # the Python repr are replaced with double quotes.
    keyword = str(keyword).replace("'", '"')
    url = (f'http://index.baidu.com/api/SearchApi/index'
           f'?area=0&word={keyword}&startDate={start}&endDate={end}')
    resp = requests.get(url, headers=headers)
    if resp.status_code != 200:
        print('Failed to fetch index data')
        sys.exit(1)
    content = resp.json()
    data = content.get('data')
    user_indexes = data.get('userIndexes')[0]
    uniqid = data.get('uniqid')
    ptbk = get_ptbk(uniqid)
    while not ptbk:  # the key endpoint occasionally returns empty; retry
        time.sleep(1)
        ptbk = get_ptbk(uniqid)
    all_data = user_indexes.get('all').get('data')
    result = decrypt(ptbk, all_data).split(',')
    print(result)
    return result

if __name__ == '__main__':
    words = [[{"name": "酷安", "wordType": 1}]]
    get_index_data(words)
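To see what decrypt() actually does, here is a toy walkthrough. The key and ciphertext below are made-up illustrative values, not a real ptbk from Baidu; a real key is longer, pairs scrambled characters with the digits, comma, and dot that make up the index series, and is only valid for the uniqid it was issued with.

    # Toy demonstration of the substitution cipher used by decrypt().
    # These values are fabricated for illustration only.
    toy_ptbk = 'xyz%12,3'   # first half 'xyz%' = cipher chars, second half '12,3' = plain chars
    toy_cipher = 'x%zy'     # encrypted payload: x->1, %->3, z->',', y->2
    print(decrypt(toy_ptbk, toy_cipher))  # prints "13,2"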
Output: a list of decrypted index values as strings, one entry per point in the requested date range.
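The printed list is just bare values; in practice you usually want each value tied to its date. Here is a minimal sketch, assuming the API returns exactly one value per day across the requested range (pair_with_dates is a hypothetical helper, not part of the Baidu API):

    from datetime import date, timedelta

    # Hypothetical helper: pairs each decrypted value with a calendar day,
    # assuming one value per day starting at the requested start date.
    def pair_with_dates(values, start='2011-01-03'):
        day = date.fromisoformat(start)
        for v in values:
            yield day.isoformat(), int(v) if v else 0
            day += timedelta(days=1)

    # Usage:
    # for day, value in pair_with_dates(get_index_data(words)):
    #     print(day, value)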
After reading the above, do you have a better understanding of how to scrape the Baidu Index with a Python crawler? If you'd like to learn more, follow the 创新互联 industry news channel. Thanks for your support.