Python hands-on project: scraping handsome-guy pictures from a website
Preface
I don't really have much to say here. Just read along and treat it as a practice case study.
First, import the libraries
from bs4 import BeautifulSoup
from urllib.request import urlretrieve
import requests
import os
import time
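Note that requests, beautifulsoup4 and lxml are third-party packages (the script passes 'lxml' to BeautifulSoup as the parser), so they may need to be installed first, for example with pip, before the script will run.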
Main code (part 1)
if __name__ == '__main__':
    list_url = []
    # Crawl the first two list pages and collect one entry per image
    for num in range(1, 3):
        if num == 1:
            url = 'http://www.shuaia.net/index.html'
        else:
            url = 'http://www.shuaia.net/index_%d.html' % num
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
        }
        req = requests.get(url=url, headers=headers)
        req.encoding = 'utf-8'
        html = req.text
        bf = BeautifulSoup(html, 'lxml')
        # Every element with class 'item-img' links to one detail page
        targets_url = bf.find_all(class_='item-img')
        for each in targets_url:
            # Store "name=detail-page URL" so both parts can be recovered later
            list_url.append(each.img.get('alt') + '=' + each.get('href'))
    print('Link collection complete')
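For clarity, each entry appended to list_url is simply the image's alt text and the detail-page link joined by '='. Below is a minimal sketch of what one entry looks like and how part two takes it apart; the name and URL are made up for illustration:

# Hypothetical entry (values are invented, only the format matters):
entry = 'some-guy=http://www.shuaia.net/example.html'
name, page_url = entry.split('=', 1)   # maxsplit=1 would keep any later '=' inside the URL
print(name)       # some-guy
print(page_url)   # http://www.shuaia.net/example.html

The original code uses a plain split('='), which works as long as neither the name nor the link contains another '='.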
Main code (part 2)
    # Continues inside the if __name__ == '__main__': block above
    for each_img in list_url:
        img_info = each_img.split('=')
        target_url = img_info[1]            # detail-page URL
        filename = img_info[0] + '.jpg'     # file name taken from the alt text
        print('Downloading: ' + filename)
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
        }
        img_req = requests.get(url=target_url, headers=headers)
        img_req.encoding = 'utf-8'
        img_html = img_req.text
        img_bf_1 = BeautifulSoup(img_html, 'lxml')
        # The actual image sits inside the 'wr-single-content-list' div on the detail page
        img_url = img_bf_1.find_all('div', class_='wr-single-content-list')
        img_bf_2 = BeautifulSoup(str(img_url), 'lxml')
        img_url = 'http://www.shuaia.net' + img_bf_2.div.img.get('src')
        if 'images' not in os.listdir():    # create the output folder on first use
            os.makedirs('images')
        urlretrieve(url=img_url, filename='images/' + filename)
        time.sleep(1)                       # pause between downloads to be polite to the server
    print('Download complete!')
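urlretrieve comes from the legacy urllib interface; if you would rather keep everything on requests, a minimal sketch of an equivalent download step could look like this (the save_image helper name is my own, not part of the original script):

import os
import requests

def save_image(img_url, filename, headers):
    """Download one image with requests instead of urlretrieve (sketch)."""
    os.makedirs('images', exist_ok=True)              # only creates the folder if it is missing
    resp = requests.get(img_url, headers=headers, timeout=10)
    resp.raise_for_status()                           # stop early on HTTP errors
    with open(os.path.join('images', filename), 'wb') as f:
        f.write(resp.content)                         # write raw bytes, not decoded text

The behaviour is the same as the urlretrieve call above, but you get explicit error handling and a timeout.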
What do you think? Could you implement it yourself? Everyone is welcome to discuss and learn together.