
How fun is web crawling? What you see is what you crawl! Grab pages, images, articles — crawl anything!

5b51 · 2022/1/14 8:24:27 · python · 18,869 characters · 654 reads · Source: www.jb51.cc/python


Overview


I. First, a look at how simply Python can crawl a web page

1. Preparation

The project uses the BeautifulSoup4 and chardet modules, which are third-party packages; if you don't have them, install them yourself with pip. I did the installation through PyCharm, and below is a quick walkthrough of installing chardet and BeautifulSoup4 in PyCharm.


Since the crawled HTML document is fairly long, here is just a short excerpt of it:
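The fetch itself (the full code appears at the end of the article) hinges on one detail: detect the page's encoding before decoding the raw bytes. Below is a minimal stand-in sketch using only the standard library; the article itself uses the third-party chardet module for this step, and the GBK fallback here is just an illustrative assumption for Chinese sites.

```python
def decode_html(raw, declared=None):
    """Decode raw HTTP body bytes: try the declared charset first,
    then UTF-8, then GBK (common on Chinese sites). A plain stdlib
    stand-in for the chardet.detect() call used in the article."""
    for enc in filter(None, (declared, "utf-8", "gbk")):
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    # last resort: never crash the crawl over one bad byte
    return raw.decode("utf-8", errors="replace")

# GBK bytes are recovered correctly even without a declared charset:
print(decode_html("简书".encode("gbk")))
```

In the article's own code the same effect is achieved with chardet.detect(html)["encoding"].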

 <meta charset="utf-8">
 <meta http-equiv="X-UA-Compatible" content="IE=Edge">
 <meta name="viewport" content="width=device-width,initial-scale=1.0,user-scalable=no">
 <meta name="applicable-device" content="pc,mobile">
 <meta name="MobileOptimized" content="width"/>
 <meta name="HandheldFriendly" content="true"/>
 <meta name="mobile-agent" content="format=html5;url=http://localhost/">
 <meta name="description" content="简书是一个优质的创作社区,在这里,你可以任性地创作,一篇短文、一张照片、一首诗、一幅画……我们相信,每个人都是生活中的艺术家,有着无穷的创造力。">
 <meta name="keywords" content="简书,简书官网,图文编辑软件,简书下载,图文创作,创作软件,原创社区,小说,散文,写作,阅读">
..........(a large remainder omitted)

And that's a simple introduction to crawling with Python 3 — easy, isn't it? I'd suggest typing it out a few times yourself.


Can't wait to see what nice pictures were crawled.

Just like that, we crawled pictures of 24 girls. Simple, isn't it?
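The image-grabbing code itself isn't reproduced in this excerpt, but the idea is the same as above: find the `<img>` tags and save each `src`. A minimal sketch, assuming the page exposes direct image URLs in `src` attributes (a lazy-loading site may put the real URL in a different attribute):

```python
import os
import re
from urllib import request

def extract_img_urls(html):
    """Pull every <img ... src="..."> URL out of an HTML document."""
    return re.findall(r'<img[^>]+src="([^"]+)"', html)

def download_images(html, dest_dir):
    """Save every image referenced in the page into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    for i, src in enumerate(extract_img_urls(html)):
        request.urlretrieve(src, os.path.join(dest_dir, "%d.jpg" % i))

# extraction works on any HTML string; downloading needs the network
sample = '<div><img class="cover" src="https://example.com/1.jpg"></div>'
print(extract_img_urls(sample))
```

For real pages, BeautifulSoup's soup.select("img") would do the extraction more robustly than a regex; the regex keeps this sketch dependency-free.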

IV. Crawling a news site's news list with Python 3

From here things get slightly more complex, so let's go through it step by step.


Analyzing the page, the information we want to grab lives in the `<a>` and `<img>` tags inside a div, so the question is how to extract it.

This is where the BeautifulSoup4 library we imported comes in. The key code:

# use html.parser as the parser
soup = BeautifulSoup(html,'html.parser')
# get every a node with class=hot-article-img
allList = soup.select('.hot-article-img')

The allList obtained by the code above is the news list we want; the crawled result looks like this:
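The selector can be checked against a tiny inline document. The markup below is a simplified stand-in for huxiu.com's real structure (the exact classes around each article are an assumption here), and the traversal mirrors the loop used later in the article:

```python
from bs4 import BeautifulSoup

# stand-in for one entry of the real hot-article list: a container
# with class hot-article-img wrapping the article link and cover image
sample = """
<div class="hot-article-img">
  <a href="/article/214982.html" title="Sample title">
    <img src="https://img.huxiucdn.com/article/cover/sample.jpg">
  </a>
</div>
"""
soup = BeautifulSoup(sample, "html.parser")
allList = soup.select(".hot-article-img")      # every node with that class
print(len(allList))                            # number of matched containers
print(allList[0].select("a")[0]["href"])       # the article link inside
```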

[
 ...a list of 11 a-node elements, one per article, each wrapping a cover image such as:
 https://img.huxiucdn.com/article/cover/201709/17/094856378420.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
 ......
]

The data has been captured, but it's messy, and a lot of it isn't what we want. Next we iterate over the list to distill the useful information:

# iterate over the list and pull out the useful fields
for news in allList:
    aaa = news.select('a')
    # only keep results with at least one <a>
    if len(aaa) > 0:
        # article link
        try:  # an exception here means the attribute is missing
            href = url + aaa[0]['href']
        except Exception:
            href = ''
        # article image url
        try:
            imgurl = aaa[0].select('img')[0]['src']
        except Exception:
            imgurl = ""
        # news title
        try:
            title = aaa[0]['title']
        except Exception:
            title = "No title"
        print("Title", title, "\nurl:", href, "\nImage URL:", imgurl)
        print("==============================================================================================")

Exception handling is added here mainly because some news items may lack a title, a url, or an image; without it, a single missing attribute could abort the whole crawl.
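As a side note, the same missing-attribute safety can be had without try/except: BeautifulSoup's Tag.get() returns a default instead of raising when an attribute is absent. A small sketch (the sample tag is made up):

```python
from bs4 import BeautifulSoup

# a made-up tag with href and img but no title attribute
node = BeautifulSoup('<a href="/article/1.html"><img src="x.jpg"></a>',
                     "html.parser").a
href = node.get("href", "")              # "" instead of a KeyError when absent
title = node.get("title", "No title")    # title is missing here -> default
imgs = node.select("img")
imgurl = imgs[0].get("src", "") if imgs else ""
print(title, href, imgurl)
```

This reads more directly than three try/except blocks, though the article's version has the advantage of catching any unexpected error, not just missing attributes.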

The distilled, useful information:

Title No title
url: https://www.huxiu.com/article/211390.html
Image URL: https://img.huxiucdn.com/article/cover/201708/22/173535862821.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title TFBOYS成员各自飞,商业价值天花板已现?
url: https://www.huxiu.com/article/214982.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/094856378420.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 买手店江湖
url: https://www.huxiu.com/article/213703.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/122655034450.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title iPhone X正式告诉我们,手机和相机开始分道扬镳
url: https://www.huxiu.com/article/214679.html
Image URL: https://img.huxiucdn.com/article/cover/201709/14/182151300292.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 信用已被透支殆尽,乐视汽车或成贾跃亭弃子
url: https://www.huxiu.com/article/214962.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/210518696352.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 别小看“搞笑诺贝尔奖”,要向好奇心致敬
url: https://www.huxiu.com/article/214867.html
Image URL: https://img.huxiucdn.com/article/cover/201709/15/180620783020.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 10 年前改变世界的,可不止有 iPhone | 发车
url: https://www.huxiu.com/article/214954.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/162049096015.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 感谢微博替我做主
url: https://www.huxiu.com/article/214908.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/010410913192.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 苹果确认取消打赏抽成,但还有多少内容让你觉得值得掏腰包?
url: https://www.huxiu.com/article/215001.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/154147105217.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 中国音乐的“全面付费”时代即将到来?
url: https://www.huxiu.com/article/214969.html
Image URL: https://img.huxiucdn.com/article/cover/201709/17/101218317953.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================
Title 百丽退市启示录:“一代鞋王”如何与新生代消费者渐行渐远
url: https://www.huxiu.com/article/214964.html
Image URL: https://img.huxiucdn.com/article/cover/201709/16/213400162818.jpg?imageView2/1/w/280/h/210/|imageMogr2/strip/interlace/1/quality/85/format/jpg
==============================================================================================

That completes crawling the news information from the news site. Here is the full code:

from bs4 import BeautifulSoup
from urllib import request
import chardet

url = "https://www.huxiu.com"
response = request.urlopen(url)
html = response.read()
charset = chardet.detect(html)
html = html.decode(str(charset["encoding"]))  # decode the crawled html with the detected encoding
# use html.parser as the parser
soup = BeautifulSoup(html, 'html.parser')
# get every a node with class=hot-article-img
allList = soup.select('.hot-article-img')
# iterate over the list and pull out the useful fields
for news in allList:
    aaa = news.select('a')
    # only keep results with at least one <a>
    if len(aaa) > 0:
        # article link
        try:  # an exception here means the attribute is missing
            href = url + aaa[0]['href']
        except Exception:
            href = ''
        # article image url
        try:
            imgurl = aaa[0].select('img')[0]['src']
        except Exception:
            imgurl = ""
        # news title
        try:
            title = aaa[0]['title']
        except Exception:
            title = "No title"
        print("Title", title, "\nurl:", href, "\nImage URL:", imgurl)
        print("==============================================================================================")

With the data in hand, we still need to store it in a database. Once it is in our database, we can do downstream analysis and processing, or use the crawled articles to back a news API for an app — but that is a story for later.
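As a sketch of that storage step (the sqlite3 choice and the table layout are illustrative, not from the article), the three fields collected above map directly onto one table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for persistent storage
conn.execute("""CREATE TABLE IF NOT EXISTS news (
    title  TEXT,
    url    TEXT UNIQUE,   -- UNIQUE lets repeated crawls skip duplicates
    imgurl TEXT
)""")

# in the real crawler these rows come from the loop above
rows = [("Sample title",
         "https://www.huxiu.com/article/214982.html",
         "https://img.huxiucdn.com/article/cover/sample.jpg")]
conn.executemany("INSERT OR IGNORE INTO news VALUES (?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM news").fetchone()[0]
print(count)
```

Parameterized queries (the `?` placeholders) matter here: crawled titles can contain quotes, and string-formatting them into SQL would break the insert.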

Summary
