In earlier articles we covered the re module and the lxml module for writing crawlers; in this chapter we look at another option, the bs4 module. Like lxml, Beautiful Soup is an HTML/XML parser whose main job is likewise parsing and extracting HTML/XML data. lxml traverses a document only locally, while Beautiful Soup is built on the HTML DOM: it loads the whole document and parses the complete DOM tree, so its time and memory overhead are much larger and its performance is lower than lxml's. BeautifulSoup makes parsing HTML simple, with a very friendly API; it supports CSS selectors and the HTML parser in the Python standard library, and it also supports lxml's XML parser. Beautiful Soup 3 is no longer maintained, and new projects should use Beautiful Soup 4. Install it with pip: pip install beautifulsoup4. Official documentation: http://beautifulsoup./zh_CN/v4.4.0
| Scraping tool | Speed | Ease of use | Ease of installation |
|---|---|---|---|
| Regular expressions | Fastest | Hard | None (built in) |
| BeautifulSoup | Slow | Easiest | Easy |
| lxml | Fast | Easy | Moderate |
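The parser you pick also changes the output. One easy-to-miss difference, shown in a small sketch below (assuming both the standard-library parser and lxml are installed): given a document fragment, Python's built-in html.parser leaves it as-is, while the lxml parser normalizes it into a full document.

```python
from bs4 import BeautifulSoup

fragment = "<li>item</li>"

# The standard-library parser keeps the fragment untouched
print(BeautifulSoup(fragment, "html.parser"))
# <li>item</li>

# The lxml parser wraps it in <html><body>...</body></html>
print(BeautifulSoup(fragment, "lxml"))
```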
First, we must import the bs4 library.

```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

# Alternatively, create the object from a local HTML file
# soup = BeautifulSoup(open('index.html'), "lxml")

# Pretty-print the contents of the soup object
print(soup.prettify())
```
Output:

```html
<html>
 <body>
  <div>
   <ul>
    <li class="item-0">
     <a href="link1.html">
      first item
     </a>
    </li>
    <li class="item-1">
     <a href="link2.html">
      second item
     </a>
    </li>
    <li class="item-inactive">
     <a href="link3.html">
      <span class="bold">
       third item
      </span>
     </a>
    </li>
    <li class="item-1">
     <a href="link4.html">
      fourth item
     </a>
    </li>
    <li class="item-0">
     <a href="link5.html">
      fifth item
     </a>
    </li>
   </ul>
  </div>
 </body>
</html>
```
Four kinds of objects

Beautiful Soup turns a complex HTML document into a complex tree structure in which every node is a Python object. All of these objects fall into four kinds: Tag, NavigableString, BeautifulSoup, and Comment.
1. Tag

A Tag is, simply put, one of the tags in the HTML. For example:

```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li)        # <li class="item-0"><a href="link1.html">first item</a></li>
print(soup.a)         # <a href="link1.html">first item</a>
print(soup.span)      # <span class="bold">third item</span>
print(soup.p)         # None
print(type(soup.li))  # <class 'bs4.element.Tag'>
```
We can easily get the content of a tag by writing soup followed by the tag name; these objects have type bs4.element.Tag. Note, however, that this looks up only the first matching tag in the whole document. Finding all matching tags is covered later. A Tag has two important attributes: name and attrs.
```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li.attrs)        # {'class': ['item-0']}
print(soup.li["class"])     # ['item-0']
print(soup.li.get('class')) # ['item-0']

print(soup.li)  # <li class="item-0"><a href="link1.html">first item</a></li>
soup.li["class"] = "newClass"  # attributes and content can be modified
print(soup.li)  # <li class="newClass"><a href="link1.html">first item</a></li>

del soup.li['class']  # an attribute can also be deleted
print(soup.li)  # <li><a href="link1.html">first item</a></li>
```
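The example above exercised attrs; the other important attribute, .name, simply returns the tag's name as a string. A quick sketch:

```python
from bs4 import BeautifulSoup

html = '<li class="item-0"><a href="link1.html">first item</a></li>'
soup = BeautifulSoup(html, "lxml")

print(soup.li.name)  # li
print(soup.a.name)   # a
```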
2. NavigableString

Now that we can get a tag, the next question is: how do we get the text inside the tag? Easy — just use .string, for example:

```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li.string)   # first item
print(soup.a.string)    # first item
print(soup.span.string) # third item
# print(soup.p.string)  # AttributeError: 'NoneType' object has no attribute 'string'
print(type(soup.li.string))  # <class 'bs4.element.NavigableString'>

# Note (added): .string is None when a tag has more than one child
print(soup.ul.string)   # None
```

3. BeautifulSoup

The BeautifulSoup object represents the content of the whole document. Most of the time it can be treated as a Tag object — a special Tag. We can look at its type, name, and attributes to get a feel for it:
```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.name)        # [document]
print(soup.attrs)       # {} — the document itself has no attributes
print(type(soup.name))  # <class 'str'>
```

4. Comment

The Comment object is a special kind of NavigableString; its output does not include the comment markers.
```python
from bs4 import BeautifulSoup, Comment

html = """
<div>
<a class="sister" href="http:///elsie" id="link1"><!-- Elsie --></a>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.a)               # <a class="sister" href="http:///elsie" id="link1"><!-- Elsie --></a>
print(soup.a.string)        # Elsie
print(type(soup.a.string))  # <class 'bs4.element.Comment'>

# Added: check the type before treating .string as real text
if isinstance(soup.a.string, Comment):
    print("the 'text' is actually a comment")
```
The content of the <a> tag is actually a comment, but when we output it with .string the comment markers have already been stripped away.

Traversing the document tree

1. Direct children: the .contents and .children attributes

.contents
A tag's .contents attribute returns the tag's children as a list, so we can use list indexing to get individual elements.
```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li.contents)     # [<a href="link1.html">first item</a>]
print(soup.li.contents[0])  # <a href="link1.html">first item</a>
```
.children
It does not return a list, but we can iterate over it to get all the children. If we print .children, we can see that it is a list iterator object.
```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.ul.children)  # <list_iterator object at 0x106388a20>
for child in soup.ul.children:
    print(child)
```
Output:

```html
<li class="item-0"><a href="link1.html">first item</a></li>


<li class="item-1"><a href="link2.html">second item</a></li>


<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>


<li class="item-1"><a href="link4.html">fourth item</a></li>


<li class="item-0"><a href="link5.html">fifth item</a></li>
```
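The blank lines in the output come from the newline text nodes that sit between the <li> tags — .children yields those NavigableStrings too. A small sketch that keeps only real tags (text nodes have their .name set to None):

```python
from bs4 import BeautifulSoup

html = """
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
</ul>
"""
soup = BeautifulSoup(html, "lxml")

# Keep only Tag children; skip the whitespace-only text nodes
tags = [child for child in soup.ul.children if child.name is not None]
print(tags)  # only the two <li> tags, no whitespace strings
```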
2. All descendants: the .descendants attribute

The .contents and .children attributes include only a tag's direct children. The .descendants attribute iterates recursively over all of a tag's descendants; as with .children, we loop over it to get the contents.

```python
for child in soup.ul.descendants:
    print(child)
```

Output:
```html
<li class="item-0"><a href="link1.html">first item</a></li>
<a href="link1.html">first item</a>
first item


<li class="item-1"><a href="link2.html">second item</a></li>
<a href="link2.html">second item</a>
second item


<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<a href="link3.html"><span class="bold">third item</span></a>
<span class="bold">third item</span>
third item


<li class="item-1"><a href="link4.html">fourth item</a></li>
<a href="link4.html">fourth item</a>
fourth item


<li class="item-0"><a href="link5.html">fifth item</a></li>
<a href="link5.html">fifth item</a>
fifth item
```
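As the output shows, .descendants walks the entire subtree — tags and text alike — so it always yields at least as many items as .contents. A quick comparison on a minimal fragment:

```python
from bs4 import BeautifulSoup

html = '<ul><li><a href="link1.html">first item</a></li></ul>'
soup = BeautifulSoup(html, "lxml")

print(len(soup.ul.contents))           # 1  (just the <li>)
print(len(list(soup.ul.descendants)))  # 3  (<li>, <a>, and the text node)
```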
Searching the document tree

1. find_all(name, attrs, recursive, text, **kwargs)

1) The name parameter

The name parameter finds all tags whose name matches; string objects (text nodes) are automatically ignored.

A. Passing a string
The simplest filter is a string. Pass a string to a search method and Beautiful Soup will find the content that exactly matches the string. The following example finds all the <span> tags in the document:

```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all('span'))  # [<span class="bold">third item</span>]
```
B. Passing a regular expression
If you pass in a regular expression, Beautiful Soup matches tag names using the expression's match() method. The next example finds all tags whose names start with s, which means the <span> tag will be found:
```python
from bs4 import BeautifulSoup
import re

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
for tag in soup.find_all(re.compile("^s")):
    print(tag)
# <span class="bold">third item</span>
```
C. Passing a list
If you pass in a list, Beautiful Soup returns the content matching any element of the list. The following code finds all the <a> tags and <span> tags in the document:
```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(["a", "span"]))
# [<a href="link1.html">first item</a>, <a href="link2.html">second item</a>, <a href="link3.html"><span class="bold">third item</span></a>, <span class="bold">third item</span>, <a href="link4.html">fourth item</a>, <a href="link5.html">fifth item</a>]
```
2) Keyword arguments

Keyword arguments filter on a tag's attributes; for example, href='link1.html' matches tags whose href attribute has exactly that value:
```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(href='link1.html'))  # [<a href="link1.html">first item</a>]
```
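One wrinkle: class is a reserved word in Python, so it cannot be passed as a keyword argument directly. Beautiful Soup accepts class_ instead, and find_all also takes an attrs dictionary. A sketch:

```python
from bs4 import BeautifulSoup

html = """
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
"""
soup = BeautifulSoup(html, "lxml")

# class_ avoids the reserved word
print(soup.find_all(class_="item-0"))                  # both item-0 <li> tags
# the attrs dict is equivalent
print(soup.find_all("li", attrs={"class": "item-1"}))  # the item-1 <li>
```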
3) The text parameter

The text parameter searches the document's string content. Like the name parameter, text accepts a string, a regular expression, or a list.

```python
from bs4 import BeautifulSoup
import re

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(text="first item"))                   # ['first item']
print(soup.find_all(text=["first item", "second item"]))  # ['first item', 'second item']
print(soup.find_all(text=re.compile("item")))             # ['first item', 'second item', 'third item', 'fourth item', 'fifth item']
```

CSS selectors

This is another way of searching, with much the same effect as the find_all method.

(1) By tag name
```python
from bs4 import BeautifulSoup

html = """
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.select('span'))  # [<span class="bold">third item</span>]
```

(2) By class name

```python
print(soup.select('.item-0'))
# [<li class="item-0"><a href="link1.html">first item</a></li>, <li class="item-0"><a href="link5.html">fifth item</a></li>]
```

(3) By id

```python
print(soup.select('#item-0'))  # []
```

(4) Combined lookup

```python
print(soup.select('li.item-0'))
# [<li class="item-0"><a href="link1.html">first item</a></li>, <li class="item-0"><a href="link5.html">fifth item</a></li>]
print(soup.select('li.item-0>a'))
# [<a href="link1.html">first item</a>, <a href="link5.html">fifth item</a>]
```

(5) By attribute

```python
print(soup.select('a[href="link1.html"]'))  # [<a href="link1.html">first item</a>]
```

(6) Getting the text
```python
for text in soup.select('li'):
    print(text.get_text())
"""
first item
second item
third item
fourth item
fifth item
"""
```
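For a crawler, the usual last step is pulling out both the link targets and their text. Combining select with attribute access, a small sketch:

```python
from bs4 import BeautifulSoup

html = """
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
</ul>
"""
soup = BeautifulSoup(html, "lxml")

# Collect (href, text) pairs for every link inside an <li>
links = [(a["href"], a.get_text()) for a in soup.select("li > a")]
print(links)  # [('link1.html', 'first item'), ('link2.html', 'second item')]
```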