Web scraping DuckDuckGo, but the links come back in the wrong format.

I wrote a Python 3 script using the BeautifulSoup library. It queries the DuckDuckGo search engine with the URL https://duckduckgo.com/?q=searchterm and then shows me all the websites on the first results page.
Here is the code, which runs without errors:
import requests
from bs4 import BeautifulSoup

r = requests.get('https://duckduckgo.com/html/?q=test')
soup = BeautifulSoup(r.text, 'html.parser')
results = soup.find_all('a', attrs={'class':'result__a'})

for link in results:
    url = link['href']
    print(url)

The problem is that the URLs I get are not in standard format (e.g. https://www.google.com). Instead, every URL comes back as a redirect-style search query.
Here is what I mean when I search for test on DuckDuckGo:
/l/?kh=-1&uddg=https%3A%2F%2Fduckduckgo.com%2Fy.js%3Fu3%3Dhttps%253A%252F%252Fr.search.yahoo.com%252Fcbclk%252FdWU9MEQwQzVENEZDNDU0NDlEMyZ1dD0xNTM4MzE4MTI3MzE5JnVvPTc3NTg0MzM1OTYxMTUyJmx0PTImZXM9ZVBGTU9iWUdQUy42cVdRVQ%252D%252D%252FRV%253D2%252FRE%253D1538346927%252FRO%253D10%252FRU%253Dhttps%25253a%25252f%25252fwww.bing.com%25252faclick%25253fld%25253dd3peyDLOVSWraifG78tpZ1GjVUCUzCMDkx%252DfJrFXeY2IfiXIwUmngX%252DYKvZWQ6q7hPHC_3kc%252DzBWS1SE015Or2c3CncFMVc9OjVV5OyB2kJqXdRsOzRnaCGy8gYCPuival0gLe7WCkfk_%252DAVKTWmYxranfh02ficTC7i6oC38n2q9U9KPe%252526u%25253dhttps%2525253a%2525252f%2525252fwww.dotdrugconsortium.com%2525252f%2525253futm_source%2525253dbing%25252526utm_medium%2525253dcpc%25252526utm_campaign%2525253dadcenter%25252526utm_term%2525253ddottest%252526rlid%25253d590f68ae34ff126ed0e3331eebd0c4fb%252FRK%253D2%252FRS%253DeKe3rY19jdg9vb_ayBSboMzPU1g%252D%26ad_provider%3Dyhs%26vqd%3D3%2D12729109948094676568590283448597440227%2D122882305188756590950269013545136161936
/l/?kh=-1&uddg=https%3A%2F%2Fwww.merriam%2Dwebster.com%2Fdictionary%2Ftest
/l/?kh=-1&uddg=https%3A%2F%2Fwww.speedtest.net%2F
/l/?kh=-1&uddg=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FTest
/l/?kh=-1&uddg=https%3A%2F%2Fwww.dictionary.com%2Fbrowse%2Ftest
/l/?kh=-1&uddg=https%3A%2F%2Fwww.thefreedictionary.com%2Ftest
/l/?kh=-1&uddg=https%3A%2F%2Fwww.16personalities.com%2F
/l/?kh=-1&uddg=https%3A%2F%2Fwww.speakeasy.net%2Fspeedtest%2F
/l/?kh=-1&uddg=http%3A%2F%2Fwww.humanmetrics.com%2Fcgi%2Dwin%2Fjtypes2.asp
/l/?kh=-1&uddg=https%3A%2F%2Fwww.typingtest.com%2F%3Fab
/l/?kh=-1&uddg=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FTest_cricket
/l/?kh=-1&uddg=https%3A%2F%2Fged.com%2F
/l/?kh=-1&uddg=http%3A%2F%2Fspeedtest.xfinity.com%2F
/l/?kh=-1&uddg=https%3A%2F%2Fwww.16personalities.com%2Ffree%2Dpersonality%2Dtest
/l/?kh=-1&uddg=https%3A%2F%2Fwww.merriam%2Dwebster.com%2Fthesaurus%2Ftest
/l/?kh=-1&uddg=http%3A%2F%2Ftest%2Dipv6.com%2F
/l/?kh=-1&uddg=https%3A%2F%2Fwww.thesaurus.com%2Fbrowse%2Ftest
/l/?kh=-1&uddg=http%3A%2F%2Fspeedtest.att.com%2Fspeedtest%2F
/l/?kh=-1&uddg=http%3A%2F%2Fspeedtest.googlefiber.net%2F
/l/?kh=-1&uddg=http%3A%2F%2Ftest.salesforce.com%2F
/l/?kh=-1&uddg=https%3A%2F%2Fmy.uscis.gov%2Fprep%2Ftest%2Fcivics
/l/?kh=-1&uddg=https%3A%2F%2Fwww.tests.com%2F
/l/?kh=-1&uddg=https%3A%2F%2Fen.wiktionary.org%2Fwiki%2FTest
/l/?kh=-1&uddg=https%3A%2F%2Ftestmy.net%2F
/l/?kh=-1&uddg=https%3A%2F%2Fwww.google.com%2F
/l/?kh=-1&uddg=https%3A%2F%2Fwww.queendom.com%2Ftests%2Findex.htm
/l/?kh=-1&uddg=http%3A%2F%2Fwww.yourdictionary.com%2Ftest
/l/?kh=-1&uddg=http%3A%2F%2Fwww.testout.com%2F
/l/?kh=-1&uddg=https%3A%2F%2Fimplicit.harvard.edu%2Fimplicit%2Ftakeatest.html
/l/?kh=-1&uddg=http%3A%2F%2Fwww.act.org%2Fcontent%2Fact%2Fen%2Fproducts%2Dand%2Dservices%2Fthe%2Dact.html
/l/?kh=-1&uddg=https%3A%2F%2Fwww.ets.org%2Fgre%2F

I'd like to know if there is a way to display all of these URLs in standard format.
Edit: this is not a duplicate of my other question, because in that one I was told that the PyCurl library could not fetch what I wanted (it could not capture the JavaScript-generated code in the URLs). Here my code is working, but the output is not what I expected.

Possible duplicate of Pycurl javascript. - deadvoid
The question is different now: I found that what I wanted could not be done with the PyCurl library, so I switched libraries entirely. If that breaks any rules, I don't mind updating the other link (I'm not very familiar with the forum rules), and I apologize for any trouble. Besides, the problem here is completely different, since my code is working but the output is not quite what I expected. - Lok Ridgmont
It's not about which library you use; it's about scraping the data instead of using the API, which, as I pointed out, is readily available. - deadvoid
I have no objection to using the DuckDuckGo API, but unfortunately this is a small college project and my teacher insists on web-scraping techniques, so the API isn't actually an option. Also, my code runs fine; it's only the output format that isn't what I want. - Lok Ridgmont
1 Answer


Python's urllib.parse library can help you here:

from bs4 import BeautifulSoup
import urllib.parse
import requests

r = requests.get('https://duckduckgo.com/html/?q=test')
soup = BeautifulSoup(r.text, 'html.parser')
results = soup.find_all('a', attrs={'class':'result__a'}, href=True)

for link in results:
    url = link['href']
    o = urllib.parse.urlparse(url)
    d = urllib.parse.parse_qs(o.query)
    print(d['uddg'][0])

This will display output beginning with:

http://www.speedtest.net/
https://www.merriam-webster.com/dictionary/test
https://en.wikipedia.org/wiki/Test
https://www.thefreedictionary.com/test
https://www.dictionary.com/browse/test

First use the urlparse() function to split each URL into its components, then take the query string from the result and pass it to parse_qs() for further processing. You can then extract the target link under the uddg key.
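To see the two steps in isolation, here is a minimal sketch applied to one of the redirect-style hrefs shown in the question (the sample path is copied from the output above):

```python
import urllib.parse

# One of the redirect-style hrefs from the question's output
href = '/l/?kh=-1&uddg=https%3A%2F%2Fwww.speedtest.net%2F'

# urlparse() splits the (relative) URL; .query holds 'kh=-1&uddg=...'
parts = urllib.parse.urlparse(href)

# parse_qs() splits the query into a dict of lists and percent-decodes
# each value, so the 'uddg' entry is already a clean URL
params = urllib.parse.parse_qs(parts.query)
print(params['uddg'][0])  # https://www.speedtest.net/
```

Note that parse_qs() does the percent-decoding for you, so no separate call to urllib.parse.unquote() is needed.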


Brilliant solution!! - SIM
