Python-urllib

httpbin.org is a website for practicing and simulating crawler requests.

import urllib.request
import urllib.parse
import urllib.error

# Make a GET request

response = urllib.request.urlopen("http://www.baidu.com")
print(response.read().decode())
# decode() decodes the response bytes, defaulting to utf-8
# urlopen opens the URL and returns the response data

A screenshot of the returned content is recorded here; useful data can be extracted from it, and how to parse the content will be covered in a later note.
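Parsing will be covered in detail later; as a quick taste, the standard-library `html.parser` can already pull a few things out of a page. The snippet below is a minimal sketch (the sample HTML string stands in for a real `response.read().decode()`) that extracts the `<title>` text:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collects the text inside the <title> tag."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Made-up stand-in for response.read().decode()
html = "<html><head><title>Example Page</title></head><body>hi</body></html>"
parser = TitleParser()
parser.feed(html)
print(parser.title)  # Example Page
```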

# Make a POST request to httpbin.org; POST requires packaging the data, and is commonly used to simulate a user login

data = bytes(urllib.parse.urlencode({"hellow": "world"}), encoding="utf-8")
response = urllib.request.urlopen("http://httpbin.org/post", data=data)
print(response.read().decode())
# data is the constructed payload

{
  "args": {},
  "data": "",
  "files": {},
  "form": {
    "hellow": "world"
  },
  "headers": {
    "Accept-Encoding": "identity",
    "Content-Length": "12",
    "Content-Type": "application/x-www-form-urlencoded",
    "Host": "httpbin.org",
    "User-Agent": "Python-urllib/3.8",
    "X-Amzn-Trace-Id": "Root=1-5ed90cc5-e689796066c3211dc6250287"
  },
  "json": null,
  "origin": "1.85.61.61",
  "url": "http://httpbin.org/post"
}

Process finished with exit code 0
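Note the `Content-Length: 12` in the headers above: it is exactly the length of the urlencoded payload. A small offline check of this (no request needed):

```python
import urllib.parse

# Same payload as above, encoded in application/x-www-form-urlencoded format
data = bytes(urllib.parse.urlencode({"hellow": "world"}), encoding="utf-8")
print(data)       # b'hellow=world'
print(len(data))  # 12, matching the Content-Length header echoed by httpbin
```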

# Make a GET request to httpbin.org and handle timeouts

try:
    response = urllib.request.urlopen("http://httpbin.org/get", timeout=0.01)
    print(response.read().decode())
except urllib.error.URLError as e:
    print(e)
# timeout sets the time limit; 0.01 s will cause the request to time out
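When the timeout fires, the `URLError` wraps a `socket.timeout` as its `reason` attribute, so a timeout can be told apart from other failures. A sketch of that check (the error is constructed by hand here so the snippet runs offline; `urlopen` raises the same shape on a real timeout):

```python
import socket
import urllib.error

# Hand-built error for illustration only
e = urllib.error.URLError(socket.timeout("timed out"))

if isinstance(e.reason, socket.timeout):
    print("request timed out")  # this branch runs
else:
    print("some other URL error:", e.reason)
```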

# Visiting Douban

Accessing the Douban URL directly, without any preparation, will be detected by its anti-crawler mechanism.

url = "https://www.douban.com"

# Build the request object
req = urllib.request.Request(url=url, method="POST")
response = urllib.request.urlopen(req)
print(response.read().decode())

urllib.error.HTTPError: HTTP Error 418:

It immediately raises a 418 error.

The HTTP 418 I'm a teapot client error response code indicates that the server refuses to brew coffee because it is a teapot. This error is a reference to Hyper Text Coffee Pot Control Protocol which was an April Fools' joke in 1998.
418 I'm a teapot — literally, "I am a teapot".

The anti-crawler mechanism is triggered because our User-Agent is "Python-urllib/3.8" rather than a normal browser's User-Agent; after changing it, the request goes through.

url = "http://www.douban.com"
data = bytes(urllib.parse.urlencode({"hellow": "world"}), encoding="utf-8")
heads = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36"}
# Build the request object with a browser User-Agent
req = urllib.request.Request(url=url, data=data, headers=heads)
response = urllib.request.urlopen(req)
print(response.read().decode())
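Note that this last request goes out as a POST simply because `data` is supplied: `Request` infers the method from whether a body is present when `method` is not given. A quick offline check of that behavior:

```python
import urllib.request
import urllib.parse

data = bytes(urllib.parse.urlencode({"hellow": "world"}), encoding="utf-8")

# No body -> GET; body present -> POST
req_without_data = urllib.request.Request("http://www.douban.com")
req_with_data = urllib.request.Request("http://www.douban.com", data=data)

print(req_without_data.get_method())  # GET
print(req_with_data.get_method())     # POST
```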