Making URL/API requests in Python: a summary (web-scraping basics)

    Tech  2024-10-25

    The requests library:

    GET request:

    import requests
    from bs4 import BeautifulSoup

    second_html = requests.get(sel_url, headers=headers)
    soup2 = BeautifulSoup(second_html.text, "html.parser")
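The snippet above depends on an external page and the variables sel_url and headers defined elsewhere. As a self-contained sketch of what the GET side involves, the stdlib version below builds the same kind of request by hand (the URL, parameters, and User-Agent string are hypothetical); requests.get does this URL encoding for you via its params argument.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical endpoint and query parameters, for illustration only.
base_url = "https://example.com/search"
params = {"q": "python", "page": 2}

# requests.get(base_url, params=params) builds this URL internally.
full_url = f"{base_url}?{urlencode(params)}"
req = Request(full_url, headers={"User-Agent": "demo-scraper/0.1"})

print(req.full_url)      # https://example.com/search?q=python&page=2
print(req.get_method())  # GET
```

No network call happens until the request is actually opened, which is why this sketch is safe to run anywhere.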

    POST request with a JSON body:

    import json
    import requests

    # requests can also do this in one step: requests.post(url, json=values)
    res = requests.post(url, data=json.dumps(values),
                        headers={'Content-Type': 'application/json'})
    print(res.json())
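To see what that call actually puts on the wire without hitting a server, here is a stdlib-only sketch (the URL and payload are made up for illustration): the dict is serialized with json.dumps, sent as the request body, and the Content-Type header tells the server how to parse it.

```python
import json
from urllib.request import Request

# Hypothetical payload; requests.post(url, json=values) encodes it the same way.
values = {"name": "alice", "age": 30}
body = json.dumps(values).encode("utf-8")

req = Request(
    "https://example.com/api/users",  # illustrative URL only
    data=body,
    headers={"Content-Type": "application/json"},
)

# Attaching a data body switches urllib's default method to POST.
print(req.get_method())  # POST
print(req.data)
```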

    GET and POST requests through a Session:

     

    import requests

    # Construct a Session
    session = requests.Session()

    # A plain GET through the session
    data = session.get(geoUrl, timeout=1)
    print(data.json())

    # Send the login request through the session; afterwards the session
    # stores the cookies from the response, so later requests stay logged in.
    # Inspect them with print(session.cookies.get_dict()).
    resp = session.post(login_url, data=login_data)
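What a requests.Session keeps between calls is essentially a cookie jar attached to every request. The stdlib sketch below shows the same mechanism with http.cookiejar; the cookie values are fabricated here to illustrate the idea, whereas a real opener.open() call would populate the jar from Set-Cookie response headers automatically.

```python
import http.cookiejar
import urllib.request

# A CookieJar shared by an opener persists cookies across requests,
# which is what requests.Session does for you behind the scenes.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar)
)

# Simulate the cookie a login response would set (illustration only).
cookie = http.cookiejar.Cookie(
    version=0, name="sessionid", value="abc123", port=None,
    port_specified=False, domain="example.com", domain_specified=True,
    domain_initial_dot=False, path="/", path_specified=True,
    secure=False, expires=None, discard=True, comment=None,
    comment_url=None, rest={},
)
jar.set_cookie(cookie)

# Every later request made through `opener` now carries this cookie.
print([c.name for c in jar])  # ['sessionid']
```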

     

    The urllib2 library (Python 2 only):

    import json
    import urllib2  # Python 3: use urllib.request instead

    data = urllib2.urlopen(geoUrl)
    hjson = json.loads(data.read())  # parse the JSON response into a Python object
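urllib2 only exists on Python 2; on Python 3 the equivalent call is urllib.request.urlopen. The parsing step is the same either way, so the sketch below exercises just that step against a canned response body (the JSON payload is invented; geoUrl is assumed to return something of this shape).

```python
import io
import json

# Stand-in for the bytes urllib.request.urlopen(geoUrl).read() would return.
fake_response = io.BytesIO(b'{"lat": 39.9, "lng": 116.4}')

# json.loads accepts bytes directly on Python 3.6+.
hjson = json.loads(fake_response.read())
print(hjson["lat"])  # 39.9
```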

     

    Recap: requests.get then res.json() for GET; requests.post with a JSON body for POST; a requests.Session for cookie-persisting GET/POST; urllib2.urlopen (legacy, Python 2) for the old-style API.