
How to Write a Crawler in Python to Scrape Baidu Baike Pages

This article describes how to use regular expressions in Python to write a crawler that scrapes Baidu Baike (Baidu Encyclopedia) pages.

The complete example follows; it is shared for your reference.
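Before the full script, here is a minimal sketch of the core idea: a non-greedy capture group `(.*?)` extracts the text between an opening and closing tag, stopping at the first close. The HTML fragment below is made up for illustration, not actual Baidu Baike markup:

```python
import re

# Hypothetical HTML fragment standing in for a fetched page
sample = '<h1 class="title" id="42">Python</h1><p>An interpreted language.</p>'

# (.*?) is non-greedy: it captures as little as possible,
# so the match ends at the first </h1> rather than the last one
title = re.findall(r'<h1 class="title" id="\d+">(.*?)</h1>', sample)
print(title[0])  # Python
```

A greedy `(.*)` would instead run to the last matching close tag on the line, which is rarely what you want when scraping.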

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Filename: get_baike.py
import urllib2
import re

def getHtml(url, time=10):
    # Fetch the page and return its raw bytes
    response = urllib2.urlopen(url, timeout=time)
    html = response.read()
    response.close()
    return html

def clearBlank(html):
    # Strip carriage returns, newlines and tabs, then collapse runs of spaces
    if len(html) == 0:
        return ''
    html = re.sub(r'[\r\n\t]', '', html)
    while html.find('  ') != -1:
        html = html.replace('  ', ' ')
    return html

if __name__ == '__main__':
    html = getHtml('http://baike.baidu.com/view/4617031.htm', 10)
    html = html.decode('gb2312', 'replace').encode('utf-8')  # transcode to UTF-8
    title_reg = r'<h1 class="title" id="[\d]+">(.*?)</h1>'
    content_reg = r'<div class="card-summary-content">(.*?)</p>'
    title = re.compile(title_reg).findall(html)
    content = re.compile(content_reg).findall(html)
    title[0] = re.sub(r'<[^>]*?>', '', title[0])      # strip inner HTML tags
    content[0] = re.sub(r'<[^>]*?>', '', content[0])
    print title[0]
    print '#######################'
    print content[0]
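The script above targets Python 2 (`urllib2`, print statements). On Python 3 the same approach might look like the sketch below; `urllib2` is replaced by `urllib.request`, and the page is decoded once to a `str` instead of decode-then-re-encode. The URL and regexes are taken from the original listing; whether today's Baidu Baike markup still matches them is not guaranteed:

```python
import re
import urllib.request

def get_html(url, timeout=10):
    """Fetch a page and return its raw bytes."""
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.read()

def extract(html):
    """Pull the title and summary out of the decoded page with non-greedy regexes."""
    title = re.findall(r'<h1 class="title" id="\d+">(.*?)</h1>', html)
    content = re.findall(r'<div class="card-summary-content">(.*?)</p>', html)
    strip_tags = lambda s: re.sub(r'<[^>]*?>', '', s)  # drop any inner HTML tags
    return strip_tags(title[0]), strip_tags(content[0])

if __name__ == '__main__':
    raw = get_html('http://baike.baidu.com/view/4617031.htm')
    html = raw.decode('gb2312', 'replace')  # the page was served as GB2312
    title, content = extract(html)
    print(title)
    print('#######################')
    print(content)
```

For anything beyond a quick one-off, an HTML parser such as the standard library's `html.parser` is more robust than regexes, since it tolerates attribute reordering and nesting changes.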
 

Hopefully this walkthrough of scraping Baidu Baike with a Python crawler proves helpful for your own Python programming; feel free to use it as a reference.
