[python] Crawler sample

A simple breadth-first web crawler for Python 2: starting from a list of seed URIs, it follows every link it finds, collects the distinct URIs it reaches, and writes the sorted result to a CSV file.
```python
# encoding: utf-8
import codecs
import socket
from collections import deque
import urllib2
import urlparse

from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3 (Python 2)


def get_links(uri, startswith=""):
    """Return the list of referenced URIs (without duplicates) found in the
    document returned for the input URI."""
    results = set()
    try:
        page = urllib2.urlopen(uri)
        soup = BeautifulSoup(page)
        for anchor in soup.findAll('a'):  # every <a href="..."> element
            try:
                link = anchor['href']
                if not link.startswith("javascript:") \
                        and not link.startswith("mailto:") \
                        and not link.startswith("skype:"):
                    link = urlparse.urljoin(uri, link)  # expand relative URIs
                    if link.startswith(startswith):
                        results.add(link)
            except KeyError:
                print "Missing href attribute in %s" % anchor
    except Exception:
        pass  # skip pages that cannot be fetched or parsed
    return list(results)


def crawl(seed_uris, timeout=5, limit=1000, debug=True, startswith=""):
    """Return a list of URIs found by following all links, breadth-first,
    from the given list of seed URIs."""
    queue = deque(seed_uris)
    results = seed_uris[:]
    socket.setdefaulttimeout(timeout)  # apply the timeout to every request
    while len(queue) > 0 and len(results) < limit:
        uri = queue.popleft()
        if debug:
            print "Analyzing %s" % uri
        links = get_links(uri, startswith)
        new_links = [link for link in links if link not in results]
        if debug:
            print "%i links found, of which %i are new" % (len(links), len(new_links))
        results.extend(new_links)
        queue.extend(new_links)
        if debug:
            print "Status: %i URIs known, %i URIs in queue" % (len(results), len(queue))
    if debug:
        print "Completed."
        print "URI count after analysing all linked pages: %i distinct URLs" % len(results)
    results.sort()
    return results


def main():
    # SEED_URIS = ["http://www.heppnetz.de/", "http://www.unibw.de"]
    SEED_URIS = ["http://www.cafe24.com/"]
    results = crawl(seed_uris=SEED_URIS, limit=1000, startswith="")
    f = codecs.open('uris.csv', 'wt', 'utf-8')  # write results as UTF-8 text
    for line in results:
        f.write(line + "\n")
    f.close()


if __name__ == '__main__':
    main()
```
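The listing above runs only on Python 2: urllib2, urlparse, and the original BeautifulSoup module do not exist on Python 3. For reference, here is a minimal sketch of the same breadth-first crawl on Python 3; it assumes the beautifulsoup4 package is installed (`pip install beautifulsoup4`), and everything beyond the standard-library calls simply mirrors the structure of the sample above.

```python
# encoding: utf-8
# Python 3 sketch of the same breadth-first crawler.
# Assumption: beautifulsoup4 is installed (pip install beautifulsoup4).
import socket
from collections import deque
from urllib.error import URLError
from urllib.parse import urljoin
from urllib.request import urlopen

from bs4 import BeautifulSoup


def get_links(uri, startswith=""):
    """Return the deduplicated list of URIs referenced by the page at uri."""
    results = set()
    try:
        with urlopen(uri) as page:
            soup = BeautifulSoup(page, "html.parser")
        for anchor in soup.find_all("a", href=True):  # only <a> tags with href
            link = anchor["href"]
            if not link.startswith(("javascript:", "mailto:", "skype:")):
                link = urljoin(uri, link)  # expand relative URIs
                if link.startswith(startswith):
                    results.add(link)
    except (URLError, socket.timeout, ValueError):
        pass  # skip pages that cannot be fetched or parsed
    return list(results)


def crawl(seed_uris, timeout=5, limit=1000, startswith=""):
    """Follow links breadth-first from the seed URIs, up to limit URIs."""
    socket.setdefaulttimeout(timeout)  # apply the timeout to every request
    queue = deque(seed_uris)
    results = list(seed_uris)
    while queue and len(results) < limit:
        uri = queue.popleft()
        new_links = [link for link in get_links(uri, startswith)
                     if link not in results]
        results.extend(new_links)
        queue.extend(new_links)
    return sorted(results)


if __name__ == "__main__":
    with open("uris.csv", "w", encoding="utf-8") as f:
        f.write("\n".join(crawl(["http://www.cafe24.com/"], limit=1000)))
```

As with the original, pass a non-empty `startswith` prefix to keep the crawl inside a single site; otherwise it will wander to every domain it can reach until `limit` is hit.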