[Basic Usage]


import requests
from bs4 import BeautifulSoup as bs
 
url = 'https://news.naver.com/main/read.nhn?\
mode=LSD&mid=shm&sid1=101&oid=018&aid=0004305437'
r = requests.get(url)
html = r.content  # raw bytes; BeautifulSoup detects the document encoding itself
 
soup = bs(html, "html.parser")
# Turning the HTML fetched via requests into a BeautifulSoup object lets us
# use BeautifulSoup's convenient methods to extract the data we want.
 
print('1. ', soup.title)
print('2. ', soup.title.name)
print('3. ', soup.title.string)
print('4. ', soup.title.parent.name)
print('5. ', soup.p)                 # returns the first p tag
print('6. ', soup.p['class'])        # a tag's attribute values can be read like a dict
print('7. ', soup.a)
print('8. ', soup.find_all('a'))     # returns a list of all a tags
print('9. ', soup.find(id="right.ranking_tab_100"))  # a tag can be found by its id value
print('10.', soup.get_text())        # extracts only the text

Output
 
1.  <title>이재용 부회장, 새해 첫 출장 中시안 4일 출국..반도체 사업 점검 : 네이버 뉴스</title>
2.  title
3.  이재용 부회장, 새해 첫 출장 中시안 4일 출국..반도체 사업 점검 : 네이버 뉴스
4.  meta
5.  <p class="head_channel_layer" style="display: none;">
<span class="head_channel_layer_text">해당 언론사가 주요기사로<br/>직접 선정한 기사입니다.</span>
<a class="head_channel_layer_link" href="https://news.naver.com/main/static/channelPromotion.html" target="_blank">언론사 편집판 바로가기</a>
<button class="head_channel_layer_close" type="button">닫기</button>
</p>
6.  ['head_channel_layer']
7.  <a href="#lnb" tabindex="1"><span>메인 메뉴로 바로가기</span></a>
8.  [<a href="#lnb" tabindex="1"><span>메인 메뉴로 바로가기</span></a>, 
<a href="#main_content" tabindex="2"><span>본문으로 바로가기</span></a>, 
<a class="h_logo nclicks(STA.naver)" href="https://www.naver.com/"> ... (omitted)
9.  <a aria-selected="false" class="nclicks(rig.ranking)" href="#" id="right.ranking_tab_100" onclick="return false;">정치</a>
10. 이재용 부회장, 새해 첫 출장 中시안 4일 출국..반도체 사업 점검 : 네이버 뉴스... (omitted)
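Note that soup.find returns only the first match, while soup.find_all returns a list of every match. BeautifulSoup also supports CSS selectors through select and select_one, which can express the same lookups; a minimal sketch reusing the soup object from above (the selector strings are illustrative assumptions for this page):

# select_one returns the first match (like find); select returns a list (like find_all)
title_tag = soup.select_one('title')    # same result as soup.title
all_links = soup.select('a')            # same result as soup.find_all('a')
# the id contains a dot, so an attribute selector is the safest way to match it
tab = soup.select_one('a[id="right.ranking_tab_100"]')
print(title_tag, len(all_links), tab)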

[Usage Example]

ex.1) Crawling only the href links of the a tags from a news article's HTML



From the news article URL above, crawl links 1 through 10 of the news-topic list (the a tags whose class is nclicks(rig.newstopic)).


import requests
from bs4 import BeautifulSoup as bs

url = 'https://news.naver.com/main/read.nhn?\
mode=LSD&mid=shm&sid1=101&oid=018&aid=0004305437'
r = requests.get(url)
html = r.content

soup = bs(html, "html.parser")
a_tags = soup.find_all("a", {'class': "nclicks(rig.newstopic)"})  # all news-topic links
count = 1
for a_tag in a_tags:
    print(count, ".", a_tag["href"])  # crawl the href value of each matched a tag
    count += 1

Output
1 . https://search.naver.com/search.naver?where=nexearch&query=%EC%97%B0%ED%9C%B4+%EB%A7%88%EC%A7%80%EB%A7%89+%EB%82%A0&ie=utf8&sm=nws_htk.nws
2 . https://search.naver.com/search.naver?where=nexearch&query=2%EC%B0%A8+%EB%B6%81%EB%AF%B8%ED%9A%8C%EB%8B%B4&ie=utf8&sm=nws_htk.nws
3 . https://search.naver.com/search.naver?where=nexearch&query=%EC%8B%AC%EC%84%9D%ED%9D%AC+%EB%A9%94%EB%AA%A8&ie=utf8&sm=nws_htk.nws
4 . https://search.naver.com/search.naver?where=nexearch&query=%EC%9E%90%EC%A0%95%EC%AF%A4+%ED%95%B4%EC%86%8C&ie=utf8&sm=nws_htk.nws
5 . https://search.naver.com/search.naver?where=nexearch&query=12%EC%9D%BC+%EA%B2%80%EC%B0%B0+%EC%86%8C%ED%99%98%EC%A1%B0%EC%82%AC&ie=utf8&sm=nws_htk.nws
6 . https://search.naver.com/search.naver?where=nexearch&query=%EA%B5%AC%EC%A0%9C%EC%97%AD+%EB%B0%A9%EC%97%AD+%EC%B4%9D%EB%A0%A5&ie=utf8&sm=nws_htk.nws
7 . https://search.naver.com/search.naver?where=nexearch&query=%ED%95%9C%EA%B5%AD%EB%8B%B9+%EC%A0%84%EB%8B%B9%EB%8C%80%ED%9A%8C&ie=utf8&sm=nws_htk.nws
8 . https://search.naver.com/search.naver?where=nexearch&query=%ED%94%8C%EB%9D%BC%EC%8A%A4%ED%8B%B1+%ED%94%84%EB%A6%AC+%EC%B1%8C%EB%A6%B0%EC%A7%80+%EB%8F%99%EC%B0%B8&ie=utf8&sm=nws_htk.nws
9 . https://search.naver.com/search.naver?where=nexearch&query=%EB%B2%A0%ED%8A%B8%EB%82%A8%EC%84%9C+%EA%B0%9C%EC%B5%9C&ie=utf8&sm=nws_htk.nws
10 . https://search.naver.com/search.naver?where=nexearch&query=%EA%B5%AD%EC%99%B8+%EC%98%81%ED%96%A5%EC%9D%B4+75%25&ie=utf8&sm=nws_htk.nws
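The manual counter can be replaced with enumerate, and using a_tag.get("href") avoids a KeyError if an a tag happens to lack an href attribute; a minimal sketch reusing the a_tags list from above:

# enumerate(..., start=1) numbers the links from 1, replacing the manual counter
for i, a_tag in enumerate(a_tags, start=1):
    href = a_tag.get("href")  # returns None instead of raising if href is missing
    if href:
        print(i, ".", href, "-", a_tag.get_text(strip=True))  # link plus its visible text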

