Sina Weibo Crawler Practice

hxy    2018-08-29 13:48

Recently I wanted to get hold of some real social-network data, so I first tried Facebook for Developers, which offers a fairly complete set of APIs:

https://developers.facebook.com/tools/explorer/

Unfortunately, the friendlists endpoint was discontinued on April 4, 2018. On top of that, Facebook is currently unreachable from my server.

So I took a look at the Google+ API instead: https://developers.google.com/+/web/

I tested the get method with my own id, 101266749844321077526, and received the following response.

{
 "kind": "plus#person",
 "etag": "\"jb1Xzanox6i8Zyse4DcYD8sZqy0/nwifmdOUqdBZRErC-C29_Ygbw9A\"",
 "gender": "male",
 "emails": [
  {
   "value": "huangxinyu.h@gmail.com",
   "type": "account"
  }
 ],
 "objectType": "person",
 "id": "101266749844321077526",
 "displayName": "Xinyu Huang",
 "name": {
  "familyName": "Huang",
  "givenName": "Xinyu"
 },
 "url": "https://plus.google.com/101266749844321077526",
 "image": {
  "url": "https://lh3.googleusercontent.com/-FkRHLFgCH3k/AAAAAAAAAAI/AAAAAAAAAC8/s8HP3fdnB24/photo.jpg?sz=50",
  "isDefault": false
 },
 "isPlusUser": true,
 "language": "zh_CN",
 "circledByCount": 0,
 "verified": false
}
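
For reference, the same lookup can be reproduced from Python. This is a minimal sketch, assuming an API key created in the Google API Console (YOUR_API_KEY is a placeholder):

import requests

# Google+ people.get endpoint; YOUR_API_KEY is a placeholder.
user_id = "101266749844321077526"
resp = requests.get("https://www.googleapis.com/plus/v1/people/" + user_id,
                    params={"key": "YOUR_API_KEY"})
print(resp.json())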

So for now I had to fall back on the domestic option, Sina Weibo.

Sina Weibo does offer an official API, but it requires an application. I found the process quite a hassle, and my application still hasn't cleared review. Fortunately, @飞鸟2010 has published a crawler that scrapes the data; I tested it and it runs, so I'm documenting the steps here.

1. Install and start the MySQL service (if you'd rather not install it, you can use the portable version bundled with XAMPP).

2. Create the database.

create database weibo_spider DEFAULT CHARSET utf8 COLLATE utf8_general_ci;


3. Create the table.

CREATE TABLE `following` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `followingId` varchar(255) DEFAULT NULL,
  `followingName` varchar(255) DEFAULT NULL,
  `followingUrl` varchar(2027) DEFAULT NULL,
  `followersCount` varchar(255) DEFAULT NULL,
  `followCount` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
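
Before running the crawler, it's worth sanity-checking the connection and schema from Python. A minimal sketch, reusing the connection parameters that the script in step 4 assumes (localhost:53306, root/123456):

import pymysql

# Quick check that the database and table exist;
# connection parameters mirror the crawler script below.
connection = pymysql.connect(host='localhost', port=53306, user='root',
                             password='123456', db='weibo_spider',
                             charset='utf8')
try:
    with connection.cursor() as cursor:
        cursor.execute("DESCRIBE `following`")
        for column in cursor.fetchall():
            print(column)
finally:
    connection.close()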

4. Create a Python script and paste in the following code.

# -*- coding: utf-8 -*-
"""
Created on Thu Aug  3 20:59:53 2017

@author: Administrator
"""

import requests
import json
import time
import random
import pymysql.cursors


def crawlDetailPage(url,page):
    # Fetch the JSON payload for this page of the follow list
    req = requests.get(url)
    jsondata = req.text
    data = json.loads(jsondata)

    # Each card in the response corresponds to one followed user
    print(data)
    content = data['data']['cards']
    #print(content)

    # Loop over the cards and print each followed user's details
    for i in content:
        followingId = i['user']['id']
        followingName = i['user']['screen_name']
        followingUrl = i['user']['profile_url']
        followersCount = i['user']['followers_count']
        followCount = i['user']['follow_count']

        print("---------------------------------")
        print("User ID: {}".format(followingId))
        print("Screen name: {}".format(followingName))
        print("Profile URL: {}".format(followingUrl))
        print("Followers count: {}".format(followersCount))
        print("Following count: {}".format(followCount))



        '''
        Database operations
        '''

        # Open a database connection (a new one per record)
        connection  = pymysql.connect(host = 'localhost',
                                  port = 53306,
                                  user = 'root',
                                  password = '123456',
                                  db = 'weibo_spider',
                                  charset = 'utf8')
        try:
            # Get a cursor
            with connection.cursor() as cursor:
                # Build the INSERT statement
                sql = "insert into `following` (`followingId`,`followingName`,`followingUrl`,`followersCount`,`followCount`) values (%s,%s,%s,%s,%s)"

                # Execute the statement with this user's values
                cursor.execute(sql,(followingId,followingName,followingUrl,followersCount,followCount))

                # Commit the transaction
                connection.commit()
        finally:
            connection.close()


for i in range(1,11):
    print("Fetching page {} of the follow list:".format(i))
    # JSON endpoint for this user's follow list
    url = "https://m.weibo.cn/api/container/getSecond?containerid=1005052164843961_-_FOLLOWERS&page=" + str(i)
    crawlDetailPage(url,i)
    # Random sleep between pages to avoid being rate-limited
    t = random.randint(31,33)
    print("Sleeping for {}s".format(t))
    time.sleep(t)
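
Note that the containerid in the URL appears to be the fixed prefix 100505 followed by the target user's numeric id, with a _-_FOLLOWERS suffix. To crawl someone else's follow list, a small helper along those lines (this format is an assumption inferred from the URL above; the uid is the one from this example):

# Assumption inferred from the URL above:
# containerid = "100505" + numeric uid + "_-_FOLLOWERS"
def followListUrl(uid, page):
    containerid = "100505{}_-_FOLLOWERS".format(uid)
    return ("https://m.weibo.cn/api/container/getSecond?containerid="
            + containerid + "&page=" + str(page))

print(followListUrl(2164843961, 1))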


5. Run the script and check the database.
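
To see what was stored, you can query the table from Python as well; a minimal sketch using the same connection parameters as the script:

import pymysql

# Print the rows the crawler inserted; parameters mirror the script above.
connection = pymysql.connect(host='localhost', port=53306, user='root',
                             password='123456', db='weibo_spider',
                             charset='utf8')
try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT `followingName`, `followersCount`, `followCount` FROM `following`")
        for row in cursor.fetchall():
            print(row)
finally:
    connection.close()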
