Mining 7 days of tweets from the Twitter API with Python and Tweepy

I'm using Python, Tweepy, Django, Celery, and Django REST Framework to mine seven days' worth of tweets from the Twitter API.

I send a request every minute using Celery Beat and store the collected data in a PostgreSQL database via the Django ORM.
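For context, the Beat wiring looks roughly like this. This is a minimal sketch, not the exact config: it assumes the Celery app reads Django settings with the CELERY_ namespace, and the schedule entry name is made up for illustration; the task name 'get_tweets' matches the @shared_task(name='get_tweets') declaration in tasks.py below.

# settings.py (sketch)
from datetime import timedelta

CELERY_BEAT_SCHEDULE = {
    'get-tweets-every-minute': {
        # Matches @shared_task(name='get_tweets') in tasks.py.
        'task': 'get_tweets',
        # Fire once per minute, as described above.
        'schedule': timedelta(minutes=1),
    },
}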

To make sure the API doesn't keep sending the same 100 tweets on every call, I check the database for min(tweet.id) and set it as the max_id parameter before each new request.
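For reference, the same lookup can also be pushed into the database instead of loading every row into Python. A minimal sketch, assuming Django 1.10+ for Cast; the cast to an integer matters because tweet_id is stored as a CharField, and comparing the ids as strings would give a lexicographic minimum:

from django.db.models import BigIntegerField, Min
from django.db.models.functions import Cast

from .models import Tweet

# Compute the smallest tweet_id in the database rather than in Python.
# Cast avoids string comparison, since tweet_id is a CharField.
max_id = Tweet.objects.aggregate(
    oldest=Min(Cast('tweet_id', output_field=BigIntegerField()))
)['oldest']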

I've run into a problem: once I have seven days of tweets, how do I reset this max_id?

models.py

from django.db import models


class Tweet(models.Model):
    # The tweet's id string doubles as the primary key.
    tweet_id = models.CharField(
        max_length=200,
        unique=True,
        primary_key=True
    )
    tweet_date = models.DateTimeField()
    tweet_source = models.TextField()
    tweet_favorite_cnt = models.CharField(max_length=200)
    tweet_retweet_cnt = models.CharField(max_length=200)
    tweet_text = models.TextField()

    def __str__(self):
        return self.tweet_id + '  |  ' + str(self.tweet_date)

tasks.py

import tweepy

from datetime import datetime, timedelta

from celery import shared_task
from django.db import IntegrityError

from .models import Tweet

# CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, and ACCESS_TOKEN_SECRET are
# assumed to be defined elsewhere (e.g. in settings or environment variables).
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
# Instantiate an instance of the API class from the tweepy library.
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)

@shared_task(name='cleanup')
def cleanup():
    """
    Check database for records older than 7 days.
    Delete them if they exist.
    """
    Tweet.objects.filter(tweet_date__lte=datetime.now() - timedelta(days=7)).delete()

@shared_task(name='get_tweets')
def get_tweets():
    """Get some tweets from the twitter api and store them to the db."""
    # Subtasks
    chain = cleanup.s()
    chain()
    # Check for the minimum tweet_id and set it as max_id.
    # This ensures the API call doesn't keep getting the same tweets.
    max_id = min([tweet.tweet_id for tweet in Tweet.objects.all()])
    # Make the call to the Twitter Search API.
    tweets = api.search(
        q='#python',
        max_id=max_id,
        count=100
    )
    # Store the collected data into lists.
    tweets_date = [tweet.created_at for tweet in tweets]
    tweets_id = [tweet.id for tweet in tweets]
    tweets_source = [tweet.source for tweet in tweets]
    tweets_favorite_cnt = [tweet.favorite_count for tweet in tweets]
    tweets_retweet_cnt = [tweet.retweet_count for tweet in tweets]
    tweets_text = [tweet.text for tweet in tweets]
    # Iterate over these lists and save the items as fields for new records in the database.
    for i, j, k, l, m, n in zip(
            tweets_id,
            tweets_date,
            tweets_source,
            tweets_favorite_cnt,
            tweets_retweet_cnt,
            tweets_text
    ):
        try:
            Tweet.objects.create(
                tweet_id=i,
                tweet_date=j,
                tweet_source=k,
                tweet_favorite_cnt=l,
                tweet_retweet_cnt=m,
                tweet_text=n,
            )
        except IntegrityError:
            pass

Try the following:

# Check for the minimum tweet_id and set it as max_id.
# This ensures the API call doesn't keep getting the same tweets.
date_partition = get_seven_day_partition()
## Since you're cutting off every seven days, you should know how
## to separate your weeks into seven day sections
max_id = min([tweet.tweet_id for tweet in Tweet.objects.all()
              if tweet.tweet_date > date_partition])

You haven't given enough information about how you're pulling these tweets and how you intend to stop at a specific date (and how this program is executed), so it's hard to suggest the right way to keep track of the date.

What I can tell you is that this addition to the max_id assignment, with date_partition set up for your use case, will properly grab the oldest tweet within the most recent seven-day period.
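For illustration only, here is one hypothetical shape for get_seven_day_partition, assuming the window should simply be "the last seven days from now"; the helper's name comes from the snippet above, but its body is an assumption that mirrors the cleanup() task in the question:

from datetime import datetime, timedelta

def get_seven_day_partition():
    # Hypothetical implementation: cut the window off seven days before
    # the current time, mirroring the cleanup() task's retention rule.
    return datetime.now() - timedelta(days=7)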
