Automating Twitter scraping

Hi

I am doing some sentiment analysis for a piece of academic work.

I am looking at the sentiment of user replies to a series of original tweets. I have a year's worth of original tweets (and the conversation_id for each original tweet). I have written the very basic code below to get all the replies linked to a conversation_id, but I am doing this one conversation ID at a time. Can anyone recommend a way I could load a data frame (or anything else) of IDs to speed this up?

# Load the academictwitteR package
library(academictwitteR)

# Build a query for a single conversation_id
# (quote the ID as a string: it is too large to store exactly as a number in R)
ConvoID <-
  build_query(
    conversation_id = "1244656682111811584"
  )

# Run the extraction to get replies to the original tweet
PHE_tweets999 <-
  get_all_tweets(
    query = ConvoID,
    start_tweets = "2020-03-01T00:00:00Z",
    end_tweets = "2021-03-31T00:00:00Z",
    n = 1000
  )
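One possible approach (a sketch only, not tested against the Twitter API): put the conversation IDs in a character vector (e.g. a column pulled from a data frame) and loop over them with `lapply()`, binding the per-conversation results into a single data frame with `dplyr::bind_rows()`. The IDs below are placeholders, and the sketch assumes academictwitteR can find a valid bearer token in the usual way.

```r
# Sketch: run get_all_tweets() for many conversation IDs in one pass.
# Assumes academictwitteR and dplyr are installed and a bearer token is set.
library(academictwitteR)
library(dplyr)

# Placeholder IDs -- in practice, something like my_df$conversation_id
convo_ids <- c(
  "1244656682111811584",
  "1244656682111811585"
)

# For each ID: build the query, fetch up to 1000 replies, then stack
# all the results into one data frame
all_replies <-
  lapply(convo_ids, function(id) {
    get_all_tweets(
      query = build_query(conversation_id = id),
      start_tweets = "2020-03-01T00:00:00Z",
      end_tweets = "2021-03-31T00:00:00Z",
      n = 1000
    )
  }) |>
  bind_rows()
```

Keeping the IDs as character strings matters here: tweet IDs exceed the range R can represent exactly as doubles, so numeric IDs can silently lose precision. With many conversations you may also hit API rate limits, which `get_all_tweets()` handles by waiting and retrying.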
