No longer current

Here I describe how I pull tweets to publish on my website.

I display my recent tweets on the right-hand side of my website. There isn't really enough width for a Twitter widget, and I don't want too much visual distraction (like icons and photos) or slow loading there anyway.

I used a simple mechanism: I obtained the tweet data with client-side JavaScript/jQuery from the v1.0 https://api.twitter.com/1/statuses/user_timeline.json API, which required no authentication, and then parsed, extracted, formatted and inserted the data in JavaScript.

This promptly broke when Twitter retired the v1 API. I didn't really think of this usage as an app using an API, but I suppose the URL should have given it away.

Now I need to use 1.1, which requires an app and OAuth authentication. I understand the reasons, but it is a bit of an inconvenience, and it doesn't mesh well with the approach above, where visitors' browsers talk to Twitter directly from JavaScript.

My new approach is to use a script on my server, which polls Twitter's API and formats the result into a JSONP file on my web server. The client-side JavaScript then uses that file instead of talking to Twitter. I could instead use a proxy that does this dynamically (with some caching), but the static file is simpler and more resilient. If there is an easy/efficient alternative I'm not aware of, please let me know.

The script uses a Python library called "Python Twitter Tools" (PyPI, Home, Github). I installed it into a virtual environment:

mkdir ~/venv
virtualenv ~/venv
source ~/venv/bin/activate
pip install twitter
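
Note that there are a few similarly named Twitter libraries on PyPI; the one wanted here is the "twitter" package, which provides the Twitter, OAuth, oauth_dance and read_token_file names used in the script below. A quick import inside the virtualenv confirms the right one is installed:

python -c 'from twitter import Twitter, OAuth, oauth_dance, read_token_file'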

Next I went to dev.twitter.com to create a new application called GreenhillsImporter, and saved off my app details:

echo 'myconsumerkey' > ~/.greenhillstwitterapp
echo 'myconsumersecret' >> ~/.greenhillstwitterapp

and get my tweets with:

#
# Usage: python tweets.py > /srv/www.greenhills.co.uk/htdocs/twitter-makuk66.js

import os, json, cgi
from twitter import *

# consumer key/secret saved when registering the app at dev.twitter.com
CONSUMER_KEY, CONSUMER_SECRET = read_token_file(os.path.expanduser('~/.greenhillstwitterapp'))

# on the first run, do the interactive OAuth dance and save the access token
MY_TWITTER_CREDS = os.path.expanduser('~/.greenhillstwitterimporter')
if not os.path.exists(MY_TWITTER_CREDS):
    oauth_dance("GreenhillsImporter", CONSUMER_KEY, CONSUMER_SECRET, MY_TWITTER_CREDS)

oauth_token, oauth_secret = read_token_file(MY_TWITTER_CREDS)
twitter = Twitter(auth=OAuth(oauth_token, oauth_secret, CONSUMER_KEY, CONSUMER_SECRET))

# See https://dev.twitter.com/docs/api/1.1/get/statuses/user_timeline for the API docs.
# screen_name takes the Twitter handle; user_id would expect the numeric user ID.
timeline = twitter.statuses.user_timeline(screen_name='makuk66', count=10)
# print json.dumps(timeline)  # uncomment to inspect the raw response
out = []
for tweet in timeline:
  text = tweet['text']
  # wrap hashtags, @-mentions and links in spans for styling, HTML-escaping every word
  words = []
  for word in text.split():
    if word.startswith('#'):
      word = '<span class="hashtag">{0}</span>'.format(cgi.escape(word))
    elif word.startswith('@'):
      word = '<span class="at">{0}</span>'.format(cgi.escape(word))
    elif word.startswith('http://') or word.startswith('https://') or word.startswith('ftp://'):
      word = '<span class="link">{0}</span>'.format(cgi.escape(word))
    else:
      word = cgi.escape(word)
    words.append(word)
  html = " ".join(words)
  line = {}
  line['text'] = text
  line['html'] = html
  line['id'] = tweet['id_str']
  line['link'] = "https://twitter.com/" + tweet['user']['screen_name'] + "/status/" + tweet['id_str']
  out.append(line)
print 'fillTwitterReaderContent(' + json.dumps(out, sort_keys=True, indent=4, separators=(',', ': ')) + ')'
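
The script just wraps the JSON array in a call to fillTwitterReaderContent(), i.e. JSONP. If you want to check the generated file before pointing the page at it, something along these lines does the trick (this little check script is only a sketch, not part of the site; the path is the one from the usage line above):

# check-tweets-js.py -- sketch: verify the generated JSONP file parses
import json

PREFIX = 'fillTwitterReaderContent('
data = open('/srv/www.greenhills.co.uk/htdocs/twitter-makuk66.js').read().rstrip()
assert data.startswith(PREFIX) and data.endswith(')')
tweets = json.loads(data[len(PREFIX):-1])
for tweet in tweets:
    # each entry carries text, html, id and link fields
    print tweet['id'], tweet['link']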

From JavaScript I link the whole tweet to twitter.com, so that you can see it in context (in a conversation, with retweet avatars, nicely formatted photos, and so on). I wrap the @usernames, #hashtags and links in spans so that I can colour them differently to enhance readability, and I do the per-word processing and HTML escaping for safety.
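
To show what that produces, here is the per-word formatting from the script pulled out into a standalone function and run on a made-up tweet (the function name and sample text are mine, not from the site):

# sketch: the per-word formatting from tweets.py as a standalone function
import cgi

def format_tweet(text):
    words = []
    for word in text.split():
        if word.startswith('#'):
            word = '<span class="hashtag">{0}</span>'.format(cgi.escape(word))
        elif word.startswith('@'):
            word = '<span class="at">{0}</span>'.format(cgi.escape(word))
        elif word.startswith('http://') or word.startswith('https://') or word.startswith('ftp://'):
            word = '<span class="link">{0}</span>'.format(cgi.escape(word))
        else:
            word = cgi.escape(word)
        words.append(word)
    return " ".join(words)

print format_tweet('Hello #world from @makuk66 at http://example.com & more')
# prints: Hello <span class="hashtag">#world</span> from <span class="at">@makuk66</span> at <span class="link">http://example.com</span> &amp; more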

So, tweets are back. I've also added a proper Twitter widget to the homepage.