1. I have to say that whilst their data export functionality is rubbish, their URLs and use of content negotiation (conneg) made it really easy to roll my own. A little hacking with the web inspector (tracking XHRs, looking at headers and content) was all that was required, and now it's documented, so no-one else will have to spend time doing so.

  2. I was going to spend this evening working on a browser extension, but I think it would be better spent providing a data export utility for fellow ex-users.

    From my initial research, it looks like /u/username.json is the best bet, as it gives a JSON array of all posts written by username, along with like and comment data. It accepts a max_time=timestamp query param, and an _ query param whose function I'm not sure of.

    To iterate through all the pages of posts from a certain user, start with their profile URL w/ .json tacked on the end, fetch all the items, get the datetime of the last item, convert that to a timestamp, fetch the same URL with ?max_time=timestamp, repeat until an empty array is returned.
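
    Here is a minimal Python sketch of that loop, under assumptions I haven't verified: the base URL is a placeholder, and each post is assumed to carry an ISO 8601 "created_at" field (swap in whatever datetime key and format the JSON actually exposes).

    ```python
    import json
    import urllib.request
    from datetime import datetime, timezone

    BASE_URL = "https://example.com"  # placeholder: substitute the service's own domain


    def fetch_page(username, max_time=None):
        """Fetch one page of a user's posts, optionally bounded by max_time."""
        url = f"{BASE_URL}/u/{username}.json"
        if max_time is not None:
            url += f"?max_time={max_time}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)


    def fetch_all_posts(username, time_key="created_at"):
        """Page through every post by feeding the last item's timestamp back as max_time."""
        posts, max_time = [], None
        while True:
            page = fetch_page(username, max_time)
            if not page:  # an empty array means we've reached the end
                break
            posts.extend(page)
            # "created_at" and ISO 8601 parsing are assumptions; use whatever
            # datetime field and format the JSON actually contains.
            last = datetime.fromisoformat(page[-1][time_key])
            if last.tzinfo is None:
                last = last.replace(tzinfo=timezone.utc)
            max_time = int(last.timestamp())
        return posts
    ```

    If the API treats max_time inclusively, the last item of each page may reappear on the next one, so deduplicating by post id before writing the export could be worthwhile.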

  3. Secret Gurdy Feature: This image showed a secret feature, so I had to block it out in accordance with the laws of secrecy (Chapter 14, Section 47, Sub-section 19: If a photo contains something secret, it must be blended up in a manner most unusual) http://flic.kr/p/bCLGr6
