1. New release: php-mf2 v0.2.12 is out!

    In this version: bugfixes! Improved implied name parsing! Merged pull requests! Full details and contributor list in the changelog.

    In related news, the Packagist website has been updated and looks very nice! Unfortunately, all of my version links are now broken. And apparently php-mf2 has been installed almost 3,000 times (via Composer, which is not even the most common installation method) — WOW! Here’s to the next 3,000, and more :)

  2. Got some little bits of work done on my site today — fixed a spam archiving issue, got leaflet-based maps working again and made note locations editable. Lots more to come soon, hopefully…

  3. Emil Björklund: Thinking so far: accept the webmention, send a signal passing along the URL somehow, model listens to signal, looks up instance and checks.

    @thatEmil Taproot works almost the other way round — a “mentions” module stores incoming mentions, noting their target path after resolving redirects. Then, each content module queries the mentions module for mentions of a particular URL. That way the two are decoupled, and I can keep track of mentions of static URLs and things not represented by a “model”. Haven’t figured out how to handle redirects well yet though.
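
    To illustrate that decoupling, here’s a rough sketch in Python (the class and method names are hypothetical, not Taproot’s actual API): a mentions store keyed by resolved target path, which content modules query without the store knowing anything about models.

```python
from urllib.parse import urlparse

class MentionStore:
    """Stores incoming webmentions keyed by the target's resolved path."""
    def __init__(self):
        self._mentions = {}

    def add(self, source_url, resolved_target_url):
        # In a real implementation the target URL would be fetched first,
        # following redirects, so equivalent URLs collapse to one path.
        path = urlparse(resolved_target_url).path
        self._mentions.setdefault(path, []).append(source_url)

    def mentions_of(self, url):
        # Content modules (or static pages) query by URL; the store knows
        # nothing about what kind of thing, if anything, the URL represents.
        return self._mentions.get(urlparse(url).path, [])

store = MentionStore()
store.add('https://example.org/reply-1', 'https://waterpigs.co.uk/notes/42/')
print(store.mentions_of('https://waterpigs.co.uk/notes/42/'))
```

    Because the store only sees paths, it can track mentions of static files just as easily as mentions of posts.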

  4. Tantek Çelik: new home page * 100 posts via flat bim files * <64KB HTML * <1s page load no DB XHR ∞scroll needed beat that, silos :)

    @t excellent minimal Like implementation! Whilst your homepage performance is admirable, I don’t think you can meaningfully compare it to silo infinite scroll until there’s some sort of pagination :) Currently, without rel-prev[ious] links, there’s no way for crawlers (e.g. readers like Shrewdness, or semi-hypothetical “your year in indieweb” tools) to find your old posts other than fetching each one individually, which in many cases takes too long to provide a good experience — e.g. crawling your year’s worth of content takes ≈162s, verifiable with the following bash+PHP code:

curl -Ss https://getcomposer.org/installer | php
./composer.phar require taproot/subscriptions
php -a  # Start an interactive shell, then paste in the following code (alternatively, save it into a file):

@(require 'vendor/autoload.php');
$start = microtime(true);
echo "Starting crawl…\n";
// Crawl backwards from the newest post; returning false halts the crawl.
Taproot\Subscriptions\crawl('http://tantek.com/2014/365/t1/indieweb-like-posts-2015-commitment-done', function ($r) {
    echo '.';
    // Stop once we reach posts published in 2013.
    return substr($r['mf2']['items'][0]['properties']['published'][0], 0, 4) != '2013';
});
$total = microtime(true) - $start;
echo "\nYear crawl for 2014 took {$total}s";
  5. One of the creepiest visible things about Facebook IMO is contact prioritisation in the chat sidebar. Whenever a contact changes places, I think “what did I do to prompt that? What did they do? What maths told Facebook that would optimise my engagement with it? Is it trying to influence me? Am I being experimented on?”.

  6. @acegiak that’s hilarious — OTTOMH, h-review p-rating is assumed to be between 0 and 5 as a fallback if no best/worst are given (and even if just best is given, 0 could be assumed to be “worst”). So you could totally publish something like:

    <span class="p-rating">1</span>/<span class="p-best">10</span>
  7. ☮ elf Pavlik ☮: I wonder if http://micropub.net could use application/ld+json besides application/x-www-form-urlencoded ? #indiewebcamp @aaronpk

    @elfpavlik HTTP POST urlencoded bodies are supported out of the box by every web application framework. Asking everyone to add support for not only a separate vocabulary but a completely different content type is a huge amount of (probably unnecessary) work. What benefits would it bring? Is there existing client software which would immediately work with micropub resource providers if this change were made?
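
    For comparison, here’s roughly what a form-encoded Micropub note-creation body looks like, and how trivially any framework’s built-ins handle it (a Python sketch; the field values are made up):

```python
from urllib.parse import urlencode, parse_qs

# A minimal Micropub note as form-encoded data: the whole "vocabulary"
# is just named form fields, no special serialisation required.
body = urlencode({
    'h': 'entry',
    'content': 'Hello from a Micropub client!',
})
print(body)  # h=entry&content=Hello+from+a+Micropub+client%21

# Any server-side framework can decode it with one call:
parsed = parse_qs(body)
print(parsed['content'][0])
```

    Supporting application/ld+json instead would mean every server implementing a JSON-LD processor on top of this one-liner.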

  8. Dan York: Questions About Known (@withknown) Platform, Webmentions and security / spam

    Webmention spam has already started to become a problem, especially thanks to Brid.gy’s backfeeding of Twitter comments. For most of us it hasn’t yet been a big problem, but it inevitably will be in the future. There are some ideas about potential spam-prevention tools on the wiki: indiewebcamp.com/spam

  9. Marcus Povey: Spying on a website using Webmention and MF2

    @mapkyca good point, I hadn’t considered this problem with hotlinking profile photos before. I think some webmention implementors have started doing this, and I intend to do it within Shrewdness.

    It’s worth noting that the attack is not limited to profile photos, though: it applies to any automatically loaded content in a comment, e.g. images or audio. Whilst caching profile photos is feasible, caching arbitrary media in comments is more difficult, which is a good argument for text-only comments.

    Text-only content is not an option in Shrewdness, but perhaps instead images could be cached, and other media loaded upon demand, removing the ability to arbitrarily spy on people.
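
    The caching half of that could look something like this (a Python sketch of the general idea, not Shrewdness’s actual code; the cache location is hypothetical). Each remote photo URL maps deterministically to a local file, so the remote server is hit once at archiving time rather than on every page view:

```python
import hashlib
from pathlib import Path
from urllib.parse import urlparse

CACHE_DIR = Path('photo-cache')  # hypothetical cache location

def cache_path(photo_url):
    """Deterministically map a remote photo URL to a local cache file."""
    digest = hashlib.sha256(photo_url.encode('utf-8')).hexdigest()
    ext = Path(urlparse(photo_url).path).suffix or '.img'
    return CACHE_DIR / (digest + ext)

# The fetch-and-store step (omitted here) would download the photo once
# when the comment is archived; pages then serve the local copy, so the
# commenter's server never sees who is reading the page, or when.
print(cache_path('https://example.com/photos/avatar.jpg'))
```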

  10. @kartik_prabhu amazing work overall! This is one of my favourite parts though — the fact that fragmention comments fall back gracefully if they’re not supported on either side, and yet all the data required to present them is preserved, so future updates can retroactively put old marginalia in the right place!

    I wonder how tricky it would be to implement this on the comment publisher side too — detecting fragmention URLs and tailoring the reply context content…
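
    Detecting a fragmention URL on the publisher side could be as simple as checking for the double-hash syntax (a sketch, not Kartik’s actual implementation):

```python
from urllib.parse import unquote_plus

def fragmention_text(url):
    """Return the quoted text of a fragmention URL (the ##text syntax),
    or None if the URL has no fragmention."""
    if '##' not in url:
        return None
    _, _, fragment = url.partition('##')
    return unquote_plus(fragment)

print(fragmention_text('http://example.com/post##a+turning+point'))  # a turning point
print(fragmention_text('http://example.com/post#section'))  # None
```

    A reply context could then highlight that exact text within the quoted content, rather than quoting the whole post.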

  11. I’m coming to realise that there are at least two usefully distinct levels of semantic data on the web:

    There’s the basic “object” level at which microformats act, defining simple, basic-level objects like posts and people with properties like name, phone and content.

    Then there’s the level at which HTML works: marking up blocks of text into a tree of elements, each of which gives context to the text it contains — for example, blockquote elements for content from another source, code elements for “computer code” (there might be some space to make that more useful — who’s up for adding a type attribute to code?) and so on.

    So what? These are the two sufficiently standardised levels at which content on the web can be made portable, and mutually understood by many parties. Any additional undefined semantics introduced by author-defined classnames, and the meaning communicated by their default styling, is unportable, and will be lost when that content is viewed elsewhere (for example, shown in a reader or as a cross-site comment).

    So how can you tell if your content is sufficiently portable? For the object level (microformats), a validator like indiewebify.me can be used. Strangely, there aren’t as many tools for the markup level, but one surefire way to check is to disable CSS in your browser. Is your content still understandable using only the default styles? If so, it’s probably pretty portable.