“There are plenty of documented design patterns around the web for single-line, static-content inputs (e.g. tag inputs, email To/CC/BCC), but hardly anything for multi-line, dynamic-content UIs where autosuggest is context-sensitive” (Barnaby Walters, on the IWC Autosuggest page)
Perhaps it’s because of the inconsistencies between implementations, and this lack of documentation, that the evolution of free-text autosuggest interfaces has become a bit stagnant.
Arguably the best implementation in the wild at the time of writing is Facebook’s: their large usage datasets allow them to offer intelligent suggestions. But not all of us have the facilities Facebook does, and we don’t need them. There are a few smaller changes that can be made to vastly improve autosuggest behaviour across all platforms.
When I was building my Indieweb Autosuggest UI, I went looking for all the autosuggest UIs I could find around the web and beyond. Turns out, there are a whole load of them, and they’re all different. I continue to document any approaches I find, along with their differences and common denominators, on the Autosuggest indiewebcamp wiki page and under #autosuggest on my openphoto install.
After reading Magic Ink (specifically the “Inferring context from history” section, but read it all the way through if you haven’t already), I was really struck by how much of an improvement could be made by implementing some really simple learning behaviour — specifically, usage history and last values.
- Use localStorage to record the number of times each autosuggestion has been accepted by the user
- Sort the autosuggest array by that count
Bang. Two trivial steps, and you have an autosuggest implementation which gives the impression of being hugely more intelligent.
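Those two steps can be sketched in a few lines of JavaScript. This is a minimal illustration, not any particular site’s implementation; the storage key and function names are my own, and the fallback object just lets the sketch run outside a browser, where localStorage is unavailable.

```javascript
// In-memory stand-in for localStorage so the sketch runs anywhere.
const storage = typeof localStorage !== 'undefined' ? localStorage : (() => {
  const data = {};
  return {
    getItem: k => (k in data ? data[k] : null),
    setItem: (k, v) => { data[k] = String(v); }
  };
})();

const COUNTS_KEY = 'autosuggest-counts'; // hypothetical storage key

function loadCounts() {
  return JSON.parse(storage.getItem(COUNTS_KEY) || '{}');
}

// Step 1: call this whenever the user accepts a suggestion.
function recordAcceptance(suggestion) {
  const counts = loadCounts();
  counts[suggestion] = (counts[suggestion] || 0) + 1;
  storage.setItem(COUNTS_KEY, JSON.stringify(counts));
}

// Step 2: order suggestions by how often each has been accepted,
// most-accepted first; never-accepted suggestions keep their order.
function rankSuggestions(suggestions) {
  const counts = loadCounts();
  return suggestions.slice().sort(
    (a, b) => (counts[b] || 0) - (counts[a] || 0)
  );
}
```

Hook `recordAcceptance` into the suggestion-picked handler, and feed the raw suggestion list through `rankSuggestions` before rendering the dropdown.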
As mentioned in Magic Ink, the easiest possible way of predicting what values a user might enter into a field is just to suggest the last values they used. This could make for even speedier data entry if used in combination with usage history:
- Use localStorage to record any autosuggestions which were accepted in the current session
- Next session, temporarily add some value (5? 10? Or a percentage of the current count?) to each of those suggestions’ all-time acceptance counts before ordering suggestions
Another trivial change, and the UI is once again improved.
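The combination might look something like this sketch. Here I assume the all-time counts and the previous session’s acceptances have already been loaded (e.g. from localStorage); the flat boost value and the function name are assumptions of mine.

```javascript
// Tunable flat bonus for suggestions accepted in the previous session.
const SESSION_BOOST = 5;

// `counts`: { suggestion: timesAcceptedEver }
// `lastSession`: array of suggestions accepted last session
function rankWithRecency(suggestions, counts, lastSession) {
  const recent = new Set(lastSession);
  // A suggestion's score is its all-time count, plus a temporary
  // boost if it was used last session.
  const score = s => (counts[s] || 0) + (recent.has(s) ? SESSION_BOOST : 0);
  return suggestions.slice().sort((a, b) => score(b) - score(a));
}
```

With counts of `{'#html': 8, '#quote': 4}` and `['#quote']` accepted last session, `#quote` scores 9 and jumps above `#html` — the “last values” effect layered on top of usage history.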
The content a user is entering into a note entry (tweet) box, combined with the increasing amount of data available to our scripts (time, screen size, location — what next?), actually gives us a huge pool of data from which to make intelligent suggestions.
At any given time during the user’s typing session, we can pool this sort of data:
- Screen Size (and from that we can infer context to a certain degree)
- Text typed
- Hashtags used
- Links used
- HTML/Markdown used
- People mentioned
- People CCed or Via’d (via /slashtags, e.g. `/cc Aaron Parecki`)
- A more tangibly useful /slashtag is `/w` (with), as it allows us to potentially infer geographical location if we don’t have hardware access to it
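Pooling that data could be as simple as snapshotting a context object as the user types. The regexes below are illustrative, not a complete microsyntax parser, and the shape of the object is my own invention:

```javascript
// Sketch: gather contextual signals from the note text plus whatever
// the environment exposes (screen size here; time always).
function poolContext(text) {
  return {
    screenWidth: typeof window !== 'undefined' ? window.innerWidth : null,
    time: new Date(),
    hashtags: text.match(/#\w+/g) || [],
    links: text.match(/https?:\/\/\S+/g) || [],
    mentions: text.match(/@[\w.]+/g) || [],
    // Slashtags like /cc or /w — only at the start of a word, so URL
    // paths don't count.
    slashtags: (text.match(/(?:^|\s)(\/\w+)/g) || []).map(s => s.trim())
  };
}
```

Calling `poolContext('Watching #thesimpsons /w @aaronpk')` yields the hashtag, the `/w` slashtag, and the mention, ready to feed into whatever ranking logic sits on top.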
Some ideas about what we could do with it:
- Auto-tag posts based on markup (e.g. if a post has a `<blockquote>` in it, it can probably be tagged with `quote`, and we can auto-suggest other tags which historically have been used alongside it)
- Correlate the location of the user and the known locations of their contacts, rank physically closer auto-suggested contacts higher
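The markup-based auto-tagging idea might look like this sketch. The pattern-to-tag mapping and the co-occurrence table are stand-ins for real historical data:

```javascript
// Hypothetical mapping from markup features to likely tags.
const MARKUP_TAGS = [
  { pattern: /<blockquote[\s>]/i, tag: 'quote' },
  { pattern: /<pre[\s>]|<code[\s>]/i, tag: 'code' }
];

// Stand-in for tags historically used alongside each auto-tag.
const CO_OCCURRING = { quote: ['books', 'inspiration'] };

function suggestTags(html) {
  const tags = [];
  for (const { pattern, tag } of MARKUP_TAGS) {
    if (pattern.test(html)) {
      tags.push(tag, ...(CO_OCCURRING[tag] || []));
    }
  }
  return [...new Set(tags)]; // dedupe, keeping first-seen order
}
```

A note containing a `<blockquote>` would get `quote` suggested first, followed by whatever tags have historically travelled with it.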
All of this is a little harder to do than the basic learning, but still not “real” AI. A more complex (well, complex for me — if you studied AI this is probably the first thing you learnt) approach would be to use something like Naive Bayes to model all of this data probabilistically, resulting in something like this:
- Given that the user is posting at 18:14 on a mobile device from the location “Home” and they’re /w a member of their close family, based on their history the probability of this note being tagged #thesimpsons is high, so let’s rank that tag higher up in the autosuggest/present it as a suggested tag
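A toy version of that Naive Bayes idea, treating each past note as a bag of discrete features (hour bucket, device, location, companions) plus its tags. This is the textbook formulation with Laplace smoothing, nothing more — the data shapes are assumptions of mine:

```javascript
// P(tag | features) ∝ P(tag) · Π P(feature | tag), with add-one
// (Laplace) smoothing so unseen features don't zero out the score.
function scoreTag(tag, features, history) {
  const withTag = history.filter(n => n.tags.includes(tag));
  const prior = (withTag.length + 1) / (history.length + 2);
  return features.reduce((p, f) => {
    const matches = withTag.filter(n => n.features.includes(f)).length;
    return p * ((matches + 1) / (withTag.length + 2));
  }, prior);
}

// Rank candidate tags by their score for the current context.
function rankTags(candidates, features, history) {
  return candidates.slice().sort(
    (a, b) => scoreTag(b, features, history) - scoreTag(a, features, history)
  );
}
```

Given a history where evening-at-home-with-family notes were tagged `thesimpsons`, a new note posted in that context ranks `thesimpsons` above other candidates — exactly the behaviour described above, just formalised.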