Markup vs. Mockup

Yesterday on Twitter I became briefly embroiled in what I’m calling the ‘markup vs mockup’ debate, sparked by this article. In case you’ve been in UX Siberia (not a conference, though it should be) the dogma du jour is that ‘designers’ (by which we mean visual and interaction designers) should be able to ‘code’ (by which we mean implement – nay, initiate – their designs using HTML, CSS and perhaps a dash of jQuery).

There’s a bunch of reasons why, all discussed better elsewhere. But briefly:

  • It’s more efficient than making disposable interim wireframes and comps that ultimately get translated into markup anyway. Lean, if you must.
  • It gives you a high-fidelity interactive prototype earlier in the process, so you can test the efficacy of your design before it hits an expensive development stage.
  • You’re working with the native materials of the medium (linked web pages and interaction affordances), rather than an abstraction of them.

This last point is the thrust of the article above, and it’s why I look to hire designers who are markup-friendly. Web design isn’t print design. It shares some superficial similarities, such as a need for clear information hierarchy and the cadences of typography, colour and rhythm. Unlike print, web layout is inherently variable, unpredictable even. And this is to say nothing of the complex, animated, state-changing interaction patterns that define modern web applications. You’re not just designing lipsum-filled chrome here; web design is the harmonising of content, context, affordance and flow into a pleasurable, meaningful and motivating experience.

It has a grammar all its own, and can you really grasp the grammar without first learning the language? In today’s richly-interactive, responsive, cross-platform, transmedia web of data, how much of a given interface can you really communicate via bitmap mockup or wireframe? How painful is it to keep these updated as the design iterates? At what point does it become easier to ‘just do it’ natively in markup rather than document every state and exception?

These aren’t rhetorical questions. You should use whatever method plays best to your skills and company culture. User Experience takes a lot of crafting and ideation, and getting busy with the Sharpies, Post-Its and Mental Notes can be a way of life in itself. Perhaps your working day operates entirely at this more conceptual level, and so you’re happy to leave the specifics of cross-browser implementation to the other designers (aka ‘front-end developers’). That’s fine and dandy. I’m not going to say “You’re not a UX Designer if…”. That would be naive and idiotic.

Only…well, I wouldn’t hire you. The company I work for is stacked with engineers working Agile to deliver rich web and mobile applications. We’re on a mission to improve our user experience, and the centrepiece of this is a beautiful interface. We’ll standardise where we can, adopting well-crafted, reusable templates and patterns. But still, to hand a developer a sketch or annotated wireframe for each user story is to leave too much to chance. How do I know how many milliseconds my hover state should animate over without trying it first? How can I ensure my webfonts feel right and render nicely across browsers if I’m mocking up in Photoshop? In Agile development, user stories get split across both iterations and teams. Visual consistency is a challenge when you’re asking a developer to take your mockup and turn it into the real thing. And should you find you’ve overlooked or miscalculated detail in those mockups, good luck trying to disrupt the iteration to get your changes incorporated.
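To make that concrete, here’s the sort of thing I mean – a throwaway jQuery sketch (the class name and both timings are entirely hypothetical) where the right answer only reveals itself when you nudge the numbers and feel the result in the browser:

    // Is 150ms snappy or abrupt? Is 300ms smooth or sluggish?
    // No static comp can tell you; you have to try it.
    $('.nav-item').hover(
      function () { $(this).stop().animate({ opacity: 1.0 }, 150); },
      function () { $(this).stop().animate({ opacity: 0.6 }, 300); }
    );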

Better then to define as much of the interface aesthetic as we can before development begins. And better for happy cross-functional teams when working from a position of deeper empathy. Collaboration, not coexistence. The days of Morlocks and Eloi are thankfully over.

While I respect everyone’s freedom of religion, devoutly I adore the code-conversant UI designer. Markup is just the beginning. HTML document design, metadata structures, server caching, URI design – all of these things and more profoundly affect a user’s experience, so to ignore them or dismiss them as technical implementation is negligent. Sure, you can keep it high-level and outsource detail. Or you can get your hands dirty, as the print designer does with ink and substrate, or as the author moves from plotting and characterisation to sentence-construction and actual spelling. I for one prefer the novels of Tom Clancy to those by TOM CLANCY (with, shhh, the bloke who actually wrote it).

In closing, I’m given to wonder: is the group who advocate mockup over markup limited to those who can’t code? Are they simply Canuting to avoid professional redundancy? Or a slightly different question: Would any skilled markup web designers choose to go back to being mockup designers?

 

Blast from the past: SAFE methodology

I was digging through some reeeeeally old files this morning, and happened upon something I wrote in an agency proposal back in 2000. Of course, that was before we had widespread adoption of the term ‘UX’, and instead talked about ‘usability’ (which doesn’t seem to get as much name-checking these days). Anyway, it would appear I was bigging-up the agency’s holistic approach to interactive product design, or at least my idealised approach, since apparently I made up the whole thing for the purposes of the proposal:

Our design strategy is based on our proprietary quality assurance methodology, which ensures that any proposed solution is assessed against standardised criteria. This methodology (called S.A.F.E.) addresses the following areas:

  • Strategy, including measurement against business objectives
  • Aesthetic, including brand consistency and appeal
  • Function, including navigation and interaction flows
  • Editorial, including tone and clarity of language

Because the user must take such an active role in any interactive system, we never underestimate the emotional response a system may elicit. A well-designed system can provide the difference between a satisfying experience and a deeply frustrating one. Our approach to usability takes into account this unique emotional relationship and aims to reduce cognitive workload and dissonance throughout the flow of the system.

Prescience or wanky words? You decide…

Truth and Dare: My response to Jason Mesut’s EuroIA 2011 talk

If you haven’t seen the slides for Jason Mesut’s EuroIA talk, Truth And Dare, stop reading and chow down immediately. With his trademark flouncy manner and withering snark, the lovely Jason has taken a pin to the UX bubble and exploded the bullshit within. Nice.

He makes some broad-shouldered, possibly sweeping statements about the state of the industry, and hopes to fend off slings and arrows by encouraging us to share which parts we agree with, disagree with, find interesting or indeed missing altogether. Well, here goes.

On Rockstars: Jason chastises our culture of deifying UX celebrities – the Spools, the Morvilles, the Garretts of the industry – and the way we hungrily lap up whatever crumbs of insight they might choose to scatter as they drift around the conference circuit. He goes on to question their motives and credentials; are they doing this to sell books or raise their own celebrity (and thus consulting fee)? What work have these money-grubbing bastards [citation needed] ever done anyway?

It’s a thorny issue for me. A couple of years ago I attended my first major IA conference, and had much the same reaction. I was hoping for practical thought leadership from the teams behind major things that I’d heard of or even used. Insights into eBay, Facebook or Amazon. I did find cogent insight, but it came from luminaries that only appeared to shine in the context of the community itself. In fact I did learn some of Amazon’s design treasures, but not from someone with any connection to the company.

I see a visible and vociferous ‘UX community’, exemplified by the crowd of usual suspects that speak at or attend the major conferences. And then (I hope) there’s an outer halo of unsung-yet-experienced thought-leaders; as @chrisfahey puts it, “people who don’t talk about it and don’t know what a UX Community is”.

But I’m one to talk, because increasingly I am one to talk at conferences. And while far from being a rockstar or even a folk-poet, I can’t exactly claim impartiality. But I know I don’t speak to sell books. Speaking at the IA Summit or EuroIA hasn’t thus far increased my market worth, or even helped me land my next gig. But Jason did make me question my own motives. At first glance, there’s really nothing in it for me; in many cases, I end up financially out of pocket on the deal. So why devote the weeks it takes me to write up a presentation, and the time, cost, and effort it takes to deliver it? Mostly, it’s the same reason I like to get up at karaoke. The sheer buzz of performing; to stand and hold an audience’s attention for 45 minutes is a rush like no other. And even more than that, the validation that comes from the applause, the questions and the tweets. That you stood up on stage, said your piece, and your peers confirmed that You. Were. Right. My imaginary analyst would no doubt tell me of the evils of relying on such external validation for self-worth, but hell – I’ve been a client-facing designer for 20 years. Dancing till other people are happy is all I know.

On Freelancers: I teasingly dub Jason the ‘UX Careers Officer’, since he does a lot of hiring these days, has much to say about effective portfolios, and, well, because that UX flag hasn’t been otherwise captured yet. He ranted at impertinent length about how his desk is awash with the CVs of junior freelance designers demanding north of £400 per day for their leet Axure skillz, and how these money-grubbing bastards (is there a theme here? Ed.) should presumably have been shot at birth.

Well again, this is an issue close to my heart. At 37 years old, with over 15 years’ experience behind me (to paraphrase Man V. Food’s Adam Richman, I’ve held most jobs in the UX industry) I’m starting to feel a bit long in the tooth, or perhaps to quote Danny Glover, I’m getting too old for this shit. It irks me that relatively inexperienced interaction designers are charging premium rates (not nearly as much as it incenses me that sophomore interaction designers are being packaged and sold as UX rockstar thought-leaders, but don’t get me started on that) and I am completely behind Jason’s assertion that agencies and corporations alike should be giving their juiciest work to their in-house staff, and using freelancers to mop up the overspill. And I even sympathise with his whining that freelancers are total commitment-phobes looking to make a quick buck with a 3-month contract, rather than having the dedication to see a long-term project through to completion.

I’ve been a freelancer for 10 years. Almost half that time has been spent with a single client, and the rest with precisely five clients. So don’t talk to me about commitment (unless it’s about how to remember to spell it correctly). Only, we’re damned from the other side too. Four years continuously freelancing for a client? How dare I fleece them so? I’m obviously a de-facto employee! Why didn’t I have the good grace to go permanent with them, whether that was offered or not?! The short answer is ‘Why the hell should I?’.

It’s not just about the money. Although the money really is very much better. The £350/day currently being handed out to junior (2yrs+) wireframers equates to a permanent salary of – what, about £84,000? Good luck finding a permanent UX job at £84k that expects so little of you. But beyond that, what are creative agencies and corporations doing to tempt permanent staff? Certainly I’ve never seen any meaningful structure of advancement in either pay or duties, beyond the inevitable promotion out of being a practitioner and into being a resource manager. A recruiter recently told me that most permies he places agency-side move on after 18 months. If you want commitment from me, how about a little quid pro quo? How about creating an environment that gives me somewhere to go, other than out of the door?

Speaking of somewhere to go, just what does one do when one is blessed with longer industry service? I’ve rarely seen a UX job posted that looks for more than 5 years’ experience. That either means that employers aren’t willing to reward financially beyond that, that they don’t consider their project requires more seniority, or (most worryingly of all) that they consider UX knowledge and experience tops out at the 5-year mark. Frankly, it makes me feel old and frustrated. Are we saying that for the last 10 years, my market value has only increased with inflation? Am I to compete for freelance roles with people 10 years younger than me? Or perhaps it is only dignified now to shuffle off and write my books and ride the conference circuit. Fear not – I’m not really so bleak about it, but I do wonder whether in such a rapidly-changing, trend-surfing industry, length of service has any real currency.

And finally on this, there’s the work itself. Perhaps this is another symptom of a world-weary, seen-it-all-before outlook, but many, many projects, especially agency-side projects, are worthless pieces of shit. Short-termist digital marketing campaigns for FMCG or automotive brands, or otherwise unwinnable wars of attrition against hostile, status-quo-loving dinosaurs who begrudgingly slouch toward a digital age. UX loves to throw around words like ‘passion’ and ‘excitement’. I fail to summon either for projects like these. Don’t get me wrong; I have integrity and ambition. I can feel St. Elmo’s fire burning in me. I want to change the world. I just don’t want to do it through the medium of Flash microsite or Sharepoint intranet.

On UX eating itself: Like many before him, Jason expressed concern over the growing ubiquity of the term ‘user experience’ and indeed the loathsome shorthand ‘UX’. As a prefix to any number of job titles (‘UX designer’, ‘UX developer’, ‘UX architect’ were some he cited) the phrase has become confused and even meaningless. Jason also alluded to a bugbear of mine; the conflation of UX and UI. The perception (mostly among clients) that ‘the UX’ is the top-layer sprinkling of fairy dust. What they mean, of course, is the user interface.

UX is clearly more than visual and interaction design, but I also bristle a little at the frequently-cited idea that it’s everything; from the way the website functions, to the way customers are spoken to when they phone the call centre, to the fact that you can return your shoes up to a year after you bought them. Of course, all these things are part of the ‘user experience’ (or if you will, the lower-case ‘ux’), but such a wide purview isn’t a particularly practical definition of ‘UX’. Should a ‘UX Consultant’ be equally qualified to advise on both ontological models for product classification and maintaining a negative operating cycle for improved cashflow? I think not.

In fact your UX Consultant is probably a rebranded IA or IxD who’s done a bit of usability testing. And to that end, Jason asked us all to map our skills to quadrants of Experience Strategy, Interaction Design, User Research and Information Architecture. Here’s mine.

[Image: my skills plotted across the four quadrants of Experience Strategy, Interaction Design, User Research and Information Architecture]

The blobs are where I think I am, the arrows where I want to go.

As someone said, ideally the phrase ‘UX’ will disappear completely into a collective understanding and we will once again call ourselves by titles that better describe what we do all day.

Truth and Dare covered much ground, but these are the issues that chimed most with me. As for what’s missing, I’d hoped to see some suggestions for reaching the hidden markets; the would-be thought-leaders who never speak at conferences, and the talented practitioners who aren’t expecting money for jam. Plus I could have done with a touch more positivity about the things we are good at. While he was right to kick the tyres on the dogma of user-centred design, he could have gone further on just why we needn’t abdicate product design to the public at large. I think it’s because as overpaid, over-celebrated User Experience professionals, we’re pretty damn qualified to make decisions on our own.

5 Lessons from Beyond the Polar Bear

Recently I gave a talk at the 2011 IA Summit in Denver. It was prepared by Michael Smethurst and me, and covered the great work that has led to some of the BBC’s most respected online projects over the past few years; notably Programmes, Food and the ever-shiny Wildlife Finder.

We cheekily titled the talk ‘Beyond the Polar Bear’ in reference to Rosenfeld & Morville’s book Information Architecture for the World Wide Web, which has been the standard text for IA for over ten years. It wasn’t a case of saying that book was flawed (indeed it is still an excellent introduction to the organisation and classification of content), but the domain-driven approach practiced at the BBC speaks to a different, and perhaps wider, view of information architecture. You could say that IA itself is in a transitory state, becoming ever more conflated with the broader (and in my view, more vague) term ‘User Experience Design’. That’s a rant for another day, but with a growing number of UX designers emerging who have no grounding in IA at all, it’s timely to propose a model based on the state of the web, and the industry, today. Here beginneth the lessons…

5. People think about things, not documents

When I’m geeking out over vintage Disneyland (as I do), what comes to mind is a messy blur of stuff: Adventureland, Tomorrowland, the Haunted Mansion ride, Walt Disney’s apartment above the fire house, the various incarnations of the Disneyland model in Florida, Paris and Tokyo. I could go on (in fact I did, here). The point is that I’m thinking about real-world stuff, and the often complex relationships between those things: John Hench designed the Cinderella Castle in Main Street and Space Mountain in Tomorrowland. Pirates of the Caribbean is a creative concept that spawned not only the original attraction, but the subsequent movies, video games and lunchboxes.

However with traditional taxonomical IA, a lot of that complexity gets lost. We organise webpages into ever-broader sections and categories, less like a tangled web and more like neatly-stacked Russian dolls. Often we won’t structure by ‘thing’ at all, but instead by type, leading to categories like ‘photos’ and ‘videos’ – which isn’t much more advanced than the dark ages of the web when you’d routinely see ‘Links’ as a main menu option.

At the BBC a new model for making websites has taken hold. Domain-driven design puts the emphasis on the real-world connections between things, rather than the tree-like hierarchy of webpages. It encourages us to expose the most important things within our chosen subject (or business) and then sketch the graph-like relationships between those things. If we’re designing a website for a restaurant chain, those things might be locations, cuisines, dishes, ingredients and providers (since it’s no longer enough to be served mere bangers and mash, but Cherry Tree Farm Lincolnshire sausages with mashed Saxon potato). Ultimately, each of these ‘things’ may be represented by a document (or other machine-readable resource), but by using things and relationships as our means of organisation, we expose a navigation model that better fits a user’s mental model, and thus opens up a wealth of user journeys where the links between resources can serve as informative content. For example, BBC Wildlife Finder features animaly stuff like Species, Habitats and Adaptations. By connecting the Giant Panda to the Broadleaf Forest and the Herbivorous adaptation, we’ve not only learned some stated assertions about the Panda, but can see which other animals display those characteristics.

A thing-based structure acknowledges that we are trying to represent reality in our information architecture, and furthermore that a desktop webpage is only one of those representations. Equally we could serve up mobile views, TV views, RDF or microformats. Consistency across platforms comes from the relationships between the things.
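If that sounds abstract, here’s a toy sketch of the difference (the names and shape are invented for illustration, and bear no relation to the BBC’s actual implementation). Once things and relationships are the organising principle, a journey like ‘what else is herbivorous?’ becomes a simple walk across the graph:

    // Things and their relationships, rather than pages in folders.
    var things = {
      'giant-panda':      { type: 'species', habitat: 'broadleaf-forest', adaptation: 'herbivorous' },
      'roe-deer':         { type: 'species', habitat: 'broadleaf-forest', adaptation: 'herbivorous' },
      'broadleaf-forest': { type: 'habitat' },
      'herbivorous':      { type: 'adaptation' }
    };

    // 'Which other things share the Giant Panda's adaptation?' is a graph walk:
    function sharing(key, property) {
      return Object.keys(things).filter(function (k) {
        return k !== key && things[k][property] === things[key][property];
      });
    }

    sharing('giant-panda', 'adaptation'); // -> ['roe-deer']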

4. Canonical concepts help people and SEO

Sometimes however, the complexities of reality don’t exactly map to our idealistic mental models. In my head, there’s a novel called Wuthering Heights published in 1847, The Beatles released their album Rubber Soul in 1965, and Star Wars is a movie that came out in 1977. Would that reality were that simple. In fact, countless editions of Wuthering Heights have been published over the years (including one with added raunch!), Rubber Soul has been reissued in various vinyl, cassette, CD and digital formats, and don’t get me started on the ever-shifting sands of Star Wars re-releases. Maybe sometimes my thinking is even more abstract; my mental image of Wuthering Heights is of the tragic and doomed love between Heathcliff and Marmaduke [citation needed], more so than the pages and binding of a physical book, or indeed any movie or musical adaptations of that story.

Clearly then, we formulate our mental models around comfortable levels of abstraction. On the web that most often surfaces when we search for things. But when I Google, say, ‘wuthering heights amazon’ I get back a boatload of results, many arguably correct, yet still leaving me to guess which one to pick. This isn’t great for me, and neither is it great for Amazon, since their Googlejuice for the term ‘wuthering heights’ is splintered across their inventory. It would perhaps be better for both of us to have an Amazon page referencing the general concept of Wuthering Heights, and aggregate all the various books, DVDs and CDs onto it. This after all is what works so well for Wikipedia; a single page per concept, mostly at a search-friendly level of abstraction, but containing all the nuances of that subject within the page. One page per concept makes it obvious where people should link to when referencing that concept, which is one of the reasons Wikipedia is so often used as a citation in online articles.

On the BBC website, managing this level of abstraction for programme pages results in resources at series and brand level, or conversely, it’s why there’s one page per episode of a show, but not one page for every time that episode has aired. BBC Nature aggregates programme clips around its canonical ‘thing’ pages, so when you Google ‘BBC lion’, you’ll get the Lion aggregation page, rather than a ton of pages for the separate clips. BBC Food used to have a similar ‘splintering’ issue to Amazon, whereby the 280-odd results for, say, “apple crumble recipe” likely resulted in option paralysis. By redesigning the Food site to conform to a domain model, the concept of the ‘dish’ was established: an abstract concept of apple crumble from which the individual recipes could be linked. This was particularly advantageous for that site, since due to rights restrictions individual recipes come and go. Directing traffic primarily toward the canonical dish page establishes a persistent URL displaying all currently-available content.
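Sketched as code (the data and URL shape are invented, not the BBC’s actual schema), the canonical dish page is just a persistent aggregation over an ephemeral inventory:

    // Recipes come and go with rights windows; the dish concept persists.
    var recipes = [
      { id: 'r101', dish: 'apple-crumble', title: 'Classic apple crumble',     available: true  },
      { id: 'r102', dish: 'apple-crumble', title: 'Apple and bramble crumble', available: false },
      { id: 'r103', dish: 'apple-crumble', title: 'Gluten-free apple crumble', available: true  }
    ];

    // The persistent URL /food/dishes/apple-crumble shows whatever is live today.
    function dishPage(slug) {
      return recipes.filter(function (r) { return r.dish === slug && r.available; });
    }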

3. URL design is a part of User Experience

URLs. We see them every day on business cards, buses, and boxes. We share them via email and social media. They are the connecting nodes that make up the web itself. Yet too often they are overlooked as part of user experience design. Effective URL design should balance three main principles:

  • They should be hackable (so that foo.com/products/gangly_wrench can be chopped back to foo.com/products), thus serving as a form of orientation and navigation – see the sketch below.
  • They should be human-readable, since this makes them memorable and easier to promote.
  • Above all, they should be persistent.
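Hackability is the easiest to express as code – a sketch (the paths are hypothetical) of the implied contract that every ancestor of a URL should itself resolve to something useful:

    // foo.com/products/gangly_wrench implies foo.com/products also works.
    function ancestors(path) {
      var parts = path.replace(/\/+$/, '').split('/');
      var out = [];
      while (parts.length > 1) {
        parts.pop();
        out.push(parts.join('/') || '/');
      }
      return out;
    }

    ancestors('/products/gangly_wrench'); // -> ['/products', '/'] – each should be a real page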

Persistence means your URLs never change. After all, the web is made of links, and if the URL of a resource changes, it effectively becomes disconnected from the wider web. Your URL is an implicit contract with those who have linked to you or bookmarked you, so presenting users with a 404 is as heinous as changing your telephone number without telling anyone. But designing for persistence has implications. To truly future-proof your URLs, you need to remove anything that is subject to change. Do all your URLs end in .php? Then you’d better not be planning to move to ASP any time soon. Got something like ‘…/music/artists/Prince’? Good luck managing that when he decides to change identity again.

When the BBC designed their URL schema for Programme pages, they had to accommodate the inherent volatility in TV production. One-off dramas can morph into series. Series routinely hop from one network to another. Even programme names themselves can change on the long road from planning to broadcast. It was a tough call for a project aiming to provide a persistent URL for every BBC programme broadcast, and the solution was a URL format whose only human-readable component was the one thing they knew for sure: it was a programme. They opted for bbc.co.uk/programmes/:pid, where :pid is a unique alphanumeric identifier for each programme. The use of unique identifiers doesn’t exactly make for human-readable URLs, but you can be damned sure they’re not subject to change.
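The trade-off is easier to see in a sketch (the data is illustrative): everything human-readable about a programme is mutable metadata hanging off the identifier, so the URL itself never needs to change:

    // The pid is the only component guaranteed not to change.
    var programmes = {
      'b006q2x0': { title: 'Doctor Who', network: 'BBC One' }
    };

    // bbc.co.uk/programmes/b006q2x0 survives a retitle or a network hop:
    programmes['b006q2x0'].network = 'BBC Two'; // (illustrative) URL unaffected
    programmes['b006q2x0'].title   = 'Dr Who';  // (illustrative) URL unaffected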

Persistence should be supported by pointability: allowing your content to be referenced uniquely via its URL. Employ a ‘one thing per page’ policy, describing a single topic at a single URL. This strategy has been instrumental in Wikipedia’s ubiquity, since it encourages linking to that topic page whenever the topic is referenced elsewhere. If your documents and media assets have URLs, they can be pointed at; otherwise they can’t. It’s that simple.

2. Rapid prototyping means real prototyping

UX dogma loves to big-up rapid, iterative prototyping: putting small, spit-and-sawdust versions of a product into play, rather than diving headlong into building the super-duper version. This is all well and good, yet the tools employed – Axure, Flash, paper mockups – are inherently fake, and therefore fail to test the most important mechanic: how people interact with actual content. Worse still, they are throwaway deliverables: expensive to create, but contributing nothing to the actual development effort required to create the live site.

Better then, to prototype using the native tools of the medium: HTML, CSS and JavaScript, connected to a database of real content. This necessitates getting the data model (and thus the content strategy) in place up front, but that’s undoubtedly a good thing anyway. If information architecture is the management and structuring of information, then it must follow that this needs to be in place before worrying about how the user interface looks. Rough web pages that broadly structure and position content can be created relatively quickly, ready for iterative design embellishment. The beauty of this approach is that your prototype is actually an early version of the live site itself, so no effort gets wasted.
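By way of a sketch (the endpoint and data are hypothetical): a rough prototype page is little more than real records pulled from the content store and laid out plainly, ready for design embellishment later:

    // Deliberately rough: real content, minimal chrome.
    // Assumes a JSON endpoint sitting over the real content database.
    $.getJSON('/api/dishes/apple-crumble', function (dish) {
      var list = $('<ul>').appendTo('body');
      $.each(dish.recipes, function (i, recipe) {
        $('<li>').text(recipe.title).appendTo(list);
      });
    });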

It’s probably heresy for many UX designers who are more comfortable in the Adobe suite, and who may consider HTML/CSS to be client-side development that just ain’t their bag, but perhaps it’s just a skills gap that’s yet to be closed. In his IAS11 talk, Jared Spool asserted that the ‘most valuable UX person in the world‘ is one who can code. The increased efficiency in having a designer implement their own vision using native tools is clear, even if the resultant ‘code’ isn’t exactly production-ready. It also forces designers to work with the art of the possible, rather than tacitly suggesting functionality or data relationships in their wireframes that are presumably supposed to be realised by developer magic.

Real prototyping means UX can break free of its culture of interim documentation, and actually get on with making the damn thing. There’s little need for persona documents or paper prototypes when you can put real users in front of real content, real fast.

1. User Experience isn’t just what you can see

Every time a project manager says ‘We need someone to do the UX on this’, God kills a kitten. User Experience Design is NOT limited to the user interface, and yet it’s tough to shake this perception that UX is a kind of glorified interaction design, post-rationalised and invoice-enriched through a bit of cod-psychology and ten-cent sociology.

On a practical level, every layer of a web build has significant impact on the experience a user will ultimately have. User Experience therefore touches everything from the business strategy, through the content and data modelling, to the many and varied ways that this content can be served to users. In that sense, everyone involved in the project is contributing to its design, and thus we need to do away with this cultural and procedural divide whereby the white-collar (or more likely black t-shirted) ‘designers’ ideate in a vacuum, before tossing their Photoshop comps over an invisible wall for the blue-collar ‘techies’ to build.

Domain-driven design (as practiced at the BBC) begins with breaking down the elements of a subject domain, connecting these elements via their real-world relationships, then mapping the conceptual ‘things’ to available content. Once you start to work in this way, it’s hard to imagine a more sensible approach. Domain modelling is user-centered design at its most elementary, upon which can be layered all manner of sophisticated, device-appropriate presentation models, safe in the knowledge that you’re building on solid foundations. This is a web that has moved away from ‘sticky’ siloed websites, towards a decentralised dissemination of content via search and social media. Presentation and interaction are therefore less important than the content itself, which sounds blitheringly obvious when you say it out loud. The new wave of Information Experience Design Architects need to worry less about interaction, less about taxonomy, and more about making content findable, pointable, searchable and sharable.

I am, as ever, indebted to the boys: For some less theatrical, more practical grounding in these practices, check out the seminal How We Make Websites by Michael Smethurst, or this equally-lovely webcast by Silver Oliver.

Models of interactive storytelling

In this post I want to look at the various models of interactive storytelling. This isn’t necessarily an exhaustive list, and indeed many examples blend multiple models in order to tell their story. Without further ado:

Non-linear progression

Here the overall story arc is fixed in place. The reader may explore story elements in the order they choose, but ultimately all roads lead to Rome. Imagine a detective story, where the reader may experience the questioning of suspects in any order, but ultimately the murderer remains the same as at the story’s outset. Better still, don’t imagine it: go play Deadline by Marc Blank. Non-linear progression gives the reader the story fragments out of order, but in a controlled way that a) makes sense, and b) keeps them on a guided path, so that while smaller elements of the story are experienced in any order, larger plot points (and thus the direction of the story) are always kept in sequence.
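The mechanic sketches neatly as ordered acts containing unordered scenes (the structure and scene names are invented for illustration) – the reader roams freely within an act, but the arc only advances once the act is exhausted:

    // Acts stay in sequence; scenes within an act may be visited in any order.
    var acts = [
      { scenes: ['question-butler', 'question-maid', 'search-study'] },
      { scenes: ['confront-suspect', 'find-will'] },
      { scenes: ['accuse-murderer'] } // all roads lead here
    ];

    function visit(state, scene) {
      var act = acts[state.act];
      if (!act || act.scenes.indexOf(scene) === -1) return state; // not reachable yet
      state.seen.push(scene);
      var done = act.scenes.every(function (s) { return state.seen.indexOf(s) !== -1; });
      if (done) { state.act += 1; state.seen = []; } // the plot point fires, in sequence
      return state;
    }

    var state = { act: 0, seen: [] };
    visit(state, 'question-maid');   // fine – any order within the act
    visit(state, 'accuse-murderer'); // ignored – the ending stays in sequence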

Traditional parser-based interactive fiction (from the earliest days of Infocom to modern works like Photopia and The Space Under The Window) offers a very pure form of non-linear progression, although IF is not necessarily interactive in the truest sense. Often, the elements of the story world are set in place at the outset of the game, and through exploration the player merely exposes those elements (either through spatial exploration or LOOK and EXAMINE commands). Of course, the puzzle elements of IF (which may require the player to pull levers or combine objects) give the player a feeling of agency, and really do alter the state of the story world, but usually such puzzles are presented as obstacles to story progression rather than altering the course of the story itself.

Branching plot

Probably the most common preconception of interactive storytelling is that the story may branch in a number of directions. In modern culture, this model was popularised in the Choose Your Own Adventure and Fighting Fantasy series of books, where readers were asked to flip to specific pages based on story choices they were offered. More recently, branching plot has become a standard convention in computer role-playing games.

In Bioware’s Knights of the Old Republic, the player’s actions shape their character toward good or evil, which consequently changes the story they experience, giving them a unique blend of plot elements. However, such works are finite, so not only is the number of branches limited, but often the branches (experienced during the ‘muddle’ of the story) eventually coalesce back to a single resolution.

Therefore the reader again really travels a single story arc, albeit one with choices along the way. Multiple endings are possible of course (as is the case with KotOR), but frequently these are a matter of life and death. Taking the ‘wrong’ choice means the story ends abruptly and the game is over. Rarely do these feel like a satisfying conclusion to the story; usually the player is aware of their failure to reach the author’s intended ending, and thus is compelled to try again.
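The shape of it as a toy graph (the nodes are invented): branches fan out through the muddle, then coalesce back toward the author’s single intended resolution:

    // A branching plot is a graph – note how both paths reconverge.
    var plot = {
      ambush:    { text: 'Raiders attack.',             choices: { fight: 'fight', flee: 'flee' } },
      fight:     { text: 'You stand your ground.',      choices: { next: 'aftermath' } },
      flee:      { text: 'You escape to the hills.',    choices: { next: 'aftermath' } },
      aftermath: { text: 'Either way, the war begins.', choices: {} } // single resolution
    };

    function next(node, choice) {
      return plot[node].choices[choice];
    }

    next('ambush', 'flee'); // -> 'flee', which leads only to 'aftermath'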

Story as unlockables

Indeed, most modern video games are imbued with a strong sense of story, even those of a genre where it could be said to be superfluous (such as the first-person shooter or platform game). Here the story is loosely overlaid on the player’s actions, with narrative acting as a supporting incentive for progression through the game’s levels. The player’s action may be to make it through the maze-like level, killing enemies and perhaps completing a quest, and they are rewarded with a cut-scene that segues between missions. The effect is that the player must succeed to unlock the next chapter of narrative, but beyond basic success or failure, their interactions are not really influencing the story itself.

However, as such games become more sophisticated, they are getting better at infusing story into the gameplay itself. Bioshock (ostensibly an FPS) was lauded for its story. Its innovative use of ‘audiologs’ to telegraph backstory and plot elements ensured the story was progressed during all the fighty bits, not just in-between. And by borrowing morality choice and trust conventions from RPGs, the player had real agency in affecting the story outcome, placing it more in the ‘branching plot’ model of interaction.

Alternate-reality games take non-linear progression out of a controlled environment and into the real world. Players are required to piece together clues scattered across the web, ambient media, text messages and other communication forms. Solving these clues rewards the player with a story fragment that leads to the next clue. ARGs are cleverly written to allow non-linear progression, but essentially the entire story is planned in advance, with plot development a reward for the successful solving of a clue.

The developing story

I suppose another kind of interactivity is one where the story is not pre-conceived at all, but instead is written through interaction. This would be difficult to achieve through game mechanics since it could not rely on existing story fragments, and is arguably only storytelling from the perspective of a non-participant. For the interactor, it’s really story writing. I struggle to think of popular examples of this, save for the old parlour game where each player must invent a new sentence to follow the previous one and thus collectively build a narrative. But perhaps the real-time, multi-user aspects of the web offer an opportunity for this to be a valid mechanic of interactive storytelling?

When I set out writing this brief audit (of sorts) I was unsure which examples of interactive storytelling would spring to mind. Curiously, most of them are games. This raises the question of whether there’s something about the mechanics of games that is required to tell a story interactively. More specifically, without the overcoming of obstacles inherent in gameplay, what is the incentive to experience a story interactively? Authors would probably suggest it is in the value of the prose itself; that interest in the story keeps the player/reader progressing. That’s certainly true of books and movies, but then linear media is passive. Once you ask your reader to work for their story, I’d argue their motivation must be much higher. The satisfaction of overcoming a game obstacle evidently supplies that, but can the workload of interactivity be justified by story alone?

Slouching toward interactive narrative

My work has taken an unexpected turn into the world of transmedia storytelling and interactive digital narrative, in an attempt to create a model for telling stories online, and more specifically figuring out how best to use the native interactivity of the web as a storytelling medium.

Having not blogged for a bit (or indeed on this blog, ever) I thought I’d add to the discussion and help sort out my own thoughts on what all this stuff is, and where it might go. As usual when I try to write out these thoughts, my mind has shot all over the map, so I’ll do my best to break up my observations into separate posts. This might take a while.

So then, interactive storytelling is an interesting diversion for someone of my generation, who grew up on books like The Warlock of Firetop Mountain, and interactive fiction games like Zork, Deadline, and perhaps most pertinently The Hitchhiker’s Guide to the Galaxy (of which more in a future post).

These days, in a world of almost constant media consumption across the various devices that fill our living rooms and pockets, the time is right to consider how to tell stories that best fit each medium. While we are happy to read traditional novels or watch movies on our smartphones, is the linear format of these forms an unnecessary constraint on a device that offers inherent interactivity? Or conversely, does interactivity and the potential for influencing an outcome meddle with the rules of storytelling itself? Several people I know are already debating this, notably Kat Sommers and Paul Rissen, and I expect my thoughts will increasingly intersect with theirs.

Over the coming posts, I want to look at what I think both interactive and transmedia storytelling look like. Neither of these are new ideas, and certainly interactive fiction (in its broadest sense) has a lot of past form, from the Choose Your Own Adventure series of children’s books, through the often-risible ‘interactive movies’ of the CD-ROM era, to modern video games and real-world Alternate Reality Games.

But first I want to explore what I think a story is, since apparently even this is subject to debate. To me, a story is something that relates a sequence of events, usually (but not necessarily) from the perspective of a central protagonist, who is involved with and influenced by those events, and probably undergoes some personal growth as a result. It has an explicit structure; in the words of Philip Larkin, a beginning, a muddle and an end.

Crucially, a story must be told. It is not open-ended, nor does it occur in the present-tense. It may of course be told in the present-tense (and to great effect, as with Fight Club or The Time Traveler’s Wife) but ultimately it is a complete account of events that have already happened.

In the case of interactive fiction games, where the player experiences the story unfolding in response to their actions, the experience of play may feel open-ended, and indeed the player’s actions may affect the events that subsequently unfold. But even then, the story is the one written after the fact; a record of what the player did do, rather than all the roads not taken. In that sense, computer games could be thought of as a kind of story engine, capable of generating a different story for each player. Yet the story itself, as co-authored by player and creator, remains linear.
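That ‘story engine’ idea can be sketched in a few lines (the events are invented): during play the system merely logs what the player did do; the story is the linear transcript read back afterwards:

    // During play: an open-ended present tense of possibilities.
    var log = [];
    function act(event) { log.push(event); }

    act('entered the crypt');
    act('ignored the locked door');
    act('woke the caretaker');

    // After play: the story – one linear, past-tense account of the path taken.
    var story = 'The player ' + log.join(', then ') + '.';
    // -> 'The player entered the crypt, then ignored the locked door, then woke the caretaker.'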

Further, and this may be going out on a limb, a story must be eventful. There’s a certain rhetoric floating around my workplace that online user journeys are themselves stories. That for example, a clickstream showing that a Giant Panda lives in a Broadleaf Forest and is herbivorous, is telling a story about the Giant Panda. As a generalisation, I don’t buy it. Certainly one can use hyperlinks as a method of story navigation, and as mentioned, an interactive format allows for a story to unfold through a non-linear set of choices. But unless this journey is explicitly eventful, it is no more a story than my walking down the street is a story. If my walking down the street involved me besting a rabid badger with my trusty spork, then certainly it has the makings of a story, but it doesn’t become so until told to you as a complete account after the fact.

Finally, I want to touch on another important distinction: that of plot versus narrative. I’m no literary scholar so I may have my terms muddled, but there’s a difference between a sequence of dramatic events (plot) and how those events are voiced to the reader (narrative). One could say that it is plot that drives a story forward, but narrative that makes it interesting. Many novels have used unconventional narrative (such as correspondence – Stoker’s Dracula and Matt Beaumont’s e come to mind), but always these describe a linear plot. Interactive stories too mostly use a non-linear narrative (i.e. the player’s actions) to further a linear plotline. Occasionally in interactive storytelling, the plotline itself is affected, even to the extent of altering the story’s outcome. In my experience however, it is rare that the ‘multiple endings’ approach yields a satisfactory conclusion (though Emily Short’s Galatea and Mateas & Stern’s Façade are frequently cited as exceptions). Most of the time the player is left with the distinct feeling of having either succeeded or failed to reach the author’s intended ending.

Perhaps then stories work best at their most straightforward. The challenge for interactive storytellers is to use their arsenal to enhance the storytelling, rather than obfuscate it. Since our most beloved stories from the past 500 years have required no more interactivity than the turn of a page, it’s no small feat.

Please don’t mind the mess. It’s not normally like this.

Honestly. It’s just that I haven’t long moved in, and good cleaners are so hard to find these days. So y’know, make yourself at home. I think the tea bags are in one of these boxes somewhere. Oh no – don’t mind that, you know how dogs get stressed in a new home. Hang on, I’ll get a cloth. No, don’t rub it – just let it dry and it’ll brush off.

While you’re here, you might want to follow me on Twitter. What’s that – you already came from there? Oh right, you were after the Disneyland slideshow from IA Summit 2009. Yes, it did go well thanks. I’ll probably do a thing about it once I’ve put these shelves up. In the meantime, you can see Creating Magic Kingdoms: 9 Lessons From Disney’s Imagineers on Slideshare. Huh? No I know it doesn’t make a whole lot of sense on its own. Yes, I was trying that Slidecast thing, but it isn’t really built to handle slide changes every couple of seconds. If you have any suggestions about that, let’s hear ‘em.

What’s this blog about? Really not sure to be honest. I mean, it’s in the ballpark of user experience design and information architecture, but it’s more my snarky observations than anything of profound interest. But you never know, I might hit paydirt eventually.

When am I speaking again? Haven’t been asked, to be honest. Would love to though.

Anyway, don’t mind me – I’m just hoovering up around here.  Back in a tic.