Feb 28 2011

Mid 2010 draft catch-up post – What will it mean when we all use a handful or even just one device to consume ‘all’ our media? Will we also use it to share ‘all’ our content, pushing it to large, dumb screens around us? When we talk about transmedia we often mean telling a complex story across many platforms, used by many users, objects and screens – perhaps partly in a book, on a TV show, inside Facebook on the PC, in a console game or at the cinema – but what will happen if all our personal media is consumed on only one screen? A world where TV is not about home screens, where Facebook is not about desk or laptop PCs and the most used games are not on chunky, dedicated consoles?

This article is not a resurrection of the dreaded, old-school (circa late 90s) convergence debate but something much more akin to the Trojan Horse saga. We are palpably moving into a space where a certain medium-sized, portable, connected, personal and social screen is slowly permeating our world. As powerful and practical as all the other gadgets and screens we have gotten used to, the 7-10″ tablet has hit a sweet spot. Already the fastest-selling device of all time, the iPad has caused a storm: the dam holding the waters back has leaks and other similar devices are starting to trickle out, but the dam is about to burst and we will be flooded in the next year as these tactile hybrids of smartphones and laptops seep into our daily lives – once again 🙂

Painting Original: The Marriage of the Virgin by Raphael. Public Domain

But will we converge towards this Swiss Army media device? Does it fulfil all our video, game, communication, work and social needs? More specifically, just as we are starting to master the ‘Art of Transmedia Storytelling’, are we now looking at a mono-device future? Will the art of transmedia storytelling turn into telling our stories across services and channels on a ‘single’ device rather than across multiple devices and platforms?


Almost half a decade ago I did a post called Media Journeys Part 2 that explored a simple evolution of media technology, from cinema at the start of the last century through to the portable revolution of the mid noughties. That post implied a device that would be a screen with a quality good enough to view films on: portable, tactile, connected, communicative and powerful enough to play networked and graphically rich games on. This post completes that train of thought and asks a key question – are online tablets the end point of 100 years of platform evolution and, more significantly, can we actually expect to see a decline in the number of ‘discrete’ platforms available to transmedia producers?

The Evolution Timeframe

Firstly, the timeframe. As explained in my earlier post, the most useful timeframe for this ‘postulation’ is the last 110 years – from the dawn of mass media communication and non-text-based storytelling (film). There has been a compression of the evolution in the last twenty years, so the curved template below reflects that year-wise. The reason the chart is curved is to allow my five key trends to converge visually.

[Chart: Convergence Media Tablets]

Evolution of the Human Interface

[Chart: Convergence Media Tablets]

One thing I didn’t cover in the post from five years ago was the evolution of the interface, which reflects how the technology has become sufficiently powerful for us to do less ‘unnatural fiddling’ at the ‘control’ end and use our bodies more naturally – less of a slave to QWERTY or cross, square, circle, triangle (PS reference!). A continuum follows (each number corresponds with the icon sequence, left to right, on the chart):

  1. The remote or keyboard – Alongside the TV in the 1950s the button-based remote control was born, and a decade or more later early QWERTY keyboards were used (speaking strange, alien languages) to communicate with computers. The remote is still with us today, but as we know a revolution is about to take place there.
  2. The mouse – The PC’s popularity spread quickly when the Mac was born in 1984, and the computer mouse became the norm for how we interact with complex lean-forward screens, versus rather clunky text entry using QWERTY keyboards.
  3. The controller – When game consoles entered the living room in the mid 80s, more complex controllers were required.
  4. Voice – Although still not universal, voice-controlled PCs became usable for dictation and basic control in the late 90s.
  5. Touch – Touchscreens were suddenly on every device from 2005 onwards, and today any portable device that is not touch feels very antiquated.
  6. Body – At the end of 2010 Xbox Kinect led the way for popular use of the whole body to interact with games; of course Sony and others had launched similar interfaces years earlier, but the 3D sensing of Kinect raised the bar significantly.
  7. Mind – (future only) Having played with controllers such as Emotiv, we can certainly look to a time when using parts of our body will seem old-fashioned, but that is another evolution diagram.

Items 4 to 7 are of course sensory, based on natural human movement & communication.

So we need a device that responds to my touch, that I can wave around so it gets a good sense of the GPS environment it is in, that can control games or measure my physicality – and all without a mouse or remote in sight.

Evolution of Film and TV Viewing Screens


Apr 08 2010

What happens when the content cloud descends? Rocket science or people science?

Here is a really simple metaphor to illustrate the pervasiveness and societal significance of Augmented Reality. For the past 20 years humanity has been ‘floating’ its content (its personas, its information, life data, economy and social media), creating a distant, electronic cloud drifting, conceptually, way up above us. A cloud that is only reachable when we are able to connect to it via a variety of fixed and mobile ‘information’ screens, themselves connected to a veritable wormhole, aka the global internet. (In reality, hundreds of thousands of servers murmuring around the world, with billions connected via hard wiring to receive richer media and experiences.)

Up until now this ‘content cloud’ (different to cloud computing) has been abstractly disconnected from our physical lives – we read news about California earthquakes sitting in Australia, we view videos on the train of a concert three weeks ago at a local venue, we have personal social networks fragmented across time and space, we play a game set in Hong Kong on a screen in London, we join Facebook groups comprised of half-friended, remote avatars (the extended self). 99% of the content in the cloud is not relevant to here and now (although it is a philosophically moot point whether the now ‘is’ the participation and consumption itself?!)

In a near-AR future, non-geo-sensitive content will be perceived as incomplete

The Descending Cloud

But that cloud has reached saturation; it can no longer stay afloat. There is just too much, or rather just enough, content to be temporally and geographically relevant. In other words, there is so much ‘stuff’ up there that it now makes sense to access it, in a true Web 3.0 way, in real time, in the present moment, from anywhere you are. At its simplest level it will be Google Earth slowly morphing out of your PC screen, growing to global scale and locking into place over the real world, or Facebook mapping itself onto a billion users’ faces out in the street, advertisers reaching out to wherever you are, personalizing your everyday life with relevancy rather than noise.

The always-on cloud has now become very useful to a range of stakeholders: marketeers, storytellers and users alike. Mists of information, media and experiences will descend onto our cities and physical infrastructure, becoming a persistent fog that coats everything in its path with layers of time- and place-stamped content. It will create a web of layers, of parallel narratives and realities, and enhance our experiences.
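To make the idea of ‘temporally and geographically relevant’ content a little more concrete, here is a minimal sketch (my own illustration, not drawn from the whitepaper): given a pile of geo- and time-stamped items from the content cloud, keep only those published recently and tagged near where you are standing. The ContentItem fields, the 2 km radius and the 24-hour window are all assumptions chosen for the example.

```python
# Toy sketch: filtering a "content cloud" down to what is relevant here and now.
# The data model and thresholds are illustrative assumptions, not a real service.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from math import radians, sin, cos, asin, sqrt


@dataclass
class ContentItem:
    title: str
    lat: float          # where the content was geo-tagged
    lon: float
    created: datetime   # when it was published


def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))


def relevant_here_and_now(items, my_lat, my_lon, radius_km=2.0, max_age=timedelta(hours=24)):
    """Keep only items geo-tagged near me and published recently."""
    now = datetime.now(timezone.utc)
    return [
        item for item in items
        if distance_km(item.lat, item.lon, my_lat, my_lon) <= radius_km
        and now - item.created <= max_age
    ]


if __name__ == "__main__":
    cloud = [
        ContentItem("Concert review, local venue", 51.5236, -0.0754,
                    datetime.now(timezone.utc) - timedelta(hours=3)),
        ContentItem("Earthquake report, California", 36.7783, -119.4179,
                    datetime.now(timezone.utc) - timedelta(weeks=2)),
    ]
    # Standing near the venue in London: only the first item survives the filter.
    print(relevant_here_and_now(cloud, 51.5246, -0.0760))
```

In practice the filtering would of course be fuzzier, blending distance, recency, social graph and intent rather than hard cut-offs, but the principle is the same: the fog of content resolves around wherever and whenever you happen to be.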

OK, fluffy intro over. This leads on to some high-level areas of a ‘consultancy’ whitepaper I did mid last year (which, annoyingly, I still can’t publish), but some key themes are explored below.

What does this mean on the ground, a ground covered in this fog of information? The transformative effect of our physical world being invaded by ‘cyberspace’ will make the current discussions about social network privacy seem like a children’s party. When the ‘web’ spreads into and permeates our real world, will there be any hiding places? As portable screens become practical (think iPad with camera), pervasive wearable computing becomes commonplace and surveillance technology becomes ubiquitous and transparent, society will evolve way ahead of government and law, which, powerless to stop the flow of information on connected screens, will be even more powerless to stop this flow moving into real space.

“Augmented reality allows people to visualize cyberspace as an integral part of the physical world that surrounds them, effectively making the real world clickable and linked,” says Dr. Paul E. Jacobs, chairman and CEO of Qualcomm.

The videos below might give them ‘digital’ food for thought.

Beware: I would like to point out that everything below has already happened or is about to launch in the next few months.


From Eyetap.org (a wearable computing lab in Toronto), dating from 2007 – “Stewart Morgan discusses Architecture of Information on the show Daily Planet. It is a visionary short film showing augmented reality, and the implications of its applications.”


What kind of society will it be when our personal profiles, details and content are available to anyone in the street simply by scanning our face? That person across the train carriage: are they really playing an iPhone game, or finding out ‘everything’ about you (well, at least that which you have placed on the open web)? A short video that will shock forward thinkers…
