One of my resolutions this year is to blog more about actual personalized services rather than looking too much at the easy target of emerging trends, mostly dominated by US companies launching obvious services based on new technology enablers. I have posted before about personalizing the experience at service (read: interactive package) level, and the posts The MyEdit Phenomenon and It’s not about them, it’s about me!, amongst others, talk about ways that rich media in particular can be made to resonate with the viewer. This resonance means that what each individual gets in terms of video, audio, graphical and textual elements as part of their user journey is potentially unique to them and has relevance. It becomes relevant when the style of delivery, the content exposed, the emotional narrative and the level of interaction are tailored for them.
This of course goes against the potential for a mass shared experience of one property (linear or interactive programme) where the whole audience gets exactly the same thing. This has been the thorn in the side of getting linear creatives working in this field, but things will change. In answer to Christy’s prediction post, sub-section “Personalisation and the Shared Experience Crises”, there will still be the option to watch/use the ‘everyone’ version (see 1 below), talk about it over the water cooler and share in mass games and interactive-broadcast-type services. I think these highly personalized services will balance the mass impersonal services as both stretch out to each extreme of the personalization continuum, and apart from highly dramatic narratives all other genres are potential targets for effective development.
Developing tailored user experiences through a broad narrative framework first needs to be defined by the mechanism through which the narrative engine responds to user input. This input takes the form of metadata that the engine can then use to deliver alternate media types as the user journey progresses. I often say there are four ways (in addition to doing nothing, no. 1) that a ‘system’ can respond in a personalized way to a viewer:
1 – The viewer does nothing and they have a personalized experience in their head
2 – The viewer is asked to ‘type’ themselves (who are you most like from a pre-set list?) and the experience will format itself appropriately – clustering
3 – The viewer, at the beginning or at various points in the progression, is required to fill in an individualised ‘form’ (this could be done through quiz mechanisms, simple tick boxes etc.), capturing their preferences/likes/dislikes – manual profiling
4 – The viewer uses the service and, in real time and based on routes that others have taken, the system generates the most appropriate media and further pathways through the service – collaborative filtering
5 – The viewer simply enters the service, which then learns about them based on a range of methods (how fast they click, what they spend most time doing, content they skipped, questions they answered [as in 3], the paths they are taking, emotional response [perhaps based on their input to a “how are you” function], mouse or controller movements during use etc.) – implicit profiling
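To make the modes above concrete, here is a minimal sketch of how a ‘narrative engine’ might dispatch on them. All names here (ProfilingMode, select_assets, the tag vocabulary) are my own illustrative assumptions, not part of any real service; the key observation it encodes is that modes 2–5 differ mainly in how the viewer’s profile is obtained, not in how it is applied to the content pot.

```python
from enum import Enum, auto

class ProfilingMode(Enum):
    """The five response modes described above."""
    NONE = auto()            # 1: personalization happens only in the viewer's head
    CLUSTERING = auto()      # 2: viewer picks a pre-set 'type'
    MANUAL_PROFILE = auto()  # 3: quiz / tick-box preference form
    COLLABORATIVE = auto()   # 4: routes other viewers have taken
    IMPLICIT = auto()        # 5: system learns from behaviour in real time

def select_assets(mode, profile, catalogue):
    """Pick media assets from a catalogue for one viewer.

    `profile` is a dict with a 'tags' list of preference tags (however the
    mode in question produced it); `catalogue` maps asset id -> set of tags.
    """
    if mode is ProfilingMode.NONE:
        # Everyone gets the same 'everyone' version of the property.
        return list(catalogue)
    wanted = set(profile.get("tags", []))
    # Modes 2-5: keep only assets whose tags overlap the viewer's profile.
    return [asset for asset, tags in catalogue.items() if wanted & tags]
```

A usage example: a viewer ‘typed’ as an action fan (mode 2) would receive only the action-tagged clips, while an untyped viewer receives the full linear set.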
The real-time personalization in number 5 is where the line starts to blur between a resonant service and levels of artificial intelligence. If the media type is mostly text-based then services already exist that can generate narrative on the fly. Not a list-based ‘bot’ but something using language and narrative rule sets to create story dynamically based on your input. Once in text, then text-to-voice/sound can be brought into the mix and things really start to move into human-machine interaction à la ‘the system is alive!’. The interface itself, the human-system interaction (whether keyboard, controller or a more integrated body device), of course comes into play in how it can ‘learn you’ and respond accordingly. I think the mobile device, with more ‘sensors’ around its frail, tiny frame, could become a great way to input useful emotional info; after all, if lie detectors are based around hand/finger response then we already have an ‘HCI’ Trojan horse in place. Before I venture into the practical dimensions of all of this, I always observe that once some way down the road of investigating resonant personalization the technology starts to take over, and many media futurists then lose sight of the narrative and story.
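The implicit ‘learn you’ idea can be sketched very simply: each behavioural signal (a skip, a long dwell) nudges per-tag preference scores up or down. This is a toy exponential-moving-average profiler under my own assumed event names and weighting, not a description of any real engine.

```python
def update_profile(profile, event, weight=0.1):
    """Nudge per-tag preference scores from one implicit signal.

    `event` is a (kind, tags) pair: 'dwelled' pulls the tags' scores
    towards +1, 'skipped' pulls them towards -1, anything else towards 0.
    `weight` controls how quickly old behaviour is forgotten.
    """
    kind, tags = event
    signal = {"dwelled": 1.0, "skipped": -1.0}.get(kind, 0.0)
    for tag in tags:
        old = profile.get(tag, 0.0)
        # Exponential moving average: blend the new signal into the old score.
        profile[tag] = old + weight * (signal - old)
    return profile
```

Richer signals mentioned above (click speed, controller movement, a “how are you” answer) would simply become further event kinds feeding the same scores.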
Now, looking at the personalization of narrative by applying some of the five potential methods above, we move into different problem areas. One of my bugbears with so-called generative narrative services is the constant pause as the system asks for ‘input’. Firstly, the ideal service will flow (be uninterrupted). I am lucky to have been involved in, and privy to, a few creations that do have continuity yet also profile the viewer dynamically. Sadly the best distribution method for some of these kinds of services, DVD, is not naturally equipped to provide flow as part of its architecture, and online suffers network delays. Providing a sense of ‘natural time’ and flow is a big challenge. The success of a good movie is often because an editor has spent many days fine-tuning the pace of the narrative to perfection; doing this dynamically requires those skills to be incorporated into the AI/resonant engine. The constraints of an online or interactive TV personalized narrative mean QoS (quality of service) issues come into play. Responses are timed imperfectly and so the fourth wall begins to break down as the medium jars with the message.
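One common engineering answer to the flow problem is look-ahead: keep a short queue of upcoming segments already chosen from the current profile, so playback never stalls while the engine re-plans or the network lags. A minimal sketch, with `next_segment` standing in for whatever planning function a real engine would have (an assumption of mine, not something described in this post):

```python
from collections import deque

def prefetch(next_segment, profile, lookahead=3):
    """Pre-select the next few narrative segments for one viewer.

    `next_segment(profile)` is assumed to return the id of the next
    segment to play; keeping `lookahead` of them queued means the
    stream can keep flowing while the engine or network catches up.
    """
    buffer = deque()
    while len(buffer) < lookahead:
        buffer.append(next_segment(profile))
    return buffer
```

The trade-off is that segments chosen early may be slightly stale with respect to the viewer’s latest behaviour, which is exactly the pacing judgement a human editor currently makes by hand.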
As for the story structure and associated assets in this personalized narrative paradigm, I will bow to the thousand academics and occasional practitioners who have debated the cross-over tension between narrative and game-play for years. The personalized resonant narrative, though, is more akin to a customised edit, for you, from the same pot (or dynamically generated stream) of content. So the audio or music, the voice-over, the actual video stream may change its order or open up into different archives, providing alternate perspectives and emotions. At the very simplest level you may have ‘told or informed’ (via 2–5 above) the engine that you prefer more physicality or action in the narrative, and so it will be. At the other extreme, you prefer more dialogue. At another level, you like stories that are investigative and leave you guessing until the end versus ones that reveal at every pivot point. You may prefer stories set in exotic locations, ones you can be a character within, ones you can actually steer as they progress. As we go deeper into this particular rabbit hole and get lost in theory, perhaps we should step back a moment and think in simpler terms. A narrative structure that is personalized to you may mean only a few changes: a few cut scenes, the occasional tangent that no one else sees, or a sub-plot that makes it more meaningful to you. The music may be less prominent and a voice-over may be called into play to describe more fully what is happening, etc.
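The ‘customised edit from the same pot’ idea can be sketched as a ranking problem: score each clip in the pot against the viewer’s per-tag preference weights and cut the edit in that order. Clip ids, tags and weights here are illustrative assumptions only.

```python
def personalized_edit(clips, profile):
    """Order clips for one viewer's custom edit.

    `clips` is a list of dicts with 'id' and 'tags' keys; `profile`
    maps tag -> preference weight (e.g. built up by implicit profiling).
    Clips whose tags the viewer favours come first.
    """
    def score(clip):
        # Sum the viewer's weights for every tag the clip carries.
        return sum(profile.get(tag, 0.0) for tag in clip["tags"])
    return sorted(clips, key=score, reverse=True)
```

An action-weighted viewer thus gets the chase before the conversation; a dialogue-weighted viewer gets the reverse cut from exactly the same pot.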
I will post more about the above and build a few simple examples over the next weeks, but as my personalized life has eaten into this post I will close with a few production issues. The permutations and combinations grow combinatorially, of course, as you allow each component of an interactive service to be ‘personalized’, and the amount of metadata one needs to add to accommodate the kind of functionality described here is often prohibitive. Then there is the content management aspect, multiple platforms and the application build on top. We, the industry, need to start small and develop some templates and standards if these sorts of services are to be given a chance. It has to come from the AI side of the fence, as manually building the depth (read: numbers of routes/responses/media assets) required to make these sorts of services work is unmanageable. AI and personalization will be ubiquitous bed-fellows over the coming years.
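The growth in permutations is easy to quantify: if each personalizable component is chosen independently, the number of distinct edits is the product of the variant counts per component. A tiny sketch (the component counts are invented for illustration):

```python
def permutation_count(variants_per_component):
    """Total distinct personalized edits when every component is chosen
    independently: the product of the variant counts."""
    total = 1
    for n in variants_per_component:
        total *= n
    return total

# E.g. five components (video, music, voice-over, sub-plot, pacing),
# four variants each -> 4**5 = 1024 distinct edits to author and tag.
```

Even this toy example shows why manual authoring collapses quickly and why the depth has to come from the AI side of the fence.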
Posted by Gary Hayes ©2006