Having just co-written a DTI paper about the future of time-shifted TV and VOD in the UK, and having done several posts on tools to personalise video content, it was a great treat to find two projects, both at least at alpha stage, falling squarely in the personalised-media zone. The first comes from an ex-colleague (on his last day at the BBC!) who tells us about an internal project that takes the concept of collaborative editing (à la wiki) and applies it to rich media – in this case audio. Tom Coates has posted some quite detailed information about his Annotatable Audio project. OK, this is not rocket science – I was involved in several broadcast projects doing the same with video in early 2000 – but it is a simple idea with far-reaching social consequences for personal and professional media consumption. The USP of this simple AV tagging tool is that it is what could be described (using current terminology) as a conceptual mash-up: a tool for collaborative rich media annotation, wiki meets AV metatagging. To begin with, and on the theme of my last post on UGC, here is Tom’s context:
“… An on-demand archive is going to make the number of choices available to a given individual at any point almost completely unmanageable. And then there’s the user-generated content – the amateur and semi-professional creations, podcasts and the like that are proliferating across the internet. In the longer term there are potentially billions of these media creators in the world.”
And the bit of his post that makes this unique and a taste of things to come:
“But it gets much more exciting when you actually delve a bit deeper. If you want to edit the information around a piece of audio, then just like on a wiki you just click on the ‘edit / annotate’ tab. …you can change the title to something more accurate, add any wiki-style content you wish to in the main text area and add or delete the existing fauxonomic metadata. If you want to delete a segment you can. If you need to keep digging around to explore the audio, you can do so. It’s all amazingly cool, and I’m incredibly proud of the team that made it.”
When I was leading the media applications elements of TV-Anytime, we defined many ways that ‘segments’ (parts of the whole rich media temporal property) could be used creatively: everything from capturing pieces onto several devices to build a whole experience, through to more attractive business models such as targeted segment replacement and insertion – or, to put it another way, segments that you would like dropped into and around other content, contextualised or not.
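To make the segment idea concrete, here is a minimal sketch of how annotatable, replaceable segments might be modelled. All names here (Segment, Asset, annotate, replace) are my own illustration – they are not the actual Annotatable Audio or TV-Anytime API.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One annotatable slice of a longer audio/video asset."""
    start: float                 # offset in seconds from the start of the asset
    end: float
    title: str = ""
    notes: str = ""              # free, wiki-style annotation text
    tags: set = field(default_factory=set)

@dataclass
class Asset:
    """A piece of rich media described as an ordered list of segments."""
    uri: str
    segments: list = field(default_factory=list)

    def annotate(self, index, title=None, notes=None, tags=None):
        """Wiki-style edit: anyone may retitle, describe or re-tag a segment."""
        seg = self.segments[index]
        if title is not None:
            seg.title = title
        if notes is not None:
            seg.notes = notes
        if tags is not None:
            seg.tags |= set(tags)

    def replace(self, index, new_segment):
        """Targeted segment replacement: drop other content into the timeline."""
        self.segments[index] = new_segment
```

With something like this, “capturing pieces onto several devices” is just distributing different Segment objects, and targeted replacement is a one-line swap of a segment for sponsored or contextualised content.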
Another group looking at the collaborative annotation space is Ourmedia. Described as “The global home for Grassroots media” (I didn’t realise that ‘grassroots’ was still being used as a term!), they are going beyond just audio, and richly segmented, annotated video on the web is a big part of their roadmap. First, what they do:
Video blogs, photo albums, home movies, podcasting, digital art, documentary journalism, home-brew political ads, music videos, audio interviews, digital storytelling, children’s tales, Flash animations, student films, mash-ups — all kinds of digital works have begun to flourish as the Internet rises up alongside big media as a place where we’ll gather to inform, entertain and astound each other.
And in their open-source ‘what’s ahead’ section, the bits that are relevant to this post:
A very cool new social networking system called the PeopleAggregator – a next-generation system that goes beyond the idea of social networks as mating games and uses open standards and network interconnectivity to bring social networking into the mainstream.
The ability for contacts or members to tag (add metadata info to) other members’ works.
I am always interested to see how something like the PeopleAggregator eventually manifests, but for the moment the really interesting thing is allowing others to attach metadata to your movies and audio. What the BBC and Ourmedia are doing is moving this out of the standards bodies and broadcast-level proprietary tools and into the wonderful world of user-maintained, user-controlled, metatagged segmentation of AV material. This is where things start to rock in the personalisation space. Search engines love to dig deep into text-based data, and once that data is attached to video content we can start to appreciate the significance of millions of people and their agents hunting around for very specific clips, moments, fragments and segments of longer-form, time-based content. The really exciting element of this whole area is the creative cross-media possibility once we have ubiquitous, richly tagged segments of AV – just think how much easier it will become to produce services that cross-link audiovisual content. The world of hyperlinked film, TV, radio, podcasts and so on moves so much closer.
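The “agents hunting for specific clips” idea can be sketched in a few lines: once segments carry user-generated titles, notes and tags, a plain text search can deep-link straight to a moment inside a long programme. This is a hypothetical illustration of mine, not any real search engine’s implementation, and the catalogue structure is invented for the example.

```python
def find_clips(catalogue, query):
    """Return (asset uri, start, end) for every segment whose user-generated
    text metadata mentions the query.

    catalogue maps an asset URI to a list of segment dicts, e.g.
    {"show.mp3": [{"start": 0, "end": 60, "title": "...", "tags": [...]}]}
    """
    q = query.lower()
    hits = []
    for uri, segments in catalogue.items():
        for seg in segments:
            # Pool all the human-written metadata into one searchable string.
            text = " ".join(
                [seg.get("title", ""), seg.get("notes", ""), *seg.get("tags", [])]
            ).lower()
            if q in text:
                hits.append((uri, seg["start"], seg["end"]))
    return hits
```

The point is that the search never touches the audio or video itself – it is the collaboratively added text metadata that makes a two-minute fragment of a two-hour programme findable and linkable.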
Posted by Gary Hayes ©2005