As promised, here is a more specific, ‘commercial’ follow-up to my previous, more ‘story’-centric post on this topic. I am currently developing & producing a range of Augmented Reality (AR, or if you prefer, ‘blended or layered media’) applications. I have also been asked to present at a few conferences and to write a detailed white paper on the implications of AR for government & business, looking at privacy, legal, copyright & crime issues. As readers of this blog will know, I also lecture, run workshops & work with creative teams to come up with future ‘social entertainment’ based around virtual worlds and augmented reality.
But the purpose of this short post is simply to list and try to categorise the many types of business Augmented Reality apps appearing in the market, and to identify opportunities. The first manifestations of AR appeared in the late 60s, became real in the 70s and by the 90s were already being used by major companies. Portable computing is now finally powerful enough to deliver AR to anyone with a smart phone or a latest-generation PC or console. But first, my simple definition of Augmented Reality:
Information, 3D models or live action blended with, or overlaid onto, the physical world in real time. A camera & attached screen are used to view the combination of reality & real-time virtuality. Devices or systems commonly used for AR include smart phones, webcam-equipped PCs and consoles, and head-mounted displays.
I have mentioned several times in this blog the next big steps in the metaverse, and two of my key points spring to mind: 1) integration with existing online ‘life’ tools and 2) a company with super-deep pockets, Google. So without much fanfare, pomp or circumstance, ‘Lively – perpetual beta‘ was sneakily launched: a Google-driven social virtual world made up of customised and personalised ‘rooms/environments’ that runs inside Windows web browsers (IE, FF). After a two-minute install I was up and running, checking out some CCEs (Community Created Environments) and looking for folk to chat to. The YouTube clip gives a good sense of the experience, but turn the sound down 🙂
With avatars and an aesthetic nearer to there.com than Second Life (which, bizarrely, is referred to in the marketing blurb above and on the instructional site), the real point of difference here is that you can embed your ‘spaces’ in blog posts and other embed-friendly web 2.0 apps. Here is one of my rooms embedded inside my Justvirtual.com blog (yes, you have the virtual world as an embedded active window!)…
…plus the fact that a multitude of other integrations with the Googleverse are around the corner. I have seen talk of contextual advertising, built-in YouTube on various screens, items within the rooms carrying Amazon-type product links and, of course, the ability to plop your space on top of Google Earth/Maps – the list goes on and on… There is also integration with Facebook, MySpace and others via OpenSocial.
There is also a nod to PS3 Home, given the strong ‘create your own room using bits of found furniture’ mechanic (in fact very Habbo too), the embed-it-in-your-blog feature, and what looks like a catalog where 3rd parties can eventually come along and sell pixel products and virtual goods – which is where the real biz model is, of course. Here are a couple of room screen grabs I took on a first whistle-stop tour, and I will report more when I have had time to dig in deeper… I include the ever-so-important embed pane at the bottom of each room; will virtual environments start to go viral? Sadly, with the current technology each room has been crippled to only 20 avatars at a time, and movement around is a rather clunky mix of click-to-jump-there and drag-with-mouse to move smoothly. There are some fun elements though…
…and I just filmed and uploaded onto the LAMP channel a quick grab of me (Gazlitt) and some of the 20 or so avatar interactions – limited to fun-fights or petting à la The Simpsons versus sophisticated Second Life custom ones. I can see this type of world working well with the there.com demographic, or perhaps the 10-to-early-20s crowd, and I am already tempted to create a couple of machinima pieces given the cartoony nature of the graphics.
… and some more insight – a Google Talks YouTube video from 24 January this year looking at the back-end…
Apparently Millions of Us and Rivers Run Red have been making objects and a few first-off-the-block branded spaces – probably because Second Life commissions are running a little low and they have time on their hands. A good summary of the service can be read at Virtual Worlds News, which features some quotes from Google’s Head of 3D Worlds, Mel Guymon…
“Our goal is to get everyone on the Web using 3D and to validate it as a part of the social experience,” said Guymon. “If we [as an industry] are going to do it, I think getting someone like Google to do it is crucial. And since we are doing it, I think we’re going to look back at having had Google do it as crucial.”
And to show that embedding web 3.0 into 2D social networks is going to be one of the really interesting growth areas this year, how about this new 3D ‘Vivaty‘ plug-in for Facebook, with tons of web 2.0 integration? Wired has a good introductory article on it from last week… Vivaty Scenes Taps Facebook, AIM for ‘Immersive Internet’
A new immersive web platform called Vivaty Scenes lets users create tiny virtual worlds and decorate them with content from around the internet. After adding Vivaty Scenes, which entered public beta Tuesday, to a Facebook or AOL Instant Messenger account, users can set up a customizable “room” where they can host chat sessions or small virtual gatherings within a web browser. The free service lets users pull content directly from some of the internet’s most popular sites. Scenes’ virtual televisions can be populated with any video from YouTube; virtual picture frames can be filled with any picture from a user’s Photobucket, Flickr or Facebook accounts.
A nice little toy to play with down below, the result of a few recent streams of thought. I have been thinking hard about the potentially millions of formats that can be created by mashing the many traditional media forms with even more new media forms – endless combinations of genre, platform, structure, intention and so on. This is nothing new for me: last century I was busy producing BBC cross-media forms, then working on the TV-Anytime (MPEG-7/21) classification dictionaries and, of course, recently leading LAMP and its transmedia work. I have blogged about media forms many times, and regular readers may remember the simple ‘Media Universe’ diagram (right), which covers community created content (including professional producers) but is really about separating distribution from screen from media type, so you can identify the right form(at) for an audience.
So for a bit of fun I threw together an ‘Experience Generator’ (in good old Shockwave) that plays on a few key concepts and potential new formats. It contains five sections of the comprehensive classification dictionary I worked on with a small team at TV-Anytime – originally a bloated and complex matrix of intention, format, content, commercial product, intended audience, origination, alert, media type and atmosphere, all used to describe content in depth so that service providers and consumers can perform rich findability and personalisation respectively. For the ‘Format Generator’ below I enhanced only five of the Classification Dictionary areas:
intention (what an experience is meant to achieve) – list of 27
content/genre (the subject or niche area) – 690
format (the shell or structure of the experience) – 108
media type (the actual key media element) – 17
atmosphere (rather fluffy: the mood the experience should create – a very Far East input into the standard) – 55
So, how to use it: click once on the app window below, then press letter keys (ideally in sequence, top to bottom) to randomly generate something unique and cool. Then consider how you would produce the experience. Doing the sums based on the number of items in each list, the possible combinations come to 27 × 690 × 108 × 17 × 55 = 1,881,257,400 format/experiences! OK, it’s not perfect yet – there is a little overlap between format and media type (still working on that), and there are far fewer newer forms than older ones – but I think it is a very useful ‘format for thought’ generator. Enjoy (oh, and good luck with the automatic Shockwave browser plugin!)
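For the curious, the core logic of the generator is trivial to sketch. Here is a minimal Python version of the same idea – note the sample entries below are invented stand-ins, not the real dictionary contents (the actual TV-Anytime lists hold 27, 690, 108, 17 and 55 entries respectively):

```python
import random

# Invented sample entries, standing in for the full TV-Anytime lists
# (27 intentions, 690 genres, 108 formats, 17 media types, 55 atmospheres).
DICTIONARIES = {
    "intention": ["entertain", "inform", "educate"],
    "content/genre": ["sci-fi", "cooking", "politics"],
    "format": ["quiz", "documentary", "alternate reality game"],
    "media type": ["video", "audio", "3D model"],
    "atmosphere": ["contemplative", "frantic", "cosy"],
}

def generate_experience(rng=random):
    """Pick one entry from each classification area, as the Shockwave toy does."""
    return {section: rng.choice(items) for section, items in DICTIONARIES.items()}

# With the full lists the combination count is
# 27 * 690 * 108 * 17 * 55 = 1,881,257,400 possible format/experiences.
```

Each keypress in the Shockwave version simply re-rolls one of these five picks; the value is in the juxtapositions it forces you to take seriously.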
I find this really interesting, and a portent of the future. I am sure you remember the context of the film Blade Runner: chasing down sneaky renegade bots. Then there was AI with its ‘Flesh Fair’ (humans destroying orphaned robots), then the agents in The Matrix – what is real and who is not, etc. All come from many years of fiction and dread about a future infested with automatons. AI is already receiving a backlash, yet a BBC report last week has experts agreeing it is here to stay and will be ubiquitous.
Here we are at the dawn of real and virtual spaces having AI-driven ‘invaders’. I say invaders because in the most ‘socialised’ virtual world, Second Life, not knowing who is human is already provoking backlash and resistance. The New World Notes post ‘How to Spot a Bot‘ is the tip of the iceberg, and many forums/blogs around the larger virtual worlds talk about corporate spies, automated gold farmers (WoW) and the embarrassment of spending three hours chatting up an avatar only to find out he or she is a database-driven machine! Even in a completely virtual environment – which is of course far ahead of ‘humanoids’ being present in real space – it has become very difficult to tell whether an avatar has a human or an SQL database driving it, and this is irritating many ‘human’ inhabitants!
This is definitely something that we will be facing more and more in the coming years and the BBC report ‘Machines to Match Man by 2029‘ takes a different approach.
“”I’ve made the case that we will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029,” said Ray Kurzweil. The report continues – Mr Kurzweil is one of 18 influential thinkers chosen to identify the great technological challenges facing humanity in the 21st century by the US National Academy of Engineering. The experts include Google founder Larry Page and genome pioneer Dr Craig Venter. The 14 challenges were announced at the annual meeting of the American Association for the Advancement of Science in Boston, which concludes on Monday.
I have been looking at AI in its various manifestations over the years, as it is the ultimate in personalisation: a digital you. Last year I created a simple chatbot on the ABC Island (and in other builds I have created in Second Life) that uses a Pandora look-up back-end (calling out to the web on each line of chat). I get many IMs in-world asking if it is actually me talking through it! (Not sure what that says about my conversational abilities!) There are also some ‘cultured’ (as in they can talk literature, science, etc.) book-bots I created for Thursday’s Fictions (another SL project) – and I am working with a great Australian company called MyCyberTwin, who are leading the way globally in personalised, personality-based AI. As regards the backlash mob, I quote Hamlet from NWN:
“And in any event, what happens when the bot farmers program their bots to have minimal AI and conversational abilities, a technology which already exists? I can see the fun in not knowing if the avatar you’re dancing with has a human being controlling her. But at some point, isn’t there an ethical obligation for bot owners to clearly designate them as such?”
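The chatbot pattern I described above – relaying each line of in-world chat out to a web look-up service – can be sketched roughly like this in Python. Everything here is illustrative: the `lookup` callable stands in for whatever HTTP call the real service requires (in Second Life that would be a scripted HTTP request to the bot back-end), and the canned replies are invented:

```python
def make_chat_relay(lookup, cache=None):
    """Return a chat handler that answers each incoming line by querying a
    look-up service via `lookup`, caching replies so repeated lines do not
    trigger repeated web calls."""
    cache = {} if cache is None else cache

    def on_chat(line):
        key = line.strip().lower()
        if key not in cache:
            cache[key] = lookup(line)  # one web call per novel line of chat
        return cache[key]

    return on_chat

# Hypothetical usage, with a canned dictionary standing in for the service:
canned = {"hello": "Hi there! Are you human?"}
relay = make_chat_relay(
    lambda line: canned.get(line.strip().lower(), "Tell me more.")
)
```

The embarrassing part, as the forums attest, is that even a look-up this shallow is enough to pass for a distracted human in casual chat.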
It is a shame these poor bots (they prefer to be called AIs at the moment – eventually, of course, we will treat them as ourselves) are already being blamed for the ills and wrongdoings in virtual worlds, but expect much more. Anyone for the next ‘Flesh Fair’ in Second Life? You may as well get practising for the real thing in a couple of years – or is that a couple of centuries? 😉