High Fidelity’s AKA Get Easy Like Sunday Morning

The High Fidelity team have ramped up their blog discussion during August and there’s a lot of interesting and fun stuff to look at. There’s a post about the importance of the speed of sound, there’s a post about JavaScript from Paloma (Paloma being a 17-year-old intern, not a place) and then there’s a post about frogs who aren’t frogs sitting on lilypads and singing.

Now, as this is Sunday and I need to shave and hit the pub to watch a bit of footie, I’ll focus on the frogs who aren’t frogs sitting on lilypads and singing. We’re promised a follow-up post from executive producer Ryan Karpf at a later date to explain the concepts behind it. For now, though, we’re left to see some members of the High Fidelity team at play.

Ryan, Chris Collins, Emily Donald and Ozan Serim all feature in this video, as does a guy with very large shoulders who looks uncannily like the avatar form of former Linden Lab employee Andrew Meadows (AKA Employee Number 2 when he worked at The Lab). However, as this avatar isn’t introduced, I’m not 100% sure who it is.

The post introduces a name for the High Fidelity band: they are known as AKA, and they are also known as AKA too.

The video in the post exemplifies High Fidelity in action, as well as Chris Collins reminding me of a character from Monkey Island for some reason. What we see here, though, are facial expressions, and once again the mouth movements are pretty damn impressive.

We also get to see a guitarist in action, with Ozan Serim on guitar introducing the song with his traditional countdown, which you can just about hear. Arm and hand movements feature too, although quite how that has been achieved is not explained in this post.

Sometimes it’s better to see something in action rather than get bogged down in the discussion, and this is one of those times. The concepts at play here work, and they work largely well. There are some imperfections, of course, but that’s to be expected of an alpha product; indeed, in this day and age imperfections are to be expected even in a completed product.

Personally I enjoyed this; your mileage may vary, but High Fidelity does appear to be progressing very nicely.


6 Replies to “High Fidelity’s AKA Get Easy Like Sunday Morning”

  1. Count me out with the impressiveness of this tech. The avatar expression capture is good up to a point but then it stops. Most of the avatars look mentally disabled.

    1. My understanding is that people will be able to create their own avatars so I’m not getting too hung up on how the avatars look, designers will make improvements there.

      The facial expression technology is looking impressive.

1. But it isn’t impressive, Ciaran. The eyes look crossed or squinty. Mouths gape open inappropriately.

I’m not going to overlook these things just because the notion of motion capture is cool. That would be denying our own highly evolved skills in reading facial expressions. Hi-Fi is asking us to dumb ourselves down and say yeah, it’s good… good enough. It isn’t.

1. I’ve commented on the eyes before; they do seem to go a little out of control.

The mouth movements seem to work quite well; the people behind the avatars may well be opening their mouths without realising it. We react differently face to face: watching a computer screen doesn’t have the same richness, nor does it make us as aware of our own reactions as physically seeing someone does.

1. Read what you have just written. You are already making excuses for this technology, excuses that you would not extend to a real person. If we were face to face and my expressions were the same as these VR poppets’, you could most likely, and rightly, assume that I had some kind of mental disability.

None of them are mouth breathing; the capture is just not good.

2. Real-time facial animation was presented for the Unity game engine a year ago: the exact same tech, only more advanced. It’s shown in this video at minute 18:30: https://www.youtube.com/watch?v=vjgSbX28Qz0

What is missing is the streaming part, but that could be added by two or three student coders from a tech university in a month or two. Streaming voice or streaming the captured animation data is really not that hard: you just send the packets with the data from the server to the client. It is worth watching the Unity presentation, as the tech presented there gives you a deeper insight.
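To illustrate the "just send the packets" idea, here is a minimal sketch in Python of what streaming captured animation data might look like. This is purely my illustration, not High Fidelity's or Unity's actual protocol: the frame format (a frame counter plus a dict of hypothetical blendshape weights such as `jawOpen`) is invented for the example, and real systems would add sequencing, compression and interpolation on top.

```python
import json
import socket
import struct

def pack_frame(frame_id, blendshapes):
    """Serialise one animation frame: a frame counter plus a dict of
    blendshape weights (names like 'jawOpen' are illustrative only)."""
    payload = json.dumps(blendshapes).encode("utf-8")
    # Header: frame id and payload length, both network byte order.
    return struct.pack("!II", frame_id, len(payload)) + payload

def unpack_frame(datagram):
    """Reverse of pack_frame: recover the frame id and the weights."""
    frame_id, length = struct.unpack("!II", datagram[:8])
    blendshapes = json.loads(datagram[8:8 + length].decode("utf-8"))
    return frame_id, blendshapes

if __name__ == "__main__":
    # Loopback demo: a "server" socket sends, a "client" socket receives.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.bind(("127.0.0.1", 0))  # let the OS pick a free port
    addr = client.getsockname()

    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.sendto(pack_frame(1, {"jawOpen": 0.42, "eyeBlinkLeft": 0.9}), addr)

    frame_id, shapes = unpack_frame(client.recv(4096))
    print(frame_id, shapes)
    server.close()
    client.close()
```

In a live pipeline the sending loop would run at the capture rate (say 30 frames per second), and UDP is the natural fit because a late animation frame is better dropped than replayed.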

Looking a bit deeper, this showed up: https://www.youtube.com/watch?v=NFBv_ypyhiA
Live motion capture data streaming in Second Life, back in 2010. Nobody blinked when it became possible; nobody used it or saw potential in it.

There are really a lot of impressive gadgets and tech around these days, in particular in Unity, because Unity is free and so all the students experiment with it.
