Human-Computer Interaction: Blending the Real and Digital Worlds...
If you were fascinated by Dexter's secret laboratory and by Tony Stark's (Iron Man's) virtual assistant Jarvis, and dream of giving physical substance to this sci-fi, then we share a common goal. This blog is my way of sharing some relevant knowledge in the fields of HCI and Computer Vision, widely used concepts, and upcoming technologies...
27th March '14: Presentation by the Glass team, led by Wilson White, at the School of Informatics and Computing at IUPUI.
An awesome presentation by Wilson White, who leads public policy for Glass! The brief one-hour seminar was attended by faculty and students alike, and the Glass team put forth some very compelling use cases for Glass, each with the potential for a really big impact on our lives, especially for the disabled.
The team also brought along extra Glass units for everyone to try and play with. Trying Glass hands-on was really helpful in changing my perceptions of the overall user experience.
The presentation was followed by a thought-provoking Q&A session. One of the things I keep thinking about is how Glass's immense capabilities might hinder the growth of other wearables at some point. Granted, Glass is much costlier than present-day smartwatches, but then again, it can do so much more, including almost everything smartwatches can do today. Activity sensing, notifications, tracking, guidance, voice commands, image capture, video recording: Glass can do it all, and it will only get better with time. But then, not everyone can afford Glass, and there is still the question of general acceptance by the public!
Here are some of the popular user stories included in the presentation:
Very recently, on the occasion of the withdrawal of support for Windows XP (one of the few operating systems Microsoft actually got right), I came across this video that tells the story behind one of the most iconic images ever captured, in the words of the photographer, Charles O'Rear, himself.
For everyone who thought the wallpaper "Bliss" was photoshopped or edited into this scenic beauty, here is the true story behind the blissful wallpaper we are so familiar with!
Have you spent endless hours painting your imagination on an invisible canvas deep inside your mind? What if you could paint your thoughts and ideas on the fly, in the air? Have you ever wished your artistic design on paper could come alive in front of you? Until now, these things were possible only in dreams and movies. Not anymore, because that technology now has a name: 3Doodler. Developed by a company called Wobbleworks with the help of Kickstarter, it is the world's first 3D pen. It will bring your ideas to life.
So What is 3Doodler?
Now you can just lift your pen, and what you draw will take shape in front of you. You can design and draw 3D objects in real time. 3Doodler uses ABS plastic (the material used in many 3D printers) to draw on surfaces or in the air. The pen measures 180 mm by 24 mm and works with either 110 V or 240 V power. With a few hours of practice, you can draw amazing shapes in the air, not just things on paper. It has been designed so that anyone can use it without prior computer or technical knowledge; you just need to plug it into a power socket.
Currently, 3D printing is out of reach for most people, as it is very costly and complex. The 3Doodler overcomes these hurdles and will make 3D printing fun and accessible to everyone.
How Does It Work?
If you can wave your fingers in the air, move them, or scribble, the 3Doodler is for you. If you are not short of ideas, don't waste your time drawing on boring paper, because your creative designs can now come alive in front of you in no time. Learning and teaching about 3D objects will no longer be a bottleneck; it will be much more fun. Draw a bicycle, a mobile phone, jewellery designs, a car, or anything else, and its plastic model will stand in 3D in front of you. 3Doodles can be created as flat forms and peeled off a piece of paper, as freestyle 3D objects, or in separate parts ready to be joined together using the 3Doodler.
Watch this breathtaking video to see the 3Doodler in Action:
The latest futuristic toy is finally here. What do you do when you see fiction come alive?! 2012 brought speculation about a device, being developed by the search giant Google, to push the boundaries of augmented reality. And now we see the fruits of their research, labour, and finesse: Google Glass. Google Glass is an attempt to free data from desktop computers and portable devices like phones and tablets, and place it right in front of your eyes. Various sci-fi movie directors envisioned such a head-mounted device as far back as the 1990s, but so far only Google has managed to turn the concept into a sensible device and bring its dream of wearable technology to life.
So what exactly can Google Glass do? Sure, it's sleek and stylish, but is it really that promising? Does it really have the potential to transform the way we use our PCs, smartphones, and so on? The first versions are in the hands of developers, who entered a lottery to fork out $1,500 for their own pair of spectacles. Here is the official link:
As well as a mooted 640 x 360 display, the built-in camera is a 5MP snapper that can film at 720p.
Battery life is apparently a day, although that's with the usual "typical use" caveat, which probably excludes a lot of videoing.
There's 16GB of flash memory built into the device, although only 12GB will be available for user storage. The device will sync to your Google Drive in the cloud.
Bluetooth and WiFi will be built in, but no GPS chip - so Glass will probably work best alongside an Android phone, although you can pair it with any Bluetooth-enabled phone.
There is a Micro USB cable and charger for the dev versions, and all of the above specs are expected to be replicated in the consumer versions when they arrive.
Lastly, Google Glass will come in five colours: Charcoal, Tangerine, Shale, Cotton and Sky.
Personally, I feel that this is a huge step for the wearable computing world. It's a bold step and a really good initiative, but it's far from perfect, and a lot will come down to personal preference. For many people, the idea of being filmed by someone through their glasses is unsettling, and having conversations that could be logged and later tampered with will be divisive. The inclusion of facial recognition looks like a good feature but could also be problematic. Google clearly hasn't thought it all through: even though this nifty device makes many tasks, such as taking photographs, navigation assistance, and recording video, much simpler, and the voice instructions are wonderful, several customers may have reservations about using such a device. It will surely not be allowed in corporate offices, and other restrictions might be imposed on its use to prevent breaches of privacy and security. The cost factor is a major concern as well. But kudos to Google for coming up with such a sleek, powerful device. Hopefully, in a few years this device will evolve into something much more user-friendly, secure, and acceptable to everyone. I am a Google fan, and I sincerely hope they get the job done before their potential rivals (Apple, Microsoft, Sony...).
March 26, 2013, 1:39 PM — Even with all our emojis and emoticons, a typed message will never carry the same sort of emotion as a face-to-face conversation. Sure, video calls have gotten a lot easier these days with FaceTime and Skype, but who makes calls anymore?
To help combat the problem, researchers from the University of Cambridge say they have developed the most expressive, human-like avatar ever created. Dubbed "Zoe", the avatar can read text out loud with all the emotion a disembodied face could have.
To create this human-faced avatar, the scientists had an actress named Zoë Lister read over 7,000 lines of dialogue while they tracked her facial expressions. The data from these sessions was used to train a virtual model. In case you think this is all just a recording: the underlying mesh structure of the virtual face can be manually manipulated for some really freaky expressions.
Zoe basically reads out a message you type, with the emotion selected manually on a set of sliders covering anger, sadness, happiness, tenderness, and fearfulness. In a case study, a group of volunteers correctly recognized the emotion of the virtual avatar (77%) at a higher rate than that of the actual human actress (73%). So once again, robots have beaten humans.
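To picture how slider-driven emotion might work under the hood, here is a minimal sketch of blending emotion settings into a single set of face-model parameters. The five emotion names come from the article; the parameter vectors, the `blend_expression` function, and the weighted-average rule are illustrative assumptions, not the Cambridge team's actual model.

```python
# Each emotion maps to a small vector of hypothetical face-mesh parameters
# (say, brow raise, mouth curve, eye openness). Values are made up.
EMOTION_BASES = {
    "anger":       [0.9, -0.8, 0.6],
    "sadness":     [-0.5, -0.6, -0.4],
    "happiness":   [0.3, 0.9, 0.5],
    "tenderness":  [0.1, 0.4, 0.2],
    "fearfulness": [0.7, -0.3, 0.9],
}

def blend_expression(sliders):
    """Weighted average of emotion base vectors, weights from the sliders.

    sliders: dict mapping emotion name -> slider position in [0, 1].
    Returns one parameter vector that would drive the avatar's face mesh.
    """
    n = len(next(iter(EMOTION_BASES.values())))
    result = [0.0] * n
    total = sum(sliders.values()) or 1.0  # avoid division by zero
    for emotion, weight in sliders.items():
        base = EMOTION_BASES[emotion]
        for i in range(n):
            result[i] += (weight / total) * base[i]
    return result

# Mostly happy, with a touch of tenderness:
params = blend_expression({"happiness": 0.8, "tenderness": 0.2})
```

A real system would feed such a vector into the trained face model each frame; here it just shows how independent sliders can collapse into one expression.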
The team behind Zoe is currently looking into possible applications for the avatar. One of the most prominent uses for this technology could be to help autistic children read emotions and the deaf to read lips. The scientists also believe the technology could have a future in gaming, audio-visual books, and online lectures.
Since the system is practically a template, anyone could record themselves, or someone else, to create their own personalized assistant. The system is also extremely lightweight, just 10 megabytes, so it might end up being implemented in our phones and tablets someday.
As stuff like Google Glass becomes mainstream, we’re going to see a lot more wearable computing devices around. But one thing that isn’t clear is how we’ll control them. One idea is to use gesture control, which would enable users to communicate with wearable computers without having to use a whole separate smartphone or other device to do so.
But so far, gesture control for most devices — like the Xbox Kinect, for instance — has depended upon cameras watching user movement. That means remaining in a fixed space and using pre-programmed gestures that are not exactly natural, but can be picked up by cameras. As a result, today’s gesture-control technologies are far from perfect. In fact, most to date are just downright bad.
Y Combinator-backed startup Thalmic Labs believes it has a better way of determining user intent when using gesture control. To do so, it’s developed a new device, called MYO, which is an armband worn around the forearm. Using Bluetooth, the armband can wirelessly connect to other devices, such as PCs and mobile phones, to enable user control based on their movements without directly touching the electronics.
Thalmic Labs was founded by University of Waterloo Mechatronics Engineering graduates Aaron Grant, Matthew Bailey, and Stephen Lake. After leaving school, the three began collaborating on building the technology behind the Myo armband. Altogether, the company that they’ve built now has 11 employees.
“Before we graduated, we were interested in the area of wearable computing,” Lake told me. According to him, the team realized that a ton of research had been done on heads-up display technology, like the kind used in Google Glass, but far less energy had gone into the technology used to control wearable computing devices. And so the founders set out to build it.
The first product they’ve developed is MYO, which uses an array of sensors and machine-learning technology to read the muscles in your forearm and determine what gestures users are making with their hands. Once it’s done that, users can manipulate what’s happening on screen across different devices.
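Thalmic has not published how MYO's recognition works, so here is only a toy sketch of the general idea: extract simple features from multi-channel muscle (EMG-style) readings and match them to the nearest known gesture. The channel count, the mean-absolute-value feature, and the nearest-centroid classifier are all my assumptions for illustration.

```python
def mav_features(window):
    """Mean absolute value per channel over one window of samples.

    window: list of samples; each sample is a list of per-channel readings.
    """
    channels = len(window[0])
    return [sum(abs(s[ch]) for s in window) / len(window)
            for ch in range(channels)]

def train_centroids(labelled_windows):
    """Average the feature vectors of each gesture's training windows."""
    sums, counts = {}, {}
    for label, window in labelled_windows:
        feats = mav_features(window)
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(window, centroids):
    """Return the gesture whose centroid is nearest (squared Euclidean)."""
    feats = mav_features(window)
    return min(centroids,
               key=lambda g: sum((a - b) ** 2
                                 for a, b in zip(feats, centroids[g])))
```

Real EMG pipelines are far more involved (filtering, windowing, richer features, stronger models), but the train-then-match shape is the same.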
Sample applications of the technology include manipulating and editing slide presentations remotely. Users could also control wireless devices with the MYO armband, such as the Sphero gaming ball. In the future, the Thalmic team hopes to enable control of devices like Google Glass without actually touching the display.
For users, the armband will be available for pre-order for $149 at www.getmyo.com. But it’s not just end users that the team is trying to get on board — it’s also hoping to court developers, as well.
To do so, Thalmic Labs is introducing an API that will allow third-party developers to build applications that can take advantage of its gesture-control technology. The idea is to create a platform that will enable others to build their own applications based on MYO gesture control.
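Since the API had not shipped at the time of writing, here is only a guess at the shape such a gesture-control platform might take: applications register callbacks for named gestures, and the armband driver fires events as gestures are recognized. The class and method names (`MyoListener`, `on_gesture`, `emit`) and the gesture names are invented, not Thalmic's actual SDK.

```python
class MyoListener:
    """Dispatches recognized gestures to registered application callbacks."""

    def __init__(self):
        self._handlers = {}

    def on_gesture(self, gesture, callback):
        """Register a callback for a named gesture (e.g. 'fist', 'wave_right')."""
        self._handlers.setdefault(gesture, []).append(callback)

    def emit(self, gesture):
        """Called by the armband driver when a gesture is recognized."""
        for callback in self._handlers.get(gesture, []):
            callback(gesture)

# Application code: advance slides on a 'wave_right' gesture.
listener = MyoListener()
listener.on_gesture("wave_right", lambda g: print("next slide"))
listener.emit("wave_right")  # the driver would call this on recognition
```

An event-driven design like this keeps applications decoupled from the recognition layer, which is what makes a third-party ecosystem plausible.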
“We’re really interested in what third-party developers can do. Everyone we’ve talked to has a different idea for it,” Lake told me. The company is hoping to harness some of that creative energy to build things that it would have never thought of.
While it’s unclear how popular the MYO armband will actually be, Thalmic Labs hopes that other developers will help to create applications that make it more valuable. The company also appears to have some interesting IP that could be pretty valuable. It has already filed for a couple of patents, and has more filings on the way.
Thalmic Labs is currently part of the Y Combinator Winter 2013 class of startups, and has raised $1.1 million in seed funding. In addition to Y Combinator, that funding has come from investors such as ATI Technologies founder Lee Lau, HP Canada CEO Paul Tsaparis, Rypple co-founder Daniel Debow, and Dayforce CEO David Ossip.