Our Scientific paper

Abstract:

In this paper, we research what the future of 3D cameras will be. This topic is subdivided into three categories: (1) mapping or recording, (2) standalone interaction and (3) enhanced interaction.
Our research is mainly based on experiments performed by others and on our personal experience [1]. We also used research material on touch screens and compared the 3D camera to similar existing techniques. To confirm these deductions, we interviewed Jan Derboven, a researcher in touch interfaces and gestural interaction at the K.U. Leuven.
The results show that mapping cameras have a great future once their resolution and accuracy increase. Standalone 3D cameras will only be successful as long as the interactions are kept very basic. To overcome this limitation, enhanced interaction provides the solution.

Full paper:

Download here


3D cameras and phantom limbs

This is quite a nice application! Phantom limbs are limbs that were amputated (or lost) but are still felt by the person. This often results in pain that has no physical cause, because it originates in the brain and a mixing-up of the nerves. To soften the pain, there is a technique using a mirror box: the missing limb is ‘faked’ by the reflection of the other limb, which tricks the brain into thinking it is actually there.

So much for the medical part! Benjamin Blundell has been working on a digital version of this mirror box. It uses the Kinect camera in combination with virtual reality glasses. The Kinect detects the user and passes that data to the software, where the missing limb is filled in by the reflection of the other one and shown on the glasses. Check the video included!
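
To make the mirroring step concrete, here is a minimal sketch (our own illustration, not Benjamin Blundell's code), assuming a Kinect-style SDK already gives us named skeleton joints as (x, y, z) positions in body-centered meters; the joint names and example values are assumptions of ours.

```python
# A minimal sketch of the mirror-box idea: replace the joints of a missing
# arm with the mirrored joints of the healthy arm. The joint names and the
# example frame below are made-up assumptions, not a real SDK's output.

def mirror_missing_limb(joints, missing_side="left"):
    """Fill in a missing arm by mirroring the healthy arm across the midline."""
    healthy = "right" if missing_side == "left" else "left"
    mirrored = dict(joints)
    for part in ("shoulder", "elbow", "wrist", "hand"):
        src = joints.get(f"{healthy}_{part}")
        if src is None:
            continue
        x, y, z = src
        # Mirror across the body's vertical midline (x = 0 in body-centered
        # coordinates); height (y) and depth (z) stay the same.
        mirrored[f"{missing_side}_{part}"] = (-x, y, z)
    return mirrored


# Example frame in body-centered meters: only the right arm is tracked.
frame = {
    "right_shoulder": (0.20, 1.40, 2.10),
    "right_elbow":    (0.25, 1.15, 2.05),
    "right_hand":     (0.30, 0.95, 1.95),
}
print(mirror_missing_limb(frame, missing_side="left"))
```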

3D camera experiments

3D cameras are a cheap and fun way to experiment with a new technology. That is why so many hobbyists incorporate such a camera in their projects. Those projects are probably the best way to discover new uses for these cameras.

One of those projects is called the Board of Awesomeness, where an all-terrain skateboard is enhanced with electric motors and control circuitry. To control the board, both voice control and a Kinect camera are used.

The combination of voice control and the camera is a very strong one. A major problem when developing software for a 3D camera is that the user is always interfacing with the system. Even when the user is just standing in front of the camera, he is still interfacing, and if you have nervous hands you might trigger actions you never intended! Because you are interfacing with your body, you cannot simply turn the system on or off with your body (like a button you have to press), because you could be doing other actions in the meantime.

That is where voice control comes in. By using your voice, you can prevent the system from tracking your movements when you don’t want it to.
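
As a small illustration of that idea, here is a sketch of a voice-gated gesture loop. The command names, gesture names and functions are hypothetical stand-ins, not the actual Board of Awesomeness software.

```python
# A minimal sketch of using voice as an on/off switch for gesture tracking.
# Everything here (commands, gestures, actions) is a made-up placeholder for
# a real speech recogniser and a real 3D-camera gesture pipeline.

tracking_enabled = False

def accelerate():
    print("accelerating")

def brake():
    print("braking")

def handle_voice_command(command):
    """Explicit voice commands switch the body interface on and off."""
    global tracking_enabled
    if command == "start tracking":
        tracking_enabled = True
    elif command == "stop tracking":
        tracking_enabled = False

def handle_gesture(gesture):
    """Only act on gestures while tracking is switched on."""
    if not tracking_enabled:
        return  # nervous hands cannot trigger anything here
    if gesture == "lean forward":
        accelerate()
    elif gesture == "lean back":
        brake()

handle_gesture("lean forward")          # ignored: tracking is off
handle_voice_command("start tracking")
handle_gesture("lean forward")          # accelerating
handle_voice_command("stop tracking")
handle_gesture("lean forward")          # ignored again
```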

We are not entirely sure that this specific application will have a big future (maybe as a replacement for the Segway) but the same technique could be used in hospitals, where a small workstation follows the nurse wherever she goes.

3D gaming

As we mentioned before, 3D technology can bring a big evolution to our lives, building on existing technologies such as online shopping. Let’s look at another amazing application that uses 3D technology: a 3D war game!

Now keep your eyes open and enjoy!

[Embedded video]

Just like that? You may say WTF! Well, just kidding 🙂 Most of the comments say this video was faked, but whether it is faked or not, I cannot help thinking how sad it is! You do not even have anything in your hands! The worst thing about 3D applications is the lack of control and of anything to feel. If 3D games of the future turned out to be like this, I guess people would go back to their desks and play with a mouse and keyboard.

But recently, the following news was covered by the UK’s TV Channel 5 on 24th October 2011. It is a 3D game based on something else: you hold a machine gun in your hands, you stand on an omni-directional treadmill that you can feel under your feet, and the whole game feels like you are really at war!

[Embedded video]

While we are still sitting in front of the computer playing war games like Call of Duty, Battlefield and so on, maybe now it is time for us to see the most realistic game one person can ever experience! The game is played in a big, enclosed arena. They even invited a soldier to play it, and it was so real that the soldier himself got nervous! It is so real because of the wireless gun you hold: when you shoot other players, they are also physically hit by a ball, which hurts too. I believe this application can come to us in the future and we will be able to experience it.

But despite our amazement, we have to be honest that it is not 3D technology on its own; it combines a lot of other technologies too, such as pixel mapping, wireless communication and so on. The project is also very costly, as it uses 10 Kinect infrared cameras and 5 LCD projectors to get a 360° view, and up to 80,000 color LEDs to create the “real” environment. But nevertheless, we got something new!

3D content

You have probably heard of 3D televisions and might have watched a 3D movie already. We haven’t said anything about this subject so far, because it relies on other technologies.

When you are watching a screen, there is absolutely no way you can walk around the scene. The screen greatly limits your ability to inspect the normally invisible aspects. The biggest difference compared to the 3D we have been talking about is that there is no model made by a computer: the 3D images are composed in the same way our eyes would look at reality.

While those technologies are not the same, they do influence each other. Currently there is not that much 3D content available, but this could change. TV broadcasts of soccer matches could benefit from a full 3D scan, resulting in a complete model of the field and the players. That would make it possible for the director to show a video feed as if you were a soccer player! And if you want to take full control, you can do so too.

But as we are not there yet at all, we will need regular 3D content first. One way to get it is to give the everyday person the means to record 3D video. That’s exactly what the smartphone manufacturers are doing now.

Whether that will help 3D screens sell in greater numbers is yet another question.

Real life examples

We’re at the final stages of this research, which calls for a couple of real-life applications that can readily be used or are already in use.

So far we have treated 3D interfaces as devices for interacting with computers. But they are also measuring tools! Land surveyors nowadays measure one single point at a time, which makes it a time-consuming job. The next big thing is to move to the full 3D experience: the measuring device is placed and, seconds later, a digital model of the environment is available. Any other measurements can be done on a computer system later on.
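
To give an idea of what “measuring later on the computer” looks like, here is a minimal sketch, assuming the scanner delivers points as (x, y, z) coordinates in meters; the two corner points are made-up example data.

```python
# A minimal sketch of doing a measurement afterwards on a scanned point cloud.
import math

def distance(p, q):
    """Straight-line distance between two scanned points, in meters."""
    return math.dist(p, q)

corner_a = (0.00, 0.00, 5.20)   # e.g. one corner of a building facade
corner_b = (12.35, 0.10, 5.45)  # the opposite corner

print(f"facade width ≈ {distance(corner_a, corner_b):.2f} m")
```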

One specific example of land surveying is the conservation of architecture. “What?” I hear you say. Yes, it is! Architecture, monuments, history and so on. Over time we have lost a great deal of those constructions to earthquakes, bad maintenance, acid rain and many other causes. We could technically preserve all of them, but the lack of funds is a killer. Most of those constructions don’t exist on paper: they were built and no documentation about them is available. Ben Kacyra (see video) is promoting the digitization of all of these constructions by means of a laser-based 3D scanner.

Even if you don’t have the means to go to Angkor, you can check it out at http://archive.cyark.org/angkor-intro

The future of 3D interaction: conventions

By now you should have noticed that we are investigating what the possibilities are for 3D computer interaction in the future. To do that, we have been comparing this technology to existing ones.

Human-Computer Interaction (HCI) seems to be neglected more and more often, while the need for it keeps growing. Illustrating this is quite easy:

  • “Let’s implement a bunch of smart phone gestures, that no user will ever use”
  • “Let’s implement everything without thorough testing” (let the users test)
  • “Apple’s latest OS: let’s invert the scrolling direction to make it feel like a tablet”
  • “Let’s patent gestures so no other manufacturers can use them”

In all of those examples, there is one common aspect: the lack of conventions. It seems that our human brains can still handle all of those differences, but how long will that last? It feels natural to interface with an iPhone, but try to use an Android phone afterwards and you will notice that it does not work the same way!

We’re on the verge of the 3D era (whether it’s 3D vision or 3D interfacing) and those same problems can arise there too. Google seems to have noticed the need for some kind of unity (at least within Android apps) and has released app conventions. Although they’re not binding, they do give some guidelines for application development, which only makes it easier for the user to switch between apps.

Our request to future developers: GET CONVENTIONS!

Digital data finds its way into the physical world

In our last post, we talked about augmented reality. But that’s just the first step into ‘digitising’. Just imagine that every piece of the environment can turn into a computer system that is, one way or another, interacting with you. Here’s another fragment from Minority Report (yes, it really is a cool film 🙂)

There’s quite a lot to think about in that fragment: screens of that size? Cost? Privacy? But it might be available in the future. We have talked about holograms before; cheap, big screens might be another (currently more realistic) solution. At this moment, our world is not filled with that many screens yet, so other experiments have to be done, one of which is SixthSense:

It actually removes the need for any screen to display the data, but who would carry such a system with them all the time?

But what does this have to do with 3D cameras? Doing a fancy iris scan and looking up a person in a database is probably never going to happen (at least not for commercial purposes). But a 3D camera can give a system extra information about a person. Last year, Microsoft patented a technique to estimate the age of a person by simply looking at bodily proportions. 3D techniques make it possible to model an entire body for further processing, which can then be used to display advertisements about sports facilities (subtle hints!).
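
As a toy illustration of the general idea (explicitly not Microsoft's patented method), here is a sketch that derives one body proportion from tracked 3D joints and maps it to a very rough age group; the joints, the 0.18 threshold and the grouping are all invented for illustration.

```python
# A toy illustration of estimating an age group from body proportions.
# The joint data, the ratio and the threshold are made-up assumptions.
import math

def head_to_height_ratio(joints):
    """Ratio of head size to full body height, from 3D joint positions."""
    head_size = math.dist(joints["head"], joints["neck"])
    body_height = math.dist(joints["head"], joints["foot"])
    return head_size / body_height

def rough_age_group(joints):
    # Children have proportionally larger heads than adults; 0.18 is an
    # arbitrary illustrative threshold, not a validated value.
    return "child" if head_to_height_ratio(joints) > 0.18 else "adult"

example = {"head": (0.0, 1.70, 2.0), "neck": (0.0, 1.45, 2.0), "foot": (0.0, 0.0, 2.0)}
print(rough_age_group(example))
```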

Augmented reality

So far we have been discussing techniques that can be implemented with touch screens, but augmented reality is something completely new. We already told you something about it in our Holodeck post, where we mentioned the interactive kitchen table.

Wikipedia describes it as follows:

Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.

AR is nowadays implemented on smartphones. You point your camera at an object, an environment or something else, and extra elements that relate to the context are added on the screen. One example is taking a picture of the City Hall in Leuven and getting extra information about it. The phone does this based on location (GPS), a digital compass and possibly image recognition.
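
To show how far you can get with just GPS and the compass (leaving image recognition aside), here is a minimal sketch that checks whether a known landmark lies within the camera's horizontal field of view; the coordinates and the 60-degree field of view are approximate assumptions of ours.

```python
# A minimal sketch of the location-based part of smartphone AR: decide from
# GPS position and compass heading whether a known landmark (here, the Leuven
# City Hall, with approximate coordinates) is roughly in front of the camera.
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Compass bearing in degrees from point 1 to point 2."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def landmark_in_view(phone_lat, phone_lon, heading_deg, lm_lat, lm_lon, fov_deg=60):
    """True if the landmark lies within the camera's horizontal field of view."""
    diff = abs(bearing_to(phone_lat, phone_lon, lm_lat, lm_lon) - heading_deg)
    diff = min(diff, 360 - diff)
    return diff < fov_deg / 2

# Phone on the Grote Markt, facing roughly east-southeast; City Hall nearby.
if landmark_in_view(50.8790, 4.7005, 110, 50.8788, 4.7010):
    print("Show the City Hall information overlay")
```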

Taking a picture is not something you would have done to get extra information back when you did not have a smartphone. On top of that, this system does not seem to be as popular as initially expected. Could this be because it demands more effort from us than we are willing to give (cf. Can New Technologies Replace Old Ones)?

Augmented reality seems like something great, but care has to be taken about where to implement it. With (3D) cameras, the situation becomes quite different: the camera is stationary, and the visual feedback (a projector) is mostly standing still as well. A table, for example, becomes the ideal area for such interaction. A lot of experiments have been done already and they produce promising outcomes. The following videos are all examples of AR with the use of a 3D camera.

[Embedded videos]
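
One recurring technical step in these tabletop setups is mapping what the camera sees onto what the projector draws. Assuming the table is flat, a common way to do this is a homography fitted from four calibrated point pairs; the sketch below is our own minimal version with made-up calibration values, not code from any of the shown projects.

```python
# A minimal sketch of mapping camera pixels to projector pixels for a flat
# table, using a homography estimated from four manually calibrated point
# pairs. The calibration coordinates below are made-up example values.
import numpy as np

def fit_homography(cam_pts, proj_pts):
    """Solve for the 3x3 homography H with proj ~ H @ cam (homogeneous)."""
    rows = []
    for (x, y), (u, v) in zip(cam_pts, proj_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)  # null-space vector = homography entries

def cam_to_proj(h, x, y):
    """Map one camera pixel to the corresponding projector pixel."""
    p = h @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Four table corners as seen by the camera and as addressed by the projector.
camera_corners    = [(102, 85), (538, 92), (547, 410), (95, 402)]
projector_corners = [(0, 0), (1280, 0), (1280, 800), (0, 800)]

H = fit_homography(camera_corners, projector_corners)
print(cam_to_proj(H, 320, 250))   # where to project feedback for this touch
```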

Tips for 3D gestural design

As we already discussed in other posts, 3D gestural design is very difficult. When we want the user to perform some action in front of our system, we do not want them to feel stupid or crazy.

Here are some tips for designing a 3D gestural system.

  • The gesture has to be simple but not commonly used. People have to feel natural and not embarrassed doing it, and should rarely do it by accident.
  • Feedback should be provided immediately, so users feel they are doing something useful rather than moving around like crazy. If necessary, combined visual and audio feedback is more inviting.
  • The transition between different actions should be very clear, so the system can capture the right gesture at the right moment and give the right response.
  • Handle diversity
    Different morphologies may imply different behaviors.
    Make sure you don’t optimize for just yourself or your reference movie.

  • Camera has to see it
    Only Superman and airport security can see through objects.
    Help the user stay inside the boundaries.
    Fast movements can be blurry.
    (A few of these tips are tied together in the sketch after this list.)
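
Here is a minimal sketch that combines a few of these tips: a visibility check against the camera's boundaries, immediate feedback, and a short cooldown so a gesture is not triggered twice by accident. The frame format, thresholds and messages are our own assumptions, not a real SDK.

```python
# A minimal sketch tying some gestural-design tips together: keep the hand
# inside the camera's field of view, give immediate feedback, and use a short
# cooldown so one wave is not counted twice. All values are made-up examples.
import time

FIELD_OF_VIEW = {"x": (-1.0, 1.0), "y": (0.0, 2.0), "z": (0.8, 3.5)}  # meters
COOLDOWN_S = 1.0
_last_trigger = 0.0

def inside_view(hand):
    """Help the user stay inside the boundaries: is the hand visible at all?"""
    return all(lo <= hand[axis] <= hi for axis, (lo, hi) in FIELD_OF_VIEW.items())

def handle_hand(hand):
    global _last_trigger
    if not inside_view(hand):
        print("feedback: move your hand back into view")  # immediate feedback
        return
    if hand["x"] > 0.6 and time.monotonic() - _last_trigger > COOLDOWN_S:
        _last_trigger = time.monotonic()
        print("feedback: swipe recognised")  # one clear response per gesture

handle_hand({"x": 0.7, "y": 1.2, "z": 1.5})   # recognised
handle_hand({"x": 0.7, "y": 1.2, "z": 1.5})   # ignored: still in cooldown
handle_hand({"x": 0.7, "y": 2.5, "z": 1.5})   # out of view -> corrective feedback
```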

In order to make the system more socially acceptable, we should try to keep the design difficulty to a minimum.

Reference: http://www.softkinetic.com/Support/Forum/tabid/110/forumid/5/postid/1/scope/posts/Default.aspx