I’m working on a research project right now that involves deforming polygonal faces and bodies in real time. It’s strictly low-polygon stuff. I’ve had an itch to work on this sort of thing for years. The idea is to have a few basic controls that allow the user to radically change the appearance of the starting mesh. If you’ve played with the face builder in Oblivion then you have the basic idea, although the models I’m working with have a tiny fraction of the polygons. (A wild guess tells me the Oblivion meshes are 5 to 10 times denser.)
What has shocked me is how easy it turned out to be. I thought I was going to need a lot of logic to manipulate points on a face. This is complex business, and I assumed it would need complex code. It doesn’t. The only real trick is identifying the right points on the mesh. Once you know which points make up the tip of the nose, you can pull them around in different ways to make different noses. It’s so simple it’s stupid. In fact, the surest way to make it work badly is to make the logic too complex. I keep trying to come up with more “intelligent” code that will do more with a face based on calculations, but the results are rarely as realistic as simply moving groups of points along an axis. For example, if you want higher or lower cheekbones, then identifying the “cheekbone” points and pulling them up or down is much more effective than trying to analyze the shape of the cheekbones and re-create that shape using points in the desired location.
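The whole technique boils down to something like the sketch below: tag groups of vertex indices with feature names, then translate a group along one axis. The mesh data, vertex indices, and feature names here are invented for illustration, not taken from my actual project.

```python
# A toy low-poly "face": each vertex is an [x, y, z] position.
mesh = [
    [0.0, 1.0, 0.5],   # vertex 0: forehead
    [0.3, 0.6, 0.6],   # vertex 1: right cheekbone
    [-0.3, 0.6, 0.6],  # vertex 2: left cheekbone
    [0.0, 0.4, 0.7],   # vertex 3: nose tip
]

# The "only real trick": knowing which points make up which feature.
features = {
    "nose_tip": [3],
    "cheekbones": [1, 2],
}

def deform(mesh, feature, axis, amount):
    """Slide every vertex in a feature group along one axis (0=x, 1=y, 2=z)."""
    for i in features[feature]:
        mesh[i][axis] += amount

# Lengthen the nose by pulling its points forward along z,
# and raise the cheekbones by nudging their points up along y.
deform(mesh, "nose_tip", axis=2, amount=0.3)
deform(mesh, "cheekbones", axis=1, amount=0.1)
```

In a real tool each slider would just call something like `deform` with a signed amount, which is about as dumb as code gets, and that’s the point.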
In the midst of this project, my wife has sent me this video:
Not realtime, but very, very interesting. I remember an episode of Star Trek: TNG where the ship’s computer reconstructed someone’s face based on a small portion available in a photograph. It seemed far-fetched at the time, but here we have the ability to turn flat photos into moveable 3d shapes, which is half the battle right there. Amazing.
That is incredibly cool.
I want one of those!
Oddly, I’m working on the same thing, admittedly from a non-coding perspective, for some project I have in queue. The fewer polygons you’re working with, the easier it seems to be.
Shamus, did you ever play Second Life? I played around with it for a few days at one point (and then got bored with it) and its character design engine was working exactly the way you describe it.
You have a bunch of sliders which are assigned to certain parts of the body and/or face. Then you drag them up and down to change the shape. Incidentally, now that you mentioned the points, this is exactly how I remember it working – each slider would just push and pull a bunch of pre-defined points on the face.
I haven’t played Oblivion, but I imagine that SL thing might be closer to what you are doing here.
Btw, I’m amazed you haven’t seen this video. It’s actually pretty old and there was a time that this thing was getting posted on every blog, and message board in existence. ;)
Corvus: I was struck by the same thing. Less polys = easier manipulations. Counter-intuitive to my way of thinking, but you can’t argue with success. :)
Luke: Yes, I’ve seen SL. I work for the competition. :)
From what they’ve shown here, it looks like the “eigenvalues” of the system are a bunch of 3-d facial models. The opening sequence suggests that they have just 9 of these, but I suspect the total may be higher. Anyway, these core facial models may themselves consist of a bunch of polygons, but when the software sets out to model Audrey Hepburn or the Mona Lisa, it does so not by direct manipulation of polygons but rather by working out the correct weighting of the core faces which combine to model Audrey. Audrey might = .25 a + .10 b + .04 c etc. To turn Audrey, or make her smile, they combine the same weighting of the turned or the smiling core faces.
What do you folks think?
Possibly on the eigenvalues. We are working with biometric facial matching, but the software is proprietary (not ours), so my guess would be comparison to a “Perfect Face” – perfect for matching by the computer, of course.
Will this technology be used for good or evil? We could see plenty of new Elvis sightings with a tool like this.
DB, more Elvis movies definitely. Sightings maybe once the general public gets reality-filters ala GitS:SC.
That’s pretty spiffy. However, I can’t believe that they passed up the opportunity to make the Mona Lisa scowl!
Well, I’d argue with the use of “eigenvalues” (eigenvectors would be more appropriate) and I’d say they’re not necessarily eigenvectors, either, but just some basis, but yeah, that sounds like a distinct possibility to me, too.
Odd, the video stopped for me with about 1.30 to go. (Another youtube video did the same thing.)
Oh well. And, hey, check out the “related” video Kiwi. :) (plug plug, NZ, plug plug)
That was neat, I am reminded of MI:3, with all that fancy scanning for the masks… I got nothing.
Actually I can see something like this forming the basis of 3D video – a lot more processor grunt required, but if you can model from flat to 3D and have some info about the surrounds, it should be possible to simulate the whole environment, letting you move around. That should happen about the same time we get AI and fusion (i.e., 20 years from now. No, wait… now, no, now).
I didn’t follow half of what he was saying…….but it was neat.
Henebry: They say in the movie that they used 200 faces.
That was cool with the Mona Lisa, but I wish they would have gotten that smile off of her face, so people would quit asking why she’s smiling!
That’s really impressive. That sort of thing is really going to revolutionize 3d modeling. They’ve already done sort of what Shamus is doing in a number of video games. The developers have non-repetitive NPCs created by algorithms so that you’re not looking at the same exact bad guy or store clerk every single time. Oblivion used this for its NPC creations, but not for monsters, though other games have done both.
This sort of thing will also be really useful for law-enforcement types.
I also can’t help but think how cool it will be when I can snap a picture of myself with a webcam and have my face modelled directly on my ingame character. I know this has been done before, but never like this demo.
Very cool, but did anyone else find the smile a bit creepy?