By Shamus Posted Tuesday Jul 3, 2007

Filed under: Programming 18 comments

I’m working on a research project right now that involves deforming polygonal faces and bodies in real time. It’s strictly low-polygon stuff. I’ve had an itch to work on this sort of thing for years. The idea is to have a few basic controls that allow the user to radically change the appearance of the starting mesh. If you’ve played with the face builder in Oblivion then you have the basic idea, although the models I’m working with have a tiny fraction of the polygons to work with. (A wild guess tells me the Oblivion meshes are 5 to 10 times denser.)

What has shocked me is how easy it turned out to be. I thought I was going to need a lot of logic to manipulate points on a face. This is complex business, and I assumed it would need complex code. It doesn’t. The only real trick is identifying the right points on the mesh. Once you know which points make up the tip of the nose, you can pull them around in different ways to make different noses. It’s so simple it’s stupid. In fact, the surest way to make it work badly is to make the logic too complex. I keep trying to come up with more “intelligent” code that will do more with a face based on calculations, but the results are rarely as realistic as simply moving groups of points along an axis. For example, if you want higher or lower cheekbones, then identifying the “cheekbone” points and yanking them up or down is much more effective than trying to analyze the shape of the cheekbones and re-create that shape using points in the desired location.
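Here’s the whole trick in miniature. This is a hedged sketch, not my actual project code: the mesh, the coordinates, and the vertex groups below are all made up for illustration.

```python
import numpy as np

# Toy mesh: one row per vertex, columns are (x, y, z).
# The index groups are hypothetical -- in practice you tag the
# "nose tip" or "cheekbone" vertices by hand when building the mesh.
verts = np.array([
    [ 0.0, 0.0, 1.0],   # nose tip
    [ 0.1, 0.0, 0.9],
    [-0.1, 0.0, 0.9],
    [ 0.4, 0.3, 0.5],   # right cheekbone
    [-0.4, 0.3, 0.5],   # left cheekbone
])

CHEEKBONES = [3, 4]

def deform(mesh, indices, axis, amount):
    """Slide a tagged group of vertices along one axis (0=x, 1=y, 2=z)."""
    out = mesh.copy()
    out[indices, axis] += amount
    return out

# "Lower cheekbones" is nothing fancier than dragging the tagged
# points down the y axis.
lowered = deform(verts, CHEEKBONES, axis=1, amount=-0.1)
```

A slider in the UI just maps to that `amount` parameter. That really is all there is to it.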

In the midst of this project, my wife sent me this video:

Not realtime, but very, very interesting. I remember an episode of Star Trek: TNG where the ship’s computer reconstructed someone’s face based on a small portion available in a photograph. It seemed far-fetched at the time, but here we have the ability to turn flat photos into moveable 3d shapes, which is half the battle right there. Amazing.



18 thoughts on “Faces”

  1. Evilllama says:

    That is incredibly cool.
    I want one of those!

  2. Corvus says:

    Oddly, I’m working on the same thing, admittedly from a non-coding perspective, for a project I have in the queue. The fewer polygons you’re working with, the easier it seems to be.

  3. Luke says:

    Shamus, did you ever play Second Life? I played around with it for a few days at one point (and then got bored with it) and its character design engine was working exactly the way you describe.

    You have a bunch of sliders which are assigned to certain parts of the body and/or face. Then you drag them up and down to change the shape. Incidentally, now that you mention the points, this is exactly how I remember it working – each slider would just push and pull a bunch of pre-defined points on the face.

    I haven’t played Oblivion, but I imagine that SL thing might be closer to what you are doing here.

    Btw, I’m amazed you haven’t seen this video. It’s actually pretty old, and there was a time when this thing was getting posted on every blog and message board in existence. ;)

  4. Shamus says:

    Corvus: I was struck by the same thing. Less polys = easier manipulations. Counter-intuitive to my way of thinking, but you can’t argue with success. :)

  5. Shamus says:

    Luke: Yes, I’ve seen SL. I work for the competition. :)

  6. Henebry says:

    From what they’ve shown here, it looks like the “eigenvalues” of the system are a bunch of 3-d facial models. The opening sequence suggests that they have just 9 of these, but I suspect the total may be higher. Anyway, these core facial models may themselves consist of a bunch of polygons, but when the software sets out to model Audrey Hepburn or the Mona Lisa, it does so not by direct manipulation of polygons but rather by working out the correct weighting of the core faces which combine to model Audrey. Audrey might = .25 a + .10 b + .04 c etc. To turn Audrey, or make her smile, they combine the same weighting of the turned or the smiling core faces.

    What do you folks think?
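If Henebry’s guess is right, the blending step is easy to sketch. The numbers here are invented toy values – in the real system each “core face” would be a long vector of scanned vertex positions, and there would be a couple hundred of them rather than three:

```python
import numpy as np

# Three invented "core faces", each flattened to a short vector of
# vertex coordinates (a real one would have thousands of entries).
core_faces = np.array([
    [0.0, 1.0, 0.5],
    [1.0, 0.0, 0.5],
    [0.5, 0.5, 1.0],
])

# A new face is just a weighted sum of the core faces:
# Audrey = .25 * a + .10 * b + .65 * c
weights = np.array([0.25, 0.10, 0.65])
audrey = weights @ core_faces

# To make her smile, apply the SAME weights to a second set of core
# faces captured smiling (same vertices, different positions).
```

The key property is that one set of weights carries over to every pose and expression of the core faces.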

  7. DB says:

    Possibly on the eigenvalues. We are working with biometric facial matching, but the software is proprietary (not ours); my guess would be comparison to a “Perfect Face” – perfect for matching by the computer, of course.

    As for using this technology for good or evil: we could see plenty of new Elvis sightings with a tool like this.

  8. Romanadvoratrelundar says:

    DB, more Elvis movies definitely. Sightings, maybe, once the general public gets reality-filters à la GitS:SC.

  9. Emma says:

    That’s pretty spiffy. However, I can’t believe that they passed up the opportunity to make the Mona Lisa scowl!

  10. Sauron says:

    Well, I’d argue with the use of “eigenvalues” (eigenvectors would be more appropriate) and I’d say they’re not necessarily eigenvectors, either, but just some basis, but yeah, that sounds like a distinct possibility to me, too.

  11. Thad says:

    Odd, the video stopped for me with about 1:30 to go. (Another YouTube video did the same thing.)

    Oh well. And, hey, check out the “related” video Kiwi. :) (plug plug, NZ, plug plug)

  12. Nanja Kang says:

    That was neat, I am reminded of MI:3, with all that fancy scanning for the masks… I got nothing.

  13. Dave@ says:

    Actually I can see something like this forming the basis of 3D video – a lot more processor grunt required, but if you can model from flat to 3D and have some info about the surrounds, it should be possible to simulate the whole environment, letting you move around. That should happen about the same time we get AI and fusion (ie, 20 years from now. No, wait…. now, no, now).

  14. Skeeve the Impossible says:

    I didn’t follow half of what he was saying…….but it was neat.

  15. WysiWyg says:

    Henebry: They say in the movie that they used 200 faces.

  16. Carl the Bold says:

    That was cool with the Mona Lisa, but I wish they would have gotten that smile off of her face, so people would quit asking why she’s smiling!

  17. phlux says:

    That’s really impressive. That sort of thing is really going to revolutionize 3d modeling. They’ve already done sort of what Shamus is doing in a number of video games: the developers have non-repetitive NPCs created by algorithms so that you’re not looking at the same exact bad guy or store clerk every single time. Oblivion used this for its NPCs, but not for monsters; other games have done both.

    This sort of thing will also be really useful for law-enforcement types.

    I also can’t help but think how cool it will be when I can snap a picture of myself with a webcam and have my face modelled directly on my ingame character. I know this has been done before, but never like this demo.

  18. Tess says:

    Very cool, but did anyone else find the smile a bit creepy?
