Tom the friendly Troll

Holy crap, what great work!!

It all makes sense to me but I do have one small question.

“Of course I have fine-tuned the expressions a bit by moving and exaggerating the forms in ZBrush slightly.”

How were you able to fine-tune the expressions in ZBrush? What I mean is, how were you able to edit the deformed mesh? Obviously you cannot project the higher detail onto the deformed mesh, because it would try to project your base head as well.

To clarify what I am asking (because it sounds weird even to me): if you are not using ZBrush to create the facial expressions, how did you use it to fine-tune them? Did you import the geometry at the lowest subdivision level and replace your base head shape with the deformed one, so that all the higher subdivision levels are kept?

Thanks for all the information :smiley:

That’s exactly what I did. I then refined the model slightly, and if changes occurred at the lowest level in ZBrush after that, I exported that as the new blendshape.
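For anyone wanting to wire the refined export back up on the Maya side, here is a minimal sketch using maya.cmds. The node names ("baseHead", "headShapes") and the OBJ path are placeholders for illustration, not my actual scene; the ZBrush step (importing at the lowest subdivision level so the higher levels survive) happens before this, as described above.

```python
# Minimal sketch (Python, maya.cmds): hook a ZBrush-refined OBJ back
# into a Maya blendShape node. All names are illustrative placeholders.
import maya.cmds as cmds

# Import the refined level-1 mesh exported from ZBrush.
new_nodes = cmds.file('smile_refined.obj', i=True, returnNewNodes=True)
target = cmds.ls(new_nodes, type='transform')[0]

if not cmds.ls('headShapes', type='blendShape'):
    # No deformer yet: create one on the base head.
    cmds.blendShape(target, 'baseHead', name='headShapes')
else:
    # Otherwise add the refined mesh as an extra target.
    count = cmds.blendShape('headShapes', q=True, weightCount=True)
    cmds.blendShape('headShapes', edit=True,
                    target=('baseHead', count, target, 1.0))
```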

Thanks for the response, very good to know. I have been wanting to attempt some facial character animation, but I was not sure how it would work while keeping certain details.

You rock Jelmer! :+1:

I wholeheartedly agree, and would like to add some more reasons:

Realistic creature deformation is all about recreating the movement of skin, which we perceive through high-frequency detail like pores, moles, freckles, etc. If you just smooth-skin a character, it’ll never be realistic: instead of skin and muscle and bone, it’ll look like rubber stretching around the joints.
Skin slides all around over the underlying structures, and prefers to wrinkle up instead of compressing.

The trouble with ZBrush is that, as Jelmer mentioned, you don’t have proper access to individual vertices to fine-tune the falloff of the deformations. For example, you’d want to move the skin as far back as the back of the jaw bone for something as simple as a puckered-lips blendshape (don’t just take my word for it; look in a mirror to see it), and that’s very complicated to do in ZBrush. There aren’t enough tools to relax and even out the amount of deformation either, or to test how multiple shapes work when mixed together.
Also, there’s no proper way to test your deformations in motion, in real time, like when you’re pulling the blendshape sliders in Maya.
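The mixing problem is easy to see in the math: blendshapes are just weighted sums of per-vertex deltas, so two shapes that each look fine alone can over-deform the same region when dialed in together. A tiny numpy sketch of that additive combination (array layout and names are mine, purely for illustration):

```python
# Sketch of additive blendshape mixing with numpy.
# base and each target are (N, 3) vertex position arrays.
import numpy as np

def mix_blendshapes(base, targets, weights):
    """Return base plus the weighted sum of per-target deltas."""
    result = base.copy()
    for target, w in zip(targets, weights):
        result += w * (target - base)  # deltas simply add up
    return result

# Two shapes that both pull the same lip vertices forward will move
# them twice as far when combined -- which is why testing mixed
# shapes matters so much.
```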

Which is probably why all the blendshape work I’ve seen that was created primarily in ZBrush looks like a rubber face…

Very nice work on the expressions by the way!

Or it could be the artists doing the blendshape work. Never blame (or credit) the tools for the quality of the work, that’s what I say. :sunglasses:

That said, I can understand why you use Maya for the majority of the animation, for the added control.

You can’t expect someone to paint blind and produce work as good as what he could do when he can see.

ZBrush is just not fit for this kind of work at the quality levels we expect. It’s not the end of the world.

I didn’t say it was the end of the world. I just want to explore what CAN be done in ZBrush, and like I said, I understand now why you would use Maya for further control.

So wouldn’t the Move tool in ZBrush with a very small falloff, on the base mesh (so it moves single vertices), work? Why is that? Not enough control, or is it flaky?
Not trying to dispute here, just trying to learn.

Here are some things I use in my blendshape workflow:

  • wireframe views, comparing two meshes in wire views

  • extreme amounts of zoom, looking at the inside of the mouth, eyelids etc.

  • checking the effect of multiple shapes at once

  • checking the transition of vertices between end positions

  • selecting multiple vertices at once AND moving/rotating them with soft selection, along a specific axis in space (opening the mouth, closing the eyelid)

  • selecting edge rings, loops, converting selections from faces to vertices, edges etc.

  • sliding vertices along edges/faces

  • relaxing vertices while keeping the underlying shape (see the sketch after this list)

  • using Daniel Pook-Kolb’s PaintDeform tool (http://dpk.stargrav.com/)

This is just off the top of my head… I also use wrap deformers a lot, masking vertices out of the effect of some shapes, etc.
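As a rough illustration of the “relax while keeping the underlying shape” idea: instead of smoothing vertex positions (which would melt the base shape), you can smooth the blendshape deltas over each vertex’s neighborhood, which evens out the amount of deformation while leaving the base untouched. A small numpy sketch, with a made-up neighbor-list mesh representation:

```python
# Sketch: even out deformation by Laplacian-smoothing the per-vertex
# deltas (target - base) rather than the positions themselves, so the
# underlying base shape is preserved. Data layout is illustrative.
import numpy as np

def relax_deltas(base, target, neighbors, iterations=10, strength=0.5):
    """base, target: (N, 3) arrays; neighbors: list of index lists."""
    deltas = target - base
    for _ in range(iterations):
        smoothed = np.empty_like(deltas)
        for i, nbrs in enumerate(neighbors):
            # Pull each vertex's delta toward its neighbors' average.
            avg = deltas[nbrs].mean(axis=0)
            smoothed[i] = deltas[i] + strength * (avg - deltas[i])
        deltas = smoothed
    return base + deltas
```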

Navigation and selection tools are far too clumsy in ZBrush; you don’t have wireframe views or edge/face selections at all, and you have very little with which to review your work. Most of the tools above are missing too.

So the short answer is: yes, maybe you can use that approach, but it’s still far from what we have in other apps, so why bother at all? And Max/Maya have paint tools as well, even if they’re a little clumsier…

Great job! Hope you won :smiley:

I didn’t know anyone was competing for a win :laughing:. I think they are trying to educate and give the reasons for their workflow. I am personally very interested in learning the differences, and I appreciate the knowledge shared. Thanks LY and Jelmer :+1:

I see. That makes total sense. Thanks for the explanation LY :+1:

Personally, I use XSI and not Maya. I always thought it a bit strange that you need separate meshes for your blendshapes in Maya, but I definitely see the benefit of that now. You can use the same approach in XSI though, so maybe I should give that a try.

Thanks again.

All your works in here are just awesome! Very inspiring!

http://www.zbrushcentral.com/zbc/showthread.php?p=582590&posted=1#post582590

lol, I wasn’t talking about that… I was referring to Jelmer and the CGTalk Hardcore Modeling Challenge :stuck_out_tongue:

Now I’ve read what you guys were talking about… well, I agree with LY completely, but ZBrush could be used to complement the morphs…

I totally feel like a jackass. Sorry about that, I misinterpreted your reply. I thought you were still trying to argue with LY’s post. Wow, I’m an idiot, sorry.

lol, it’s alright :stuck_out_tongue:

I too have read that “Hyper-Real Creature” book, and it very much emphasizes smart mesh flow. Thus, your results are amazing. The downside of dense meshes, I find, is that they animate slow as molasses when performing lip-sync! Maybe a proxy model would hasten computation time, though that could be silly for facial lip-sync. I’d love to hear people’s opinions. That is why I thought displacement maps would not only come along for the ride, but also place less of a burden on the processing time a program needs to update the mesh.
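One way such a proxy could be wired up (a sketch of my suggestion, not anything from the book, with hypothetical node names) is to animate the blendshape weights on a low-res duplicate and drive the dense mesh from the same attributes, so the heavy mesh can stay hidden during lip-sync:

```python
# Sketch: drive a hi-res head's blendShape weights from a low-res proxy
# so the dense mesh can stay hidden while animating lip-sync.
# "proxyShapes"/"heroShapes"/"heroHead" are hypothetical names.
import maya.cmds as cmds

count = cmds.blendShape('proxyShapes', q=True, weightCount=True)
for i in range(count):
    cmds.connectAttr('proxyShapes.weight[%d]' % i,
                     'heroShapes.weight[%d]' % i, force=True)

# Hide the hero mesh while working; show it again for final checks.
cmds.setAttr('heroHead.visibility', 0)
```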

Have you found success in rendering your blends together? I’m trying to render out my own driven displacements in a shading network, and I’m getting half-hour render times for a single frame! That’s with 6 maps hooked up. I’m thinking it’s either the resolution of my maps (4096x4096) or the long series of nodes bogging down the calculation! I also have a 4096x4096 bump map routed in conjunction with a misss shader. I’ll keep you posted as soon as I figure it out.
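For illustration, here is a minimal sketch of one way such a driven-displacement network can be wired with maya.cmds: file textures layered together, each layer’s contribution driven by the matching blendShape weight, feeding a single displacementShader. Node names and map paths are placeholders, not necessarily the setup described above:

```python
# Sketch of a driven-displacement network: blendShape weights drive the
# alpha of each layer in a layeredTexture feeding a displacementShader.
# Node names ("headShapes", "headSG", map paths) are placeholders.
import maya.cmds as cmds

maps = ['smile_disp.tif', 'frown_disp.tif']  # one map per blendshape
layered = cmds.shadingNode('layeredTexture', asTexture=True)

for i, path in enumerate(maps):
    tex = cmds.shadingNode('file', asTexture=True)
    cmds.setAttr(tex + '.fileTextureName', path, type='string')
    cmds.connectAttr(tex + '.outColor', '%s.inputs[%d].color' % (layered, i))
    # The matching blendShape weight fades each map in and out.
    cmds.connectAttr('headShapes.weight[%d]' % i,
                     '%s.inputs[%d].alpha' % (layered, i))

disp = cmds.shadingNode('displacementShader', asShader=True)
cmds.connectAttr(layered + '.outColorR', disp + '.displacement')
cmds.connectAttr(disp + '.displacement', 'headSG.displacementShader',
                 force=True)
```

If the map resolution turns out to be the culprit, swapping in lower-res copies of the maps is a cheap way to test that before blaming the node graph.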

Great study, fantastic expressions!!! :+1: