sibarrick

3D Mean Value Coordinates


Basically yes. The real advantage is that the control mesh can be as detailed as you like and the underlying geometry will move precisely along with it; at the same time you can be sure that every point is affected smoothly right through the mesh.

Also, if you try to do extreme distortions on very complex geometry with a metaball falloff, you can easily get situations where the geometry pulls apart, simply because neighbouring points can be affected very differently from each other where fields overlap. With mean value coordinates every single point within the mesh is defined literally by a coordinate, so this doesn't happen. Think of it like a lattice deformer but with no internal points. The advantage over a lattice is that you need only animate the external points, and it can be any shape you like.
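To make the "every point is literally a coordinate" idea concrete, here is a minimal sketch of the 2D analogue of the scheme: mean value coordinates of a point inside a closed polygon cage. The function names are mine, and this is the simple 2D case rather than the 3D triangle-mesh version from the paper, but the key property is the same: compute the coordinates once against the rest cage, then reproduce the point from the moved cage.

```python
import math

def mean_value_coords(x, poly):
    """Mean value coordinates of point x strictly inside a closed 2D polygon.
    poly is a list of (x, y) vertices in order. Returns weights summing to 1."""
    n = len(poly)
    d = [(vx - x[0], vy - x[1]) for vx, vy in poly]      # vectors x -> v_i
    r = [math.hypot(dx, dy) for dx, dy in d]             # distances |v_i - x|
    t = []
    for i in range(n):
        j = (i + 1) % n
        cross = d[i][0] * d[j][1] - d[i][1] * d[j][0]
        dot = d[i][0] * d[j][0] + d[i][1] * d[j][1]
        # tan(angle/2) via the identity tan(a/2) = sin(a) / (1 + cos(a))
        t.append(cross / (r[i] * r[j] + dot))
    w = [(t[i - 1] + t[i]) / r[i] for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

def deform(x, rest_poly, moved_poly):
    """Reproduce x from the deformed cage using its fixed coordinates."""
    lam = mean_value_coords(x, rest_poly)
    return (sum(l * vx for l, (vx, _) in zip(lam, moved_poly)),
            sum(l * vy for l, (_, vy) in zip(lam, moved_poly)))
```

Because the weights sum to one and have linear precision, translating the cage translates every interior point exactly, with no tweaking per point.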

Finally, there is absolutely no need to tweak anything; if the control mesh moves correctly then everything inside it does too. :D

post-509-1119619431_thumb.jpg

By the way, when I say bones, I mean "real" ones, not Houdini bones. I build my control mesh by drawing curves in the bone objects, object_merging them all together and then skinning them. Obviously then, as the bones move, the shape of the control mesh changes.


I think this thing produces such nice hassle-free results that it has a great future for Effects-type work too, for affecting particle flow (post-processing it) and so on. Bones and capture weights freak out FX artists a little ;) It's like swatting a fly with a Buick. Points-mode LatticeSOP use is very useful, but I believe these results are even more tweak-free.

So basically I'm motivating for an optimized C++ implementation here.. :rolleyes::D


I'm going to give it a go, but if Edward wants to do it I'm sure it would be better; I basically have to learn C++ and the HDK first...... ;)


I think this technique has *all kinds* of potentially useful applications (not just smooth deformations) and is well worth an HDK effort. I'm in the middle of production right now, but was hoping to look into it when this is over.

And as I mentioned before, I'm really curious to see if there is a way in which this thing can be used with unstructured point clouds as the control points (I only had a very quick look at the paper so I'm not sure) --- much juicy damage could be done if that were the case :)

Very cool stuff indeed!

Cheers!


No, it pretty much depends on having triangles. So it can't be used if you don't have connectivity between the points. However, for regular polygons, it has great uses.


Indeed it does, but the way it works is to calculate the solid angle of each triangle, so could it work on a triangle soup?
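For reference, the solid angle a triangle subtends at a point has a known closed form (van Oosterom and Strackee). A small sketch, with function names of my own choosing:

```python
import math

def solid_angle(p, tri):
    """Signed solid angle subtended at point p by triangle tri = (a, b, c),
    via the van Oosterom-Strackee formula: tan(omega/2) = triple / denom."""
    r = [[v[i] - p[i] for i in range(3)] for v in tri]        # vectors p -> vertex
    l = [math.sqrt(sum(c * c for c in v)) for v in r]         # their lengths
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    # scalar triple product r0 . (r1 x r2)
    triple = (r[0][0] * (r[1][1] * r[2][2] - r[1][2] * r[2][1])
            - r[0][1] * (r[1][0] * r[2][2] - r[1][2] * r[2][0])
            + r[0][2] * (r[1][0] * r[2][1] - r[1][1] * r[2][0]))
    denom = (l[0] * l[1] * l[2]
             + dot(r[0], r[1]) * l[2]
             + dot(r[1], r[2]) * l[0]
             + dot(r[2], r[0]) * l[1])
    return 2.0 * math.atan2(triple, denom)
```

Summing the signed solid angles of every triangle of a closed, consistently oriented mesh around an interior point gives 4*pi, which is one way to see why the construction wants a closed surface.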

Guest
No, it pretty much depends on have triangles. So it can't be used if you don't have connectivity between the points.
Indeed it does but the way it works is to calculate the solid angle of each triangle

So...... if each point were interpreted as a surface element with an implicit area (as in that "other paper"), then.... still no go?

Sorry. I should just read the thing and stop asking silly questions <_<


I think the problem would actually boil down to what is your frame of reference. Some points have to be the control points and others the data, how would you determine which were the data and which the control?

I think the problem would actually boil down to what is your frame of reference. Some points have to be the control points and others the data, how would you determine which were the data and which the control?


As you can probably guess, I'm thinking about whether there is a better interpolation method for point clouds lurking in this method (better than the built-in proximity-based pcfilter() method, that is). In that context, the control points would be the points in the point cloud, and the "data" would be the surface itself... but again, I'm just thinking out loud here, so ignore if it's a whacked idea <_<
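For comparison, a proximity-based point-cloud filter of the kind pcfilter() performs amounts to a Shepard-style distance-weighted average. This is a hypothetical plain-Python stand-in, not Houdini's actual kernel; the function name and the linear falloff are my assumptions:

```python
import math

def pc_filter(x, points, values, radius):
    """Distance-weighted average of per-point values within radius of x.
    A Shepard-style sketch of proximity-based filtering (hypothetical;
    VEX pcopen()/pcfilter() uses its own search and kernel)."""
    wsum, vsum = 0.0, 0.0
    for p, v in zip(points, values):
        d = math.dist(x, p)
        if d >= radius:
            continue                 # outside the search radius
        w = 1.0 - d / radius         # simple linear falloff kernel (assumed)
        wsum += w
        vsum += w * v
    return vsum / wsum if wsum > 0.0 else 0.0
```

The proximity weights depend only on distance, which is exactly what a mean-value-style scheme would improve on: its weights would also respect where the control points sit relative to each other.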


I see. The problem there is that for points actually on the surface they just use the barycentric coordinates of the containing triangle; that's what the calculation collapses down to.
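For reference, the barycentric coordinates the calculation collapses to on the surface look like this (a minimal 2D sketch, names my own):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p w.r.t. triangle (a, b, c),
    so that p = u*a + v*b + w*c and u + v + w = 1."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)   # twice signed area
    u = ((by - cy) * (p[0] - cx) + (cx - bx) * (p[1] - cy)) / det
    v = ((cy - ay) * (p[0] - cx) + (ax - cx) * (p[1] - cy)) / det
    return u, v, 1.0 - u - v
```

On the surface only the three vertices of the containing triangle get nonzero weight, which is why surface points can't pick up influence from a surrounding point cloud.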

I see, the problem there is that for points actually on the surface they just use the barycentric coordinates of the containing triangle, that's what the calculation collapses down to.


Ah. OK. Never mind then.

As you were...:P

Guest Scott Schaefer
So...... if each point were interpreted as a surface element with an implicit area (as in that "other paper"), then.... still no go?


The construction in our paper was designed off of an integral formulation of the interpolant. Our proofs of linear precision (which is required for deformations) require that the surface be closed or composed of a collection of closed surfaces. So to answer a related post: yes, the coordinates can be calculated off of triangle soups as long as those triangles form a closed surface. Notice that if you don't care about linear precision (in other words, performing deformations), then the surfaces do not have to be closed.

If you do want to use points, you could think of each point as being the center of a sphere. In this formulation, our construction still applies. The integrals get a bit complicated for a perfect sphere; I haven't been able to work out a closed-form solution quite yet. I suspect that if you assume a constant function over each sphere, then our formula will reduce to a weighted form of Shepard's interpolant (which does not have linear precision).

In order to perform deformations using this point technique you'd have to assume a linear function over the boundary of the sphere. This makes the integrals even more complicated. However, the "point" is that it is possible to use points with this method.

You could always "hack it". For each point p_i add the vectors e_j = r_i(<0,0,0>, <1,0,0>, <0,1,0>, <0,0,1>) to the point to form a tetrahedron. Find the barycentric coordinates of your point "x" with respect to all of the little tetrahedra out there. Notice that each tet has vertices of the form p_i + e_j and x = Sum[b_i * (p_i + e_j)]. Therefore x = Sum[b_i * p_i] + Sum[b_i * e_j], with the last term being a constant vector. I added the r_i into the e_j so that you can give more influence to some points rather than others by increasing the size of the r_i. This deformation method is not ideal because we've introduced a directional bias into the coordinates by using tetrahedra instead of spheres. However, we're just using a simple approximation to the sphere so that we don't have to compute the full integral. Finally, there are better choices for the e_j that will spread the error out more. I used those vectors because they were easy to write down.

If you use the above method, "x" is not just a weighted combination of the points p_i but has an offset as well. This offset kind of makes sense. For instance, consider your point cloud p_i to have three points. Affine combinations of these three points only yield other points in that plane. If "x" is outside of that plane, then you must add an offset vector.
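A sketch of the per-tetrahedron step of the hack described above (function names mine; how the per-point tetrahedra are then combined into one interpolant is not spelled out here, so only the building block is shown): build the little tetrahedron p_i + r_i*e_j and take barycentric coordinates via ratios of signed volumes.

```python
def hack_tet(p, r):
    """Tetrahedron around point p with vertices p + r*e_j,
    e_j in {<0,0,0>, <1,0,0>, <0,1,0>, <0,0,1>} as in the post above."""
    es = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
    return [tuple(p[i] + r * e[i] for i in range(3)) for e in es]

def tet_barycentric(x, tet):
    """Barycentric coordinates of 3D point x w.r.t. a tetrahedron
    (four vertices), as ratios of signed sub-tetrahedron volumes."""
    def signed_vol(a, b, c, d):
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        w = [d[i] - a[i] for i in range(3)]
        return (u[0] * (v[1] * w[2] - v[2] * w[1])
              - u[1] * (v[0] * w[2] - v[2] * w[0])
              + u[2] * (v[0] * w[1] - v[1] * w[0])) / 6.0
    total = signed_vol(*tet)
    b = []
    for j in range(4):
        sub = list(tet)
        sub[j] = x          # replace vertex j with x
        b.append(signed_vol(*sub) / total)
    return b
```

The four coordinates always sum to one, so the reconstruction x = Sum[b * (p_i + e_j)] splits into a weighted point term plus the constant offset Sum[b * e_j], as described above.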

Anyway, I haven't tried this point method at all so I don't know what the results would look like. My notation was a little sloppy above, but I hope you can understand it. If anyone does happen to implement this technique, I'd love to see the results (good or bad). You can always reach me at sschaefe at rice.edu.


That sounds very interesting; not sure if I have a use for it at the moment, but I might.

One limitation of the original scheme, in some of the situations I need it for, is that I don't necessarily want the weighting to be completely evenly distributed. I don't really need to turn up the weight of the control points, but rather to introduce more control points in the interior that more precisely affect points falling nearby. I wonder, therefore, if it is possible to combine the original interpolant function with this other one, so that interior points can be added for extra refinement?


Thanks for the insights Scott!

It'll take some time for me to digest it all, but I'll let you know of any results (as you say, good or bad) from any point-based experiments I might make.

Cheers!


Just to show more visually the difference between using lattice point weights and mean value coordinates:

Check out this hip file; I think it illustrates the point very nicely.

Note: if my warp SOP isn't working for you, ignore the warp example and look at what the lattice is doing, and trust me, the mean value way doesn't suffer the same problems.

pointlattice_compare.zip

Watch this space for an HDK warp SOP; it should be up soon.


HDK'd warp SOPs, now done as capture and deform flavours for extra speed.

Windows-only compile for H7, but the source is supplied if you want to build a Linux version.

odWarp.zip


Wow, the HDK'd version is a night-and-day difference in speed. Thanks a lot Simon, this is extremely useful.

