
Tessellation and limits


Jason


Hi all,

Has anyone done something like this? I'm anticipating needing a polygonal mesh soon that would be tessellated down to polys of less than a certain area.

First of all, there doesn't seem to be a way to simply tessellate selected polys, convex polys or tris, except by Subdivide, which creates unwanted points along the edges.

Assuming we tessellate a face, this would be the perfect thing to have a LoopSOP for: we could measure area, partition into groups based on an area condition, then subdivide and feed it back into the loop.
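Something like this is the sort of per-primitive test I imagine driving the loop (a wrangle-style sketch in modern VEX; the "refine" group and max_area parameter are made-up names):

    // run over primitives: tag anything still above the area threshold
    // ("measuredarea" is a built-in primitive intrinsic)
    if (primintrinsic(0, "measuredarea", @primnum) > chf("max_area"))
        setprimgroup(0, "refine", @primnum, 1);
    // a Subdivide SOP restricted to the "refine" group then feeds back in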

Anyone have any good ideas?


Basically, I'd love to avoid patch-cracking if I had to displace the points, and also to avoid inserting points into polys which share edges with the poly we're splitting.

I was thinking of putting a vertex at the centroid or barycenter of the (convex) polygon.
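As a sketch of that step (a modern primitive-wrangle version; wiring up the fan split itself is left out):

    // add the centroid of each (convex) poly as a new point, which a
    // fan-style split could then connect to the existing vertices
    int pts[] = primpoints(0, @primnum);
    vector c = {0, 0, 0};
    foreach (int pt; pts) {
        vector p = point(0, "P", pt);
        c += p;
    }
    c /= len(pts);
    addpoint(0, c);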

This whole thing is basically to allow me to refine a mesh until I reach as even a distribution of polys as possible, then displace it, and then use a Scatter SOP for SSS stuff.


Doesn't mantra handle T connections properly?

Hmm ... You could use the Divide SOP, but I guess that won't necessarily give you the shape you want. It's interesting that the Scatter SOP seems to handle non-coplanar polygons differently than what the Divide SOP outputs.

I wonder if there isn't some way to just store the UVs on the scattered points and then displace based on that ... it should be the same as if you displaced the original geometry? I'm out of my depth here, as I know next to nothing about rendering.

PS. would something like how they dice up cloth work?


> Doesn't mantra handle T connections properly?
>
> Hmm ... You could use the Divide SOP, but I guess that won't necessarily give you the shape you want. It's interesting that the Scatter SOP seems to handle non-coplanar polygons differently than what the Divide SOP outputs.
>
> I wonder if there isn't some way to just store the UVs on the scattered points and then displace based on that ... it should be the same as if you displaced the original geometry? I'm out of my depth here, as I know next to nothing about rendering.
>
> PS. would something like how they dice up cloth work?

Mantra handles T connections by building coving polygons to fill the gaps created by displacement.

Doing the displacement in the pointcloud afterward won't be possible because the pcfilter() or pciterate() functions won't know about the displacement. It'd never be able to sort your points by distance properly, and the displaced points would require a different kd-tree to be built, etc. I'm pretty sure that when you pcopen() a .bgeo or .tbf file, it initialises a tree structure to accelerate lookups by pcfilter() etc.
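For reference, the lookup pattern in question (a bare-bones surface-shader sketch; the file and channel names are placeholders):

    // pcopen() reads the file and builds the acceleration tree; points
    // displaced after the cloud was written simply aren't in that tree
    float radius = 0.5;   // illustrative search radius
    int handle = pcopen("cloud.tbf", "P", P, radius, 100);
    // pcfilter() averages a channel over the points pcopen() found
    vector irr = pcfilter(handle, "irradiance");
    pcclose(handle);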

How do they dice up cloth, btw? Do they just generate tris, or do they tessellate it up?

So here I'm pimping a PolyDiceSOP with a few nerdy techniques to split up polys and tris, and a much-desired LoopSOP! :ph34r:


> Of course, sorry for not thinking.
>
> Yeah, it triangulates ... evenly scatter points on the model and then connect the points back up. The advantage is that you end up with polygons that are all about the same size.

That could be a useful thing, for sure. :) And since you have that already written... ;)


Hey guys,

Sorry for being obtuse over here (I'll blame it on the sore-throat medication), but what exactly is it that prevents you from displacing the cloud points in the exact way that a displacement shader displaces the source geometry? A genuine question, since I haven't tried SSS on displaced geometry yet, so I'd half-convinced myself that that's the way it would work... but now I see there are "issues" :(


You'd think that the points would have to have been displaced prior to pcopen(), right? If not, you will definitely be messing with the accuracy and efficiency of the whole thing. There isn't a way (at render time) to displace all the points and then re-initialise the pc kd-tree.

Right now the only way to do this is to displace your points in a VEX SOP prior to writing out the pointcloud, and hope that your displacement code, whose filtering depends on derivatives only available in the shading context, provides accurate enough displacement without them in the SOP context.
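In other words, something along these lines on the cloud geometry before the write (a point-wrangle sketch; the noise pattern just stands in for whatever the shader actually does, and it assumes the points carry an N attribute):

    // duplicate the shader's displacement, minus the derivative-based
    // filtering that the SOP context doesn't have
    float n = noise(v@P * chf("freq"));
    v@P += normalize(v@N) * chf("amp") * (n - 0.5);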

It seems like the pc*() functions would need to accept changes to the pointcloud on the fly and reinitialize their tree. I'm sure this could be terribly slow, no? pcdisplace() or something...


> Right now the only way to do this is to displace your points in a VEX SOP prior to writing out the pointcloud, and hope that your displacement code, whose filtering depends on derivatives only available in the shading context, provides accurate enough displacement without them in the SOP context.

Right. That's exactly what I meant: duplicate the displacement (and yes, sans filtering) in a VEX SOP. By definition, filtering kicks in when a large amount of surface area is being squeezed into one pixel, so two things occur to me: 1) at the extremes, the amount of error due to high-frequency displacement in the final filtered result shouldn't be that noticeable, and 2) your pointcloud will be sampling at a much, much coarser resolution to begin with... which forces you to use pcfilter()... whose radius you could adjust as a function of the shading filter size (IOW: the filter base for the pcloud will be *much* larger than the shading filter to begin with anyway)...
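Something like this is the coupling I mean (a shading-context sketch; base_radius and k are knobs I'm inventing):

    // area(P) is the surface area the current sample represents, so its
    // square root is a handy measure of the shading filter size
    float fsize = sqrt(area(P));
    float base_radius = 0.5;   // illustrative knobs
    float k = 4.0;
    int handle = pcopen("cloud.tbf", "P", P, max(base_radius, k * fsize), 100);
    vector val = pcfilter(handle, "irradiance");
    pcclose(handle);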

Again; I'm just speculating here, so I could be all wrong about it.

But since you're obviously in the middle of doing this, I'm really curious: have you duplicated your displacement in a VEX SOP and found a large amount of error in the expected result? (I'm really interested in this, as it might affect me in the not-too-distant future.)

> It seems like the pc*() functions would need to accept changes to the pointcloud on the fly and reinitialize their tree. I'm sure this could be terribly slow, no? pcdisplace() or something...

Yup; that would be slow, as I would think the space partitioning takes place either at pcopen() or (maybe) when creating the texture version. But pcopen() *does* get called on every shade point in its current implementation -- so varying the radius per call wouldn't incur extra overhead... I wouldn't think...

Sorry. I shouldn't be talking without actually trying this stuff out... but I *am* curious about it :)


> Right. That's exactly what I meant: duplicate the displacement (and yes, sans filtering) in a VEX SOP. By definition, filtering kicks in when a large amount of surface area is being squeezed into one pixel, so two things occur to me: 1) at the extremes, the amount of error due to high-frequency displacement in the final filtered result shouldn't be that noticeable, and 2) your pointcloud will be sampling at a much, much coarser resolution to begin with... which forces you to use pcfilter()... whose radius you could adjust as a function of the shading filter size (IOW: the filter base for the pcloud will be *much* larger than the shading filter to begin with anyway)...

Yeah - I tend to think you're right here. If the filtering behaves as expected, then your unfiltered pointcloud should still match the filtered displacements closely enough. Just for argument's sake, it still worries me a touch.

> But since you're obviously in the middle of doing this, I'm really curious: have you duplicated your displacement in a VEX SOP and found a large amount of error in the expected result? (I'm really interested in this, as it might affect me in the not-too-distant future.)
>
> Sorry. I shouldn't be talking without actually trying this stuff out... but I *am* curious about it :)

Actually, I'm not really DOING it right now - it's a labour of love that's going to have to kick in next week, only after this other project wraps. I've only spent a few minutes trying it, and I'm really just thinking about it right now.

No new ideas spring to me right now.... :tumbleweed:


> Right now the only way to do this is to displace your points in a VEX SOP prior to writing out the pointcloud, and hope that your displacement code, whose filtering depends on derivatives only available in the shading context, provides accurate enough displacement without them in the SOP context.

Sorry to be such a noob, but what are these derivatives that are missing in the SOP context? I have really only built simple shaders in VOPs, but I assumed you could copy a VEX displacement network into the VEX SOP context and, with a little tweaking for the differences in the available VOPs, get the same results. Is that not correct?


> Sorry to be such a noob, but what are these derivatives that are missing in the SOP context?

Hey DaJuice,

The concept of "filtering" or "anti-aliasing" a shader (and the tools with which to do it) exists in the shading contexts (and in COPs) but not in the others. This is because a pixel in the final output image can project to an arbitrarily large amount of surface (or many pieces from different surfaces) all crammed into one pixel, so the shader should normally try to come up with a value that adequately represents the *whole* thing... not just a single point sample.

This situation doesn't exist in SOPs. It exists wherever you have an output device (e.g. an image) with a limited resolution, which forces you to *resample* the data.
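Concretely (and just as a sketch), a surface shader can ask how much surface one sample covers; neither call exists in the SOP context:

    // two common filter-width estimates in a surface shader
    float fw1 = sqrt(area(P));                            // from the surface element's area
    float fw2 = length(Du(P)) * du + length(Dv(P)) * dv;  // from parametric derivatives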

Hope that makes some sense :rolleyes:

Cheers!


Nice explanation, Mario.

An example, DaJuice:

If you imagine a checkerboard pattern on a grid getting shrunk down in your camera view, your shader needs to know how big a space one shader sample represents so that it can return a grey colour if many white and black checks end up in the area of one pixel. For some patterns this is easy and for some, exceedingly difficult.

For fBm noise, we often just stop adding octaves once the frequency is too high to be properly represented. The only way we know when we've reached this threshold is by looking at how large an area a sample is meant to represent (look up "Nyquist limit", a common signal-processing term). Luckily, SESI has provided VOPs to do this for us - look at the code in voplib.h for how the Anti-Aliased Noise VOP does it. Almost all of the pattern VOPs have anti-aliasing built in, so hopefully you'll never have to worry about it and can just be creative with VOPs.
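A sketch of that octave clamp, assuming fw is the filter width at the shade point (e.g. sqrt(area(P))):

    float fbm_aa(vector p; float fw; int octaves)
    {
        float sum = 0.0;
        float amp = 0.5;
        float freq = 1.0;
        for (int i = 0; i < octaves; i++) {
            // Nyquist: once this octave's period drops below roughly
            // two filter widths, adding it can only alias
            if (freq * fw > 0.5)
                break;
            float n = noise(p * freq);
            sum += amp * (n - 0.5);
            amp *= 0.5;
            freq *= 2.0;
        }
        return sum;
    }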

Some raw VEX calls also use derivatives automatically - compare the shading function texture() to the generic colormap() function. You cannot use texture() in the SOP context because calculating derivatives is a complex and subtle task for an offline renderer, and depends very much on resolution and shading quality.
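For instance (shading context; the map name is a placeholder):

    // texture() filters the lookup using the derivative information the
    // renderer provides; it only exists where that information exists
    vector filtered = texture("map.rat", s, t);
    // colormap() is a raw lookup at one (u,v) and works in any context
    vector raw = colormap("map.rat", s, t);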

By contrast, rendering into the OpenGL viewport does not calculate derivatives a SOP could use. OpenGL is a different beast.


2 weeks later...

Maybe if Houdini does begin to support SubDs as primitive types, then the Scatter SOP could possibly be enhanced to scatter points on the limit surface instead? (Or an "Assume Polygons Are Subdivision Surfaces" toggle in the SOP.) It could even possibly evaluate any VEX displacement shaders too? ;)


I keep bugging them to add subdivision surfaces as a real geometry type.

I think the only 'true' surface evaluation is done at render time. The Subdivide SOP itself doesn't go directly to the limit surface; it's more of a levelled-off approximation, not an absolute. I would be surprised if they scattered to the limit and not to a level close to the limit...

Without getting into too many details... there are a bunch of rules and ways to get to a subd surface, depending on what your hull is like. Some are faster; others depend on setting up the topology first so it can be subdivided.

I'll stop now before someone comes over and hits me.

-k

