
VDB maintain UVs


the MAT Hatter


Hello All, 

I am using VDBs to combine geometry, and I am interested in a way of maintaining specific attributes from the input geometries. Currently I have two poly objects which are converted to VDBs, combined using a VDB Combine, and then converted back to polys using a Convert VDB. What I want is to be able to identify which polygons on the resulting combined/converted object come from which input object, and also to maintain their respective UVs.

 

I have had some success using the second input of the Convert VDB node to transfer attributes. With this method, however, I am only able to maintain some of the objects' attributes, not all of them. I've attached some images and a file to demonstrate my issue.

 

In the attached file I have three objects: Base, Add Sphere and Sub Sphere. All objects contain UVs and an ID attribute (tmp1). They are all converted to VDBs, combined using VDB Combine nodes, and finally the resulting VDB is converted back to polys using a Convert VDB. When both original spheres are fed into the second input, their UVs and ID attributes are transferred correctly, but the Base object's are not. When all three objects are fed into the second input of the Convert VDB, only the Base and one sphere's attributes are transferred correctly. I have tried several different combinations and methods of making this work, to no avail, but it feels like I'm so close: each object's attributes transfer correctly in some configuration, just never all at the same time. It is possible to use two Convert VDB nodes, one with the two spheres in the second input and the other with all three objects, then delete the problem polys from each (using the ID attribute) and merge the two together, but this is not ideal.

 

In the image on the left you can see the result of the Convert VDB using only the two spheres as the second input; the middle image uses all three objects (base, add_sphere and sub_sphere); and on the right are all three input objects showing their UVs.

 

I am new to VDBs so I was hoping someone might have some alternative methods or could point me in the right direction. 

Thank you all in advance! 
Cheers, 
-Mat
 


VDB_attributes.hip


By clicking the plus sign next to the Surface Attributes parameter on the VDB from Polygons node, you can transfer/convert surface attributes into volume attributes.

Also take a look at the Attribute from Volume Node to convert them back to surface attributes.


Yader,

Thank you for the response! The method you described does successfully transfer UVs from one VDB object/volume to the combined mesh. When I tried using multiple volumes in the second input, though, the UVs get distorted at the seam polygons. Do you know of a way to fix this?

 

I tried a method where I use two Convert VDB nodes, delete the polys without correctly transferred UVs from each, and then merge them together. This seems to produce cleaner seams, but it is probably more computationally heavy with two Convert VDB nodes.

 

In my attached file you can see a couple of different methods I tried (one of them is the way you suggested, Yader).

 

VDB_attributes.hip



The distortion you see is the result of the low-resolution volume you are using. It is only visible at the seam polygons because your example has a base (ground) with UVs ideally parallel to the volume container; the mismatch exists all over the surface, but it is not visible in this idealized example. You can test that: put a Transform node after your UVtexture3 node and route all of that node's existing connections through the Transform. Display the UVquickshade2 node in the viewport and rotate around the Y axis; you will see the standard stuck UVs rotate smoothly with the geometry. Now display Switch1 in the viewport and do the same rotation on the Transform node; you will see the UVs deforming during the rotation. To make it drastic, increase the voxel size to 0.5 on all your VDB from Polygons nodes, so it is visible (from an airplane).

 

To avoid the distortion, or rather to reduce it to an acceptable level, you have to increase the volume resolution (decrease the voxel size) to some reasonable value.

 

When you say "transfer the UV attribute to a VDB", you actually create a new volume field of type vector3. Think of it as another volume layer over the whole container, with the same voxel size as the base SDF. That differs greatly from the concept of attributes you use when creating one on points or primitives: in that context, if you delete a point, you delete all the attribute values assigned to that point. In the volume context, an attribute field is just another layer (like the SDF, a scalar field usually called "surface"). You can delete the SDF field without any impact on, say, the UV field.

All the VDB manipulations you did are computed on the SDF. Your UV volume is preserved no matter what you do to the SDF. The same computations could be performed on the UV field (or any other volume field).
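The "volume layers" idea above can be sketched in plain Python (this is an illustrative stand-in, not Houdini code): a VDB object is like a container of independently named grids, so dropping the SDF grid leaves the UV grid untouched, unlike a point attribute that dies with its point.

```python
# A VDB-style container as a dict of named, independent fields.
# Field names follow the post: "surface" is the SDF, "uv" the UV layer.
volume = {
    "surface": {"type": "float",   "voxels": "...SDF samples..."},
    "uv":      {"type": "vector3", "voxels": "...UV samples..."},
}

del volume["surface"]   # delete the SDF layer entirely

print("uv" in volume)   # True -- the UV field survives unharmed
```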

 

And yes, you can create an integer field too. Just click the + button on your VDB from Polygons node, choose that tmp1 attribute, and enter the name you want for the resulting field. Middle-clicking on the node will give you a list of all its volumes (their names, resolution, etc.). In other computational nodes, type @name=YourVolumeName instead of @name=surface to perform the calculation on it.

 

A little tip about integers and their interpretation in volumes: whenever possible, try to avoid them. During internal volume creation, Houdini usually samples several nearby values around each voxel and averages them (the details depend on the node used to create the volume). So if your integer attribute represents, say, a polygon ID, there will probably be many places inside the container where a voxel is intersected by more than one primitive with completely different ID values. Averaging those will generate some new, truncated integer value which makes no sense if it is later used as an ID.

To avoid that, you can manually force the sample count to 1, which picks just one value (the first found) and stores it in the voxel. That makes sense, but it leads to another well-known problem: jittering. If your geometry is moving through the container and several primitives intersect the same voxel, the sampling algorithm does not guarantee it will pick the same one in two successive frames. In other words, your integer ID inside the volume can jitter.

Conceptually, storing an integer can often be avoided by planning around its purpose. For example, if you want to store an integer ID and later use it as a lookup index to grab another attribute from that primitive (say, color), it is better to create a color volume field and grab the color attribute inside the volume directly, so proper sampling and averaging produce a stable voxel value and no integer is needed at all. If you are in a "no other way" situation and must use an integer, in some cases it is better to store it as a float. Later you can decide what to do based on the value you read: if it is a float with a fractional part, you know it is wrong (several IDs were averaged in that voxel), and you can discard it, use the value from the previous frame, or have some other mechanism to handle the situation. The point is that you can recognize the corruption; in contrast, if you store the integer as an integer, you always read an integer and cannot tell whether it is good or bad.
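The averaging and detection argument above can be shown with a few lines of plain Python (a hypothetical sketch, not Houdini's actual sampling code): averaging IDs across a voxel produces a value that is no primitive's ID, and storing the ID as a float lets you detect that corruption by its fractional part.

```python
def sample_voxel(ids):
    """Average the IDs of all primitives intersecting a voxel,
    the way a typical volume-sampling filter would."""
    return sum(ids) / len(ids)

# A voxel intersected by a single primitive keeps a clean ID.
clean = sample_voxel([3.0])        # 3.0 -- integer-valued, trustworthy

# A voxel intersected by primitives 2 and 7 gets their average.
mixed = sample_voxel([2.0, 7.0])   # 4.5 -- not a real primitive ID

def is_valid_id(value, eps=1e-6):
    """Stored as a float, a corrupted ID betrays itself by its
    fractional part; truncated to int, 4 would look like a valid ID."""
    return abs(value - round(value)) < eps

print(is_valid_id(clean))  # True  -> safe to use as a lookup index
print(is_valid_id(mixed))  # False -> discard, or reuse last frame's value
```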



"Fused, as you say" is something you define with the volume operators inside the VDB Combine node. The math for different volume fields ("volume layers", in this case the SDF and the UV field) does not have to be the same; in general, you can have one VDB Combine node for the SDF "merging math" and another VDB Combine for the UV fields. However, from the point of view of a single voxel at the boundary seam, the values from the UV fields will be averaged, because the voxel cannot hold both UV values from both UV fields. So you have to decrease the voxel size (increase the resolution) on all your volumes; with a smaller voxel size, the seam distortion will be thinner.

 

If you read my previous post about integer primitive IDs, something similar happens here: you have two different UV sets, each referencing pixels from, say, a different texture map. Now imagine one voxel at the boundary seam. From the SPHERE UV field it gets, say, the UV coordinate (0.1, 0.9, 0), and from the BASE UV field, say, (0.8, 0.1, 0). Averaging those values results in a new UV value (0.45, 0.5, 0), which is not close to either of the source UVs. So you see it as distortion.
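The arithmetic behind that seam distortion is tiny, and can be checked in plain Python (not Houdini code): a boundary voxel cannot hold two UVs, so the combine averages them into a coordinate belonging to neither UV set.

```python
def average_uv(uv_a, uv_b):
    """What a seam voxel effectively does when two UV fields overlap."""
    return tuple((a + b) / 2.0 for a, b in zip(uv_a, uv_b))

sphere_uv = (0.1, 0.9, 0.0)   # sampled from the SPHERE UV field
base_uv   = (0.8, 0.1, 0.0)   # sampled from the BASE UV field

seam_uv = average_uv(sphere_uv, base_uv)
print(seam_uv)  # roughly (0.45, 0.5, 0.0) -- far from both source UVs
```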

 

Conceptually, it is a bad idea to use a UV field for "fusing, as you say" two (or more) different UV sets into a single UV field.

 

A better approach is to store each geometry's UVs in a different UV field, so you have uv, uv1, uv2, etc. volume fields. When the math over the SDF is done and the VDB is converted back to polygons, you can use a Point VOP to sample every UV field manually (using two Volume Sample Vector nodes if you have two UV fields) and take whichever UV you want in the case where both sample nodes read a value different from (0,0,0), or apply some other rule, depending on the problem you want to solve. In the attached example, the UVs from the base mesh are copied onto the resulting mesh; then, at the places where the UVs are not defined, both UV fields (from sphere1 and sphere2) are sampled, and the final UV is chosen based on the sampled values.
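The per-point selection logic described above can be sketched outside Houdini like this (the field names uv/uv1/uv2 and the zero-vector "undefined" convention follow the post; the function is a stand-in for what the Volume Sample Vector VOPs plus a comparison would do in the Point VOP):

```python
ZERO = (0.0, 0.0, 0.0)  # convention: an all-zero sample means "undefined here"

def choose_uv(base_uv, sphere1_uv, sphere2_uv):
    """Return the first defined UV sample, preferring the base field,
    then sphere1's field, then sphere2's."""
    for uv in (base_uv, sphere1_uv, sphere2_uv):
        if uv != ZERO:
            return uv
    return ZERO  # nothing defined at this point; leave for a later unwrap

# A point covered by the base mesh: the base UV wins.
print(choose_uv((0.2, 0.3, 0.0), (0.7, 0.7, 0.0), ZERO))  # (0.2, 0.3, 0.0)

# A point only covered by sphere2's field.
print(choose_uv(ZERO, ZERO, (0.6, 0.1, 0.0)))             # (0.6, 0.1, 0.0)
```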

 

scene file:

http://wikisend.com/download/310528/VDB_attributes2.hip


Thank you djiki!

I am not sure I will be able to dive further into this today, but I think there is a way to do this better. With your method you still have a thin stretch of UVs that are completely distorted. They are like that because what should be discrete UV shells are fused together along their outlines. The solution is theoretically simple:

1. Make a decision to which original mesh the polygons on the new mesh are corresponding.

For most polygons you would decide by proximity, but around the borders you make a discrete decision based on the input order of the original meshes. This could be implemented as a step-by-step loop in which each new transfer of attributes overrides the previous transfer in a discrete fashion. So the attribute transfer has to happen in two steps:

First, the identifier attribute is transferred, interpolated by proximity; the result is a float value stored in a buffer attribute. Then the buffer is compared to the integer identifier value that was already on the polygons. Polygons that have not yet received an identifier hold a value of -1 (or something like that). Based on a certain threshold value and some basic logic, a new (integer) identifier value is decided for each polygon. (Repeat for all original meshes.)

Polygons that still have an identifier of -1 at the end of the process could receive some form of default UVs, like an automatic unwrap later in the process.

This might not be perfect, but it would ensure that we get an integer identifier for each polygon.

2. Split the polygons based on their identifier attribute.

3. Transfer the UVs per shell (corresponding polygon shell to corresponding UV shell)

4. Fuse the polygon shells, but do not fuse UV shells. (This step seems trivial, but I am actually not sure how to do this step in Houdini.)

 

The result should be UVs that have seams, but acceptable levels of distortion, even at relatively low VDB sampling density. The identifier could then also be used to transfer material types and more.
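Step 1 above can be roughed out in plain Python (a sketch of the decision logic only, not a Houdini network; the threshold value, the -1 "unassigned" convention, and the per-mesh weights are illustrative assumptions):

```python
UNASSIGNED = -1   # polygons that no mesh has claimed yet
THRESHOLD = 0.5   # assumed cutoff: how strongly a mesh must claim a polygon

def decide_identifier(current_id, mesh_id, buffer_value):
    """One pass of the loop: a mesh overrides the previous decision
    only where its proximity-interpolated float weight is strong enough."""
    if buffer_value >= THRESHOLD:
        return mesh_id
    return current_id

# Simulate three passes (base=0, add_sphere=1, sub_sphere=2) over one
# polygon, with made-up proximity weights for that polygon.
poly_id = UNASSIGNED
for mesh_id, weight in [(0, 0.9), (1, 0.6), (2, 0.1)]:
    poly_id = decide_identifier(poly_id, mesh_id, weight)

print(poly_id)  # 1 -- sub_sphere's claim was too weak to override add_sphere
```

A polygon that never crosses the threshold keeps -1 and falls through to the default-UV/auto-unwrap case from step 1.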


It is possible for simple tasks like the one in Mat's scene (in that case you don't need UV volumes at all, but he asked for a solution using volumes). You will use UV volumes in cases where you cannot decide to which original mesh the polygons on the new mesh correspond.

 

For example, if you have a fluid emitter that fills some objects and collides with others, the resulting mesh cannot be correlated to the source emitter geometry. Or you write a custom pyro solver or shader where you need UVs on your smoke or fluid mesh. I made an example scene where the UVs from the source geometry are converted to a UV field, which is then advected by the velocity field of the pyro solver and used in a custom smoke shader to map a texture onto the smoke. In scenarios like that, a UV field is the good option. If you simulate a few frames of the attached scene and render it, you will see the Mandrill picture mapped through the UV field of the pyro smoke.

 

Scene file:

http://wikisend.com/download/262402/Pyro_with_UV.hipnc

 

cheers


