itriix

Coke can disintegration experiments


Hi everyone, after learning quite a bit about the delayed load shader from the Krakatoa/delayed load discussion and the SideFX help on delayed load found here, I decided to start putting my newfound knowledge to use on a little project... I know everyone in the world has done a "disintegration" effect, but I just wanted to use this as a good testing ground for the delayed load's capabilities.

I quickly found myself running into a variety of challenges that I thought would make this a good post for learning.

Anyways, I'm including a very rough, highly compressed first render of the project, so that I can use it as a starting point for my current issues/learning/questions (which, if you scroll down, are at the bottom of this long post).

Setting aside the dynamics, which I'll tweak later, here are some of my initial challenges:

1. The original model caused me a few different issues, all of which I think are great for learning purposes. I found the free Coke can model here. It was originally a .3ds file, which I imported into Maya, reversed the normals on, and exported as an FBX to Houdini. The export from Maya to Houdini applied the UV information to the vertices. This caused me a problem at first when I was trying to inherit the color for my particles. I was using the Color Map VOP POP approach to apply the color information to the particles, which has been discussed here; it's indeed much faster than using the pic() function. However, it left me with a new problem that I posed here. Because the UV information was stored on vertices, and the Color Map VOP POP's color map needs POINT information, not VERTEX, my solution was an AttribPromote to move the UVs from vertices to points. I'm not sure if there's a better way to do this, because I've heard this approach isn't very good... any ideas? All in all, for the time being, it seemed to work.
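For what it's worth, the averaging that a vertex-to-point AttribPromote performs can be sketched in plain Python (the helper, the data, and the values below are all made up for illustration; this is not Houdini's actual implementation). It also hints at why the approach can be lossy: vertices that share a point across a UV seam get their UVs averaged together.

```python
# Hypothetical sketch of an AttribPromote from vertex to point using
# the "average" method. All data below is invented for illustration.

def promote_vertex_uvs_to_points(vertex_uvs, vertex_to_point):
    """Average the UVs of all vertices that share a point.

    vertex_uvs:      {vertex_index: (u, v)}
    vertex_to_point: {vertex_index: point_index}
    Returns          {point_index: (u, v)}
    """
    sums = {}
    counts = {}
    for vtx, (u, v) in vertex_uvs.items():
        pt = vertex_to_point[vtx]
        su, sv = sums.get(pt, (0.0, 0.0))
        sums[pt] = (su + u, sv + v)
        counts[pt] = counts.get(pt, 0) + 1
    return {pt: (su / counts[pt], sv / counts[pt])
            for pt, (su, sv) in sums.items()}

# Vertices 0 and 1 share point 0 across a UV seam (u=0.0 vs u=1.0);
# their UVs get averaged to u=0.5, which is the kind of smearing
# that makes vertex-to-point promotion risky at seams.
uvs = {0: (0.0, 0.5), 1: (1.0, 0.5), 2: (0.2, 0.3)}
v2p = {0: 0, 1: 0, 2: 1}
print(promote_vertex_uvs_to_points(uvs, v2p))
```

So the promotion works fine away from seams, but any point whose vertices carry different UVs ends up with a blended value.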

2. The model was broken up into four pieces: the LABEL, the TOP AND BOTTOM METAL, the SCREW, and the TONGUE of the Coke can, each with its own material and its own UVs, which caused me some issues when trying to simulate the particles. To begin, I love Miguel Perez's workflow for procedurally generating an alpha matte and driving the particle emission with it; that topic can be found here. Great approach, but as soon as I applied it to four individual pieces, things got a bit more complicated. Not terrible, but it meant I had to repeat the same alpha matte generation for each piece and then simulate particles for each piece individually. Since the particle color was coming from the Color Map VOP POP, I HAD to simulate each piece's particles separately because they pointed to different texture maps. The answer for now: pull each piece into a single POP network, each plugged into inputs 1, 2, 3, and 4 (luckily there were only four pieces), and source each one separately from within the popnet. This let me use a different texture map via the Color Map VOP POP for each piece individually, solving my particle color issue; I just used a Collect POP to merge the results. Once again, this made me think that if the model were a single piece with one texture map, things would have been much easier. I kept considering going back to Maya to make that happen, but continued on in Houdini for the time being.
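A rough idea of what the per-piece color lookup amounts to, as a plain-Python sketch (the tiny textures, the piece names, and the nearest-neighbour sampling are all made up for illustration; the real Color Map VOP does proper filtered lookups). The point is that when each piece has its own map, each piece's particles need their own lookup:

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour sample of a tiny texture (rows of RGB tuples)
    at normalized UV coordinates. A toy stand-in for a color map
    lookup; a real lookup would filter/interpolate."""
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)  # clamp so u == 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]

# One invented "texture map" per piece, keyed by piece name, mirroring
# why each piece's particles had to be sourced and colored separately.
textures = {
    "label": [[(1, 0, 0), (1, 1, 1)],
              [(0, 0, 0), (1, 0, 0)]],
    "metal": [[(0.8, 0.8, 0.8), (0.8, 0.8, 0.8)],
              [(0.8, 0.8, 0.8), (0.8, 0.8, 0.8)]],
}

print(sample_texture(textures["label"], 0.9, 0.1))  # white texel
```

With a single merged model and one map, the `textures` dict collapses to one entry and the per-piece duplication disappears, which is exactly the simplification hoped for above.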

3. Finally, everything was set up; I just needed to get the alpha information back onto each piece individually and apply the material. But darn, the materials were VEX FBX SURFACE shaders, which, when I rendered, didn't seem to recognize alpha! And since they were VEX, I couldn't actually edit the VOP network. I'm not very proficient in VEX, so when I took a look inside I saw a few places I could maybe tweak, but decided to stay away. Instead I found an aluminum material whose VOP network I could edit, and multiplied the alpha into the specular contribution... that did the trick! AWESOME!

4. Lastly, here is the issue I'm facing now. I cached multiple sims to disk with different seeds, created geo nodes for each cache, used the delayed load shader on each geo node, turned on point rendering, and rendered the sequence. When I got the results I noticed: darn, the material's lighting information isn't captured in the particle color. So I see a nice shiny can with specular highlights and all, but as soon as the particles are generated they are a plain, flat metal color with no lighting. This is where I am now. Currently I'm looking into baking out the material information as described in this article. I'm running into some issues, though, and my hunch is it has to do with the individual pieces. If I select my final merged can (all of the pieces with materials, merged into one node) and use that as my geo node to bake from, I get a beautiful texture map with the lighting baked in, but I can't seem to apply it correctly BACK to the model. I tried adding a material to the merged pieces and pointing it to a constant shader that reads the baked map, but it's as if there are NO UVs on the geometry pieces. So I tried a UV Project, which I don't think is correct, because the UV info should already be there; the textures show up correctly beforehand with the original materials. Anyway, I tried it, and I could see the texture map and it looked correct, but of course, as soon as I moved to a different camera angle, the texture was stretched and warped, because it's being projected! That isn't right; I'm just not sure how to fix it.

Anyways, that's it for now. If you've read this far, WOW, thanks so much! I really appreciate it. I've tried to document all of my findings and learning so far through this little endeavour. I hope that by the end I'll have a beautiful render, with all of the approaches well documented, so that this ends up contributing to the learning of anyone else who runs into any of these particular scenarios.

Thanks again so much,

Jonathan

coke_can_disintegration.mov


Okay, so I feel I'm on to something with the whole material/light baking, but I'm still not getting correct results. I'm posting some pictures of the textures being generated. I'm then using a constant material that points to each texture map and simply applying it back onto the original piece of geometry. For the sake of file size, I just opened each image in Preview and took a screen grab. I'm also uploading an image of what the correct render should look like, compared with the unwrapped texture maps applied back to the individual pieces of geometry.

*EDIT* Sorry, ignore the black image; it didn't upload correctly. The other image is fine, though: it's the unwrapped texture maps.

unwrap.tiff

post-3821-125717711963_thumb.jpg


And a side-by-side comparison of the correct render vs. the unwrapped texture maps applied back.

Please excuse the blown-out lighting on the correct render; I'm more concerned with getting the material/light baking procedure to work before I start tweaking the beauty of the actual image. As you can tell, the two images are quite different: the lighting is different and the shadows are different!

sidebyside.tiff


Here is a pic of my Mantra render settings. As I stated earlier, the model is broken up into four pieces: Label, Top and Bottom Metal (one piece of geometry, though), Screw, and Tongue. So I'm trying to bake out the material for each piece individually, then applying a constant material that points to that piece's baked texture map. Based on the odwiki, it's suggested to use the extra render parameters UV Object and UV Attribute instead of the mantra -u command-line way of baking, so that's the method I tried. Since it didn't seem to work correctly, I tried the other method too, but that didn't seem to work either. It does seem to be PARTIALLY working, though, so there are probably some settings I'm just missing. For example, it suggests setting dithering to 0, but I don't see a dithering parameter anywhere. It also suggests setting the hidden pixel filter to "minmax edge"; is this referring to the Pixel Filter that by default says "gaussian 2 2" in the Properties tab --> Output sub-tab of the Mantra ROP? I'm also thinking the problems might relate to the way I'm doing shadows. Still quite a bit of testing to do. If anyone catches anything suspicious, please let me know!

Thanks,

Jonathan

some_of_my_render_settings.tiff


Just to attempt to simplify things, I tried going back into Maya, selecting the four pieces of geometry, and doing a Poly --> Combine, turning them into one piece of geometry. Then I exported as an FBX. I was able to get the model into Houdini with the materials all assigned correctly, based on groups that came along with the FBX. I tried a render, and it looked just like the original model with multiple pieces, so that's cool. This would make everything so much easier if I only needed one particle sim for the entire can instead of one per piece. I made a new network, set things up, and was back at the bake-texture step... where everything came to a screeching halt once again. When I baked the texture out, something was obviously very wrong. I'm just not a UV aficionado, so I'm really not sure what is broken or how to fix it.

Here is a pic that shows the UVs, the unwrapped baked texture map, and a render with that texture map applied. It looks like the textures are all merged on top of each other, which makes me feel like keeping the individual pieces might end up being the better solution. Still, since the model looks correct BEFORE baking, even though it's merged and has multiple materials applied via groups, it seems to me the UVs must be correct; the texture maps do line up in the right areas.

So I'm going in circles a bit. :) If anyone has any suggestions, please let me know your thoughts!

Jonathan


I haven't looked at any of your attachments, but for future reference, consider posting .png files instead of .tiff. It just makes web browsing a lot easier.


Okay, thanks for your tip!

I've started a new topic specifically devoted to the texture baking issue here. I used .pngs :) much, much prettier!

When I reach a conclusion on the texture baking, I'll make sure to post the final solution here.


A new development:

This isn't related to the texture baking issue I'm having, BUT it's something I've been trying to figure out for a while. Originally, for each simulation cache, I used a geo node with a delayed load shader to load that particular cache. That felt inefficient for something like this: if I had 100 caches, that would be 100 delayed load shaders. So I pursued getting this to work with only ONE delayed load shader, using a material override for the file path to the cache. Now it's working!

Here is a link to the topic where I discovered how to get this set up and working.

The gist of it is this:

1. Turn the delayed load shader into a MATERIAL.

2. On the geo node, stop pointing to the delayed load shader in the Render tab --> Geometry sub-tab --> Procedural Shader. Instead, use the Material tab and point it at the delayed load shader's material node.

3. Use the pulldown menu on the geo node's Material tab and choose "Select and Create Local Material Parameters", then select the "file" parameter.

4. In the new parameter that was created, use an expression such as:

`chs("../../out/GEO_" + opdigits(".") + "/sopoutput")`

In the ROP context, my Geometry ROP nodes are named GEO_1, GEO_2, GEO_3, etc. The sopoutput parameter is what points to the file path of the particle sim cache.

In the OBJ context, my geo nodes are named COKE_DELAYED_SHADER_1, COKE_DELAYED_SHADER_2, COKE_DELAYED_SHADER_3, etc. With the above expression, opdigits returns the digit at the end of the geo node's name, which makes the path point at the matching Geometry ROP, which in turn points to the correct particle simulation cache. That result gets used as the "file" parameter of the SINGLE delayed load shader material, giving me the correct sim cache for each geo node! I've already tested it on my four geo nodes and it works wonderfully. Now the funny part is, I'm not really sure HOW much more efficient this is... I guess it keeps the file size down, though. :)

I'm hoping it makes the lookup a bit faster too! Instead of having to look up 100,000 shaders, say, if there were that many geo nodes, it would only need to look at one.
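To illustrate, here is a plain-Python mimic of what that expression evaluates to for each geo node. Both helpers are hypothetical re-implementations written just for this sketch; only the node names and the /out/GEO_*/sopoutput pattern come from the setup described above.

```python
import re

def opdigits(name):
    """Return the trailing digits of a node name, in the spirit of
    Houdini's opdigits() expression function (a made-up stand-in,
    not the real thing)."""
    m = re.search(r"(\d+)$", name)
    return m.group(1) if m else ""

def cache_path_for(node_name):
    # Mirrors chs("../../out/GEO_" + opdigits(".") + "/sopoutput"):
    # each geo node resolves to its matching Geometry ROP's
    # sopoutput channel, which holds the sim cache path.
    return "/out/GEO_" + opdigits(node_name) + "/sopoutput"

for name in ["COKE_DELAYED_SHADER_1", "COKE_DELAYED_SHADER_2"]:
    print(name, "->", cache_path_for(name))
```

So one material parameter, evaluated per geo node, fans out to as many cache files as there are numbered nodes, which is the whole trick.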

Thanks,

Jonathan


A small note: an alternative to FBX from Maya to Houdini is an old-fashioned .obj. It may seem archaic, but .obj files remember group names, normals, and UVs. The plus is you don't get all the transform nodes and subnets from the FBX, slimming down your network and cleaning up your pipeline, and then you can easily ROP it out as a bgeo for a small file size with the same data (plus more info if you want). Using a Facet node on the .obj will also clean up the normals to make it look better; the only drawback is that the Facet node can give you weird artifacts with sub-d's. I use it in my own pipelines a lot.

I have to say I do like your Coke can experiments; I do a bunch of shader and dynamics tests with one. I've attached my current version. Coca-Cola and Houdini, what could be better... perhaps Guinness, too...

post-5070-125757870195_thumb.png


This is a new render... terrible compression, though, but you can get the gist of it. To fix inheriting the material color for my particles, I decided not to render as points but as spheres, so that I could apply a metallic-type shader to them and respect the material/lighting look of the Coke can.

Ben:

The problem happens with both FBX and OBJ; I tried both. Something is just quirky with the way I'm trying to bake a material into a texture map. Not sure why, but the lighting/shadows are not baking correctly...

Any luck with baking?

Cheers,

Jonathan

coke_can_disintegration_03.1_loqlty.mov


Lol, Jon, I didn't even realize it was you posting the WIP, crazy. I need to pull my head out of the computer. Actually, I haven't done much texture baking yet, though I foresee I'll be digging into it in heavy gear in a week. My thesis is due on the 11th of December, and so far I've learned how to instance and delayed-load my whole entire junkyard (currently hundreds of hand-placed objects rendering in just over a minute, certainly going into the thousands, with a tricked-out subnet that carries me most of the way), but I haven't begun testing the shaders; I've just been doing it in AO. A possibility you could try is rendering out an irradiance cache sequence from the Mantra node to see if that works. I don't have time to test it, but it's pretty much a point cloud, like prman's color-bleed occlusion...

Oh, pull your light further back; the dust is leaving the deep shadow map. :ph34r:

The real question is: why don't you just do it as a comp trick? Render out the effect once with a plain white/50% grey Lambert shader to grab the light info, and once with a pure white constant shader to make an alpha version. Then do your beauty on top of it and just fade it out as it breaks up. I think the dust looks pretty cool separate from the object. You'd have to look at the Sandman from Spider-Man to see if, once his stuff breaks up, the lighting is really that dynamic, a.k.a. moving spec and such, but I'm guessing you can cheat that in comp anyway, and it would be more efficient for rendering. Use a noised (turbulence, maybe) alpha math-ed on top of it (not sure whether an add, a multiply, or something else would look better). Using the 50% grey Lambert should kill any spec that would be apparent in the depth/shadow...
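The fade-out part of that comp trick can be sketched per pixel (the helper and all the values are invented for illustration; any real comp package would do this on whole planes). Scaling both color and alpha by the same fade keeps the element correctly premultiplied as it disappears:

```python
def fade_over_breakup(beauty_px, fade):
    """Fade a premultiplied beauty pixel out as the object breaks up.
    beauty_px is (r, g, b, a); fade runs 1.0 (fully visible) down
    to 0.0 (gone). Scaling alpha along with the color keeps the
    element premultiplied, so it comps cleanly over the dust pass."""
    r, g, b, a = beauty_px
    return (r * fade, g * fade, b * fade, a * fade)

# Halfway through the breakup, this made-up pixel is at half strength.
print(fade_over_breakup((1.0, 0.5, 0.25, 1.0), 0.5))
```

The dust element rendered separately would then be added over the top, ramping in as the beauty ramps out.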

Another funny thing: Jeannie just came by, and I think she'll be baking off some fractured objects too for her thesis, because she wants the pieces to retain their color after using the Shatter/Cookie nodes... The info for all of this is out there somewhere... lol

btw... wicked cool dusting B)


Sup dude, you almost done with your final? Thanks for your comments. Aside from that, I still don't think you get the question I'm posing...

I simply want to know WHY the texture baking isn't working. I know how to do render passes, change my lighting, all of that; that isn't my concern. I've already rendered this out using a totally different method, unrelated to texture baking. However, that doesn't change the fact that I just want to know why the texture baking isn't working! As for your suggestion to check out the Sandman from Spider-Man, I attended the tech talk at SIGGRAPH that was devoted to exactly that. They implemented a technique that allowed them to get SO MANY particles by breaking the particle color down to a fundamental three-value choice: white for spec, black/dark for shadow, sand color for everything else. This very small, simple choice made the rendering super fast compared to regular methods. But implementing that here doesn't make sense, because I'm not using something as simple as a sand color; I need a full variety of texture color. Besides, as I said above, that's not really my question anyway.

If you talk to Jeannie, please ask her to email me her discoveries; I'd be interested to know if she has any ideas about what's happening with my example here. Good luck on your final thesis.

Thanks,

Jonathan


Hmmm... so I re-read your post and saw the image comparison in the texture baking thread: Texture Baking Thread... Lighting Issues. So I saw the difference in baking; I'm guessing that's still your furthest progress. Until I play with it some more in a week, I don't know anything first-hand.

The one thing I could guess in that vein, if it's the right vein this time, is that your model is really heavily tessellated. It's possible your UVs are overlapping and not laid out cleanly enough, so information is getting confused, but it doesn't seem like that, since you do get the color and are only missing the lighting info. I guess the simple way to test would be to try it on a simpler model, a.k.a. a box, and UV Pelt it flat to see whether that's really the problem or not. I could give you my can, which I did from scratch, in case that makes a difference.

**Shoulder shrug and head scratch** Good luck, man. I'll ask Jeannie to forward any knowledge she comes across. I'll let you know what I find out once I get all my junk placed.

...last thought, after I hit the reply button: it could possibly be your shader not sending the right info out, or that version of Mantra not reading it right (the obvious, I know). When I crack open my shaders and rebuild them, I toss out a lot, and I've never tested them for baking issues. Also, when I jump back and forth between versions of Houdini at school, I get different errors from the slightly small things they change with shaders (for instance, downgrading the Houdini 10 occlusion VOP to 9.5 doesn't digest easily). Maybe inside the shader the lighting model isn't getting exported to the right place in Mantra. They have Houdini 10.0.428 at school right now; it might be a higher build than the one you've got currently. I know 10.0.401 was giving me issues with my delayed reads and instancing networks, which is why I made the school put on a higher build before they installed it randomly throughout the lab and classrooms.


Yeah, I dunno either. I'm thinking it has to do with the material baking, because from what I've read it's not fully supported in Houdini (yet?). It involves invoking some hidden rendering parameters on the ROP or using a mantra command, though they suggest using the hidden parameters over the mantra command. Anyway, thanks for your comments. I'll post more when I solve it. For now... time to integrate this thing with a scene. :)

Jonathan


Updated my coke can disintegration...

Improvements:

Fixed the render passes problem (it involved multiplying my alpha into all of my AOVs), and set up a switch SHOP and a RENDERPASS variable in the pre/post-render scripts of the Mantra node so that I didn't need to set up takes. Also used an Object ID pass to be able to break out the individual objects later in the compositing program.

Simplified the .hip file (used material overrides instead of multiple materials).

Added a very temporary scene so that I could begin testing shadows/lighting. I'll replace it with a matchmoved real-world scene soon!
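The alpha-times-AOVs fix mentioned above amounts to premultiplying each pass; a toy per-pixel version in plain Python (the data structures, pass names, and values are all made up for illustration):

```python
def premultiply_aovs(aovs, alpha):
    """Multiply each AOV's RGB by the alpha plane, pixel by pixel,
    so every pass carries the same edge coverage and comps together
    consistently.

    aovs:  {pass_name: [(r, g, b), ...]}   one tuple per pixel
    alpha: [a, ...]                        same pixel count
    """
    out = {}
    for name, pixels in aovs.items():
        out[name] = [(r * a, g * a, b * a)
                     for (r, g, b), a in zip(pixels, alpha)]
    return out

# Two invented passes over a two-pixel "image"; the second pixel is
# half-covered, so every pass gets scaled there identically.
aovs = {"spec": [(1.0, 1.0, 1.0), (0.5, 0.5, 0.5)],
        "diff": [(0.25, 0.5, 1.0), (1.0, 0.0, 0.0)]}
alpha = [1.0, 0.5]
print(premultiply_aovs(aovs, alpha))
```

Without this step, unpremultiplied AOVs composited over a background show bright fringes wherever alpha is fractional, which matches the render-pass problem described.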

Issues:

Still finding a few issues trying to use the delayed load for this. So far, I think that using the delayed load for heavy particle sims that need materials (rather than just point rendering) isn't very flexible or easy to set up. Example: usually I would use a Wedge ROP and a Geometry ROP to wedge my popnet seed; that way I could easily get multiple versions of the same simulation with the click of a button, and then, with a couple of expressions, read them in with my delayed load. Problem: the delayed load would not recognize my AOVs, and I don't think that's due to the Geometry ROP. So I tried the Mantra Archive instead, saving out a bgeo version and a material .ifd. That did fix my material color change, and I was also able to get AOVs. BUT this leads to a new problem: I'm unable to use the Wedge with the Mantra Archive, which means I can only get ONE sim pass out! That sucks compared to just using the Wedge and the Geometry ROP and then reading in my files and rendering as normal with the delayed load. If the Wedge worked with the Mantra Archive, all would be good. I even tried setting up a variable at the OBJ level, pointing it at my seed, and using the Wedge to change that object-level seed parameter, hoping the Mantra Archive would pick it up; it still only saved out ONE sequence and didn't respect the wedging at all. One more issue with the Mantra Archive: unless I'm doing something wrong here, I'm unable to use a switch SHOP effectively with it. It doesn't seem to save all inputs, only the current one! That doesn't help when I want to render different Mantra ROPs that activate different inputs of the switch SHOP, and it once again reinforces why, at least for NOW, I don't think it's a good idea for me to use the delayed load on this one.

If anyone knows any workarounds for what I'm talking about, PLEASE post your comments. (I'm aware that I could create MULTIPLE Mantra Archives, one per seed sim; create MULTIPLE geo nodes with popnets with different seeds; cache out that way; then read all of those in with the delayed load, while also setting up takes to use different materials at render time instead of the switch SHOP.) That sounds a bit over the top for a setup, though, so I'll stick with the way I'm doing it for now unless someone has a better solution.

Thanks,

Jonathan

coke_can_disintegration_05_w_bg_x264.mov


NEW INFO FOUND:

Okay, so I was trying to re-sim using my Wedge and noticed I wasn't getting multiple results... and then I noticed I was rendering the Geometry ROP instead of the Wedge ROP. Seriously, I must have been brain-dead when I was trying this with my Mantra Archive! So I've now found that the Mantra Archive does in fact work fine with the Wedge. "DOHHHH"...

:)

Jonathan
