Justin K

Everything posted by Justin K

  1. Revolving Pyro Simulation

    Hey! I'm trying to get a pyro simulation that slowly rotates in a circle, with an open center. The best way to picture it is a camera inside a hollow tornado, surrounded by a circular wall of smoke. I need to make a section of this wall, and then I'm hoping I can instance the simulation a few times to get more depth. I've had decent success with the vortex force and a static collision object: the vortex force drives the circular motion, and a static obstacle object in the center keeps it hollowed out. I run this for a bit and end up with a revolving band of smoke. PROGRESS! There are some issues, though. The simulation itself is slow (I know there could be a thousand reasons for this). The thing is, I have been pretty thorough about keeping the resolution low and the voxels as large as possible for this testing stage, hoping I could iterate quickly, but it's not happening. I have just a few spherical emission points arrayed in a circle (only 11,000 voxels), and I feed this into a pyro sim with a pretty agreeable division count (around 150,000 voxels), yet it still takes a long time to sim. I understand this is hard to debug without a scene file, but out of curiosity, has anybody found an efficient alternative way to do this? I originally created a radial velocity field in hopes of driving the sim, but the control was nowhere near as good as with the vortex force. Any tips, example files, or suggestions would be appreciated! Thank you!!!!
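    For reference, the radial field I originally tried was along these lines -- a minimal sketch of a tangential swirl with a hollow core, built in a Volume Wrangle (the field name, radii, and parameter names here are illustrative, not my exact setup):

      // Volume Wrangle on a vector volume named "vel", centered at the origin.
      // Builds a swirl around the Y axis that fades to zero inside a hollow core.
      vector rel = set(v@P.x, 0.0, v@P.z);   // radial offset from the axis
      float  r   = length(rel);
      if (r > 1e-5)
      {
          float  falloff = smooth(chf("core_radius"), chf("core_radius") * 3.0, r);
          vector tangent = normalize(cross(rel, {0, 1, 0}));
          v@vel = tangent * falloff * chf("swirl_strength");
      }

    Piped in through a field force or the advection input, this gives direct control over rotation speed, though as noted above, the vortex force still behaved better for me in practice.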
  2. Trying to recreate a venation algorithm.

    Hey Ballington, it's been a while since I worked with this stuff. I have successfully gotten a leaf venation system to work, building off the efforts of those who have gone before me. It turns out the most efficient way to get started with leaf venation is to reuse the space colonization algorithm showcased here: Now I guess it depends on what your goals are. I wanted to make realistic veins for texture displacement, using seeds to randomize the result, and that is quite possible. I've attached some pics that show the system working, based off Entagma's code. There is a problem, though: the algorithm is not intelligent enough to organize the resulting geo in a logical manner. It's a madhouse of prims and points, trust me. That's where you come in. You need to create a logical system to drive the creation of the geometry itself. First, you need a generational pattern: the main stem is the first generation, the next stems are the second generation, and so on. I've attached a pic that shows the logic of this process. Once you have generations set up, you can loop through each generation using connectivity and create a width attribute along each 'vein'; this is what drives the size of each vein when you actually add geo (see the sketch below). I've included some pics of a nice result. You can then literally project this geo onto a heightfield and export a displacement texture. It's challenging, and there are all sorts of pitfalls along the way. I wish I could show you the code, but it's not something I am allowed to share. All the best mate!
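    While I can't share the actual code, the width-per-generation idea is roughly this -- a minimal sketch, assuming each curve point already carries an integer generation attribute from the growth loop (names and values are illustrative):

      // Point Wrangle over the vein curves. Width tapers with each
      // generation; a decay of 0.5 halves the width per generation.
      float base  = chf("base_width");
      float decay = chf("decay");
      f@width = base * pow(decay, i@generation);

    A Sweep SOP (or Polywire) reading that width attribute then gives each vein its tapered geometry.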
  3. Hey folks, I'm trying to get a heightfield map, exported as an EXR, to work as a texture on a grid in Solaris. I'm trying to get it working in the viewport first and then in Karma, but I am not having much success. Anybody have any experience with this? I've attached the scene file and the EXR texture. Thanks!! height_v001.exr heightfield_testing.hiplc
  4. Storage

    Thank you Symek. Thunderbolt 3 in conjunction with the SSD G-Speed Shuttle would be perfect, but it seems like getting it working with an AMD system might be a bit tricky, which is unfortunate. I gather only two motherboards for AMD CPUs are Thunderbolt certified, and they trade the 40GbE network port on the motherboard for a Thunderbolt 3 port, so you don't get both :(. Might seem trivial, but it makes flexibility more of an issue if I ever wanted to run things over a network. USB 3.2 Gen 2x2 (USB-C) is the closest equivalent at 20 Gb/s, about half as fast as Thunderbolt until USB4 comes out, so perhaps that will be good enough. At any rate, that G-Speed Shuttle is beautiful -- it seems like it would be perfect for storing footage on location, though as you say, despite the redundancy, that data would probably still need to be saved elsewhere. Now, for the network option: can I get away with not using one of those expensive switches if I only plan on directly connecting to one PC, and then perhaps just browsing files here and there from a slower wifi-based laptop? The actual NAS storage otherwise seems somewhat 'reasonable' -- by that I mean around 3k. I've been reading this article: https://nascompares.com/2019/10/07/thunderbolt-3-vs-40gbe-nas-in-2019/ It conveniently also showcases Thunderbolt, but I am not going Intel for this build, largely for price reasons. At any rate, I appreciate the points. I might wind up getting a high speed 2 TB SSD to work locally off of, and then a network NAS to back things up to on a daily basis. It won't be live, but at least it will be safe (enough). I'm just a bit worried about the 2 TB. I feel like with iterations you can run out quite quickly :(.
  5. Storage

    Hey folks, I'm building a new workstation, my first personal build, and I'm struggling a bit with the best way to set up high speed storage for simulation work. In my current PC I use a small high speed SSD (128 GB) as the boot drive and an additional mirrored RAID for storage. That RAID does not have nearly the read/write performance of the SSD, though, which in the end hurts performance. I'm upgrading, and I want to find the most optimized way to work with large amounts (> 1 TB) of data in a way that doesn't require constantly transferring files and folder structures. So this is what I'm struggling to grasp. I've been told to get the lowest latency, fastest read/write combo for the boot drive, so I'm getting an Optane 905P (capacity up for debate) for the boot drive, coupled with a Threadripper processor. That all makes sense, and that drive alone can hold up to 1 TB, which is a lot. However, does it really make sense to work off that drive? I frankly don't think so. Though it would fit a lot of simulations, if I had a 14 shot sequence, with iterations, it would probably make sense to have a live storage capacity of, let's say, 10 TB to hold the data. But then your speed is compromised, right? If my Houdini files are housed in the same place as my sim and render data, then I'm at the mercy of that drive's read and write speeds, regardless of how fast my boot drive is. I guess that's the key here: how do I pair a high end boot drive with a blazing fast storage solution? I've done research into setting up NVMe RAIDs in PCIe 4.0 slots, which could get me around 4 TB of storage, but anything higher than that is Seagate NAS-level storage, which I don't know much about and which doesn't seem to be the fastest way to work live. I don't have an insane budget (around 6k), and I also know nothing about setting up something like a home NAS. Usually that seems to be for long term storage, not 'live' storage, though I have heard about fiber connections with 10 Gb/s transfer rates -- I'm all ears for insight on that. Well, trying not to be ranty here, but I hope this all makes sense. Absent a render farm, a multi core Threadripper (3970X or 3990X) with an Optane boot drive seems to be the fastest home combo for Houdini sim and rendering; now I just need to understand the elephant in the room, STORAGE. Thanks in advance for any insight!!!
  6. Hey all, I posted this a bit ago, but I am updating the post with a lot more context. I am trying to find a way to render volumetrics in Houdini. More specifically, I have a night scene with street lights. Because it will be raining, there will be volumetric influences on the light sources: I want the lights to be cut with god-ray-like light patterns, or at the very least to have convincing fog throughout the scene. I am struggling to achieve this effect with Mantra. The first two images are what I am using as reference. The first is the scene I am trying to emulate in 3d, complete with a rain sim and an umbrella rig (yes, the same one from a previous post -- all help was appreciated) (source: https://angelganev.deviantart.com/art/City-Nights-III-Day-82-700866138 ) pic 1 The second is an inspiring render I came across, with very well designed volumetrics in the sky and convincing wet maps at ground level (source: https://www.behance.net/gallery/35226551/52Hz ) pic 2 The focus for now is the volumetrics. I have attached the master scene file with the assets included as a zip, in hopes of getting some detailed help -- it's a pretty simple file. I've been trying to use this as a reference: https://www.sidefx.com/tutorials/god-rays-light-beams-through-volumes-updated-20170508/ but am struggling with the implementation. Something in materials seems to have changed in Houdini 16 (or, more likely, I am just missing something). Overall, the suggested technique seems to be to simply apply a volumetric shader to a bounding object and use that to apply volumetrics to the scene. This approach works as intended, but only if the camera is not inside the bounding volume, which in this case defeats the purpose. See pics 3 and 4: one shows the volumetric box from an external view and its effect on the light sources; the other is from inside the volume, where nothing seems to be happening -- even if I try to trick it and reverse the normals, the result is just wrong. Is there a way to do this at the shader level? I know I can use an IsoOffset and build volumetrics at the object level, but when I was working with IsoOffsets my render times were ridiculous, and I don't want a scene with crushingly long renders if I don't have to. It was pointed out that if you keep your volumetric samples 'cranking' at the material rather than the scene level, your results will be much faster -- hence why I'd like to implement the shader level approach if possible. Again, any help would be appreciated. Thank you! Painting_JK_01.zip
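    (If the shader level approach stays stubborn, the fallback would be an actual SOP-level fog volume, since a real volume renders correctly with the camera inside it. A minimal sketch of an exponential height fog, assuming a scalar volume named density that covers the scene; every name and value here is illustrative:

      // Volume Wrangle on a scalar volume named "density":
      // thin uniform fog that decays with height above ground.
      f@density = chf("fog_density") * exp(-max(v@P.y, 0.0) * chf("height_falloff"));

    The trade-off is exactly the render cost mentioned above, since the volume samples are then paid at the scene level rather than the material level.)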
  7. How can I set up my preferences so that simulations are disabled and the update mode is set to manual whenever I open a Houdini file? Thx!
  8. Hey! Can anyone explain the difference between these two approaches to applying velocity fields to a pyro sim? Example 1: a SOP vector field piped into a field force after the pyro solver itself. Example 2: the same SOP vector field imported into DOPs with a source volume DOP as a volume called force, which is then called in a gas advect DOP piped into the advect input of the pyro solver. I'm having issues with method 2. Overall I am trying to make a pyro sim narrower, and am using a custom velocity vector field to try to force the gas to stay more compressed, but I can't get convincing interaction with the divergence stage of the sim. I had been trying the gas advect for a while, but now I'm trying the example 1 approach. I was curious whether the field force would work better, because it operates independently of the pyro solver and its non-divergent (pressure projection) stage. Thanks!
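    For context, the kind of 'pinch' field I'm building looks roughly like this -- a minimal sketch, assuming a vector volume named force centered on the plume's axis (names and the linear falloff are illustrative):

      // Volume Wrangle on a vector volume named "force":
      // velocities point inward toward the Y axis to narrow the plume,
      // growing stronger the farther a voxel sits from the axis.
      vector radial = set(v@P.x, 0.0, v@P.z);
      float  r = length(radial);
      if (r > 1e-5)
          v@force = -normalize(radial) * r * chf("pinch_strength");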
  9. Hey, I'm reposting this here -- this video was extremely helpful in resolving my problem. Thank you Andrew!!!
  10. Hey, besides gas curve or gas blur (the microsolver used for the viscosity parameter), is there a way to keep a smoke column tightly compressed for longer than usual? Basically I want to keep a smoke plume compact for a long period of time (tumultuous but not dissipating), even as it is being pushed by a wind force. I'm struggling to achieve this effect without the column being blown apart. Viscosity keeps the column together, but it also makes the plume move like a fluid. I'd like to keep the column compressed but still have a lot of turbulence within it. The result is easy enough to achieve without wind; with wind it seems difficult. I've attached a flipbook of what I have at the moment -- the column shape would be great if the smoke movement didn't look so contrived. Thanks sim_test.mp4
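    In case it helps anyone sketch an answer, the kind of thing I imagine might work is stripping only the outward radial component of the velocity, so internal turbulence survives but the column can't fly apart. A minimal sketch, assuming a Gas Field Wrangle with the vel field bound and the column roughly on the Y axis; the confine parameter is illustrative:

      // Gas Field Wrangle on "vel": remove outward radial motion only.
      vector radial = set(v@P.x, 0.0, v@P.z);
      float  r = length(radial);
      if (r > 1e-5)
      {
          vector dir = radial / r;
          float outward = max(dot(v@vel, dir), 0.0);
          v@vel -= dir * outward * chf("confine");   // confine in 0..1
      }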
  11. Hey, I've been tasked with recreating smoke plumes for a composite. I'm new to pyro, but I'm slowly familiarizing myself with the pyro solver for the task at hand -- Applied Houdini's dynamics courses have been enormously helpful for this. The best reference for the large scale structure we are trying to achieve is this: https://www.shutterstock.com/video/clip-1006794988-chimneys-power-plant-sunset-air-pollution-concept A secondary example would be this: https://www.shutterstock.com/video/clip-1022524201-hamilton-ontario-canada-january-2019-factory-steel

    Two issues I am really struggling with are: 1) keeping the structure of the plumes together over time (say 800 frames) while still moving in the specified direction -- I'm referring to this as the large scale shape; and 2) keeping high levels of disturbance and movement within the plume structures even when they get far away from their original vel source -- small scale detail retention. I have noticed that, generally speaking, the pyro simulation inevitably starts to disperse and smooth out over time. I know there could be many reasons for this. A few things that could be affecting it, as far as I know:

    1) Temperature diffusion (this could cause smearing) -- consequently, to counteract this, I have it very low.

    2) Turbulence issues: turbulence is needed, as far as I can tell, to give the plumes variation in structure. However, turbulence applied globally over a long enough duration, with a secondary wind force, will slowly spread the density apart in all directions throughout the volume. This makes sense, but it is also a problem: we need turbulence, but the turbulence can't blow the structure apart over time.

    3) The incompressibility rule in pyro sims? I've listened to some explanations where this rule is actually somewhat problematic for something like a smoke plume or an explosion, where the volume of the sim should actually be increasing over time. I currently am NOT using combustion, though -- perhaps I should be? A useful question to ask might be: when smoke leaves an exhaust pipe, is it combusting? Or simply cooling off very quickly as it interacts with the air? Science help! (thx!)

    4) Incorrect temperature change? Right now I have it losing temperature very quickly, which I believe is what happens in nature with smoke plumes, but I'm not sure. I'm hoping this gives the smoke a heaviness and keeps it from rising. However, the smoke column still needs to be very active within the confines of its own shape.

    5) Incorrect time steps? Right now I have the timescale of the sim set quite low (0.2), in hopes of keeping the sim from emitting and blowing apart too quickly -- again, not sure though.

    At any rate, any tips on this stuff would be REALLY appreciated. I understand on a basic level what the different microsolvers are doing, the difference between importing volume fields to affect things as opposed to using microsolvers, and alternative effectors like the wind force, so any tips would be helpful.

    OK, so, the current state of my terrible sim lol: I have tried a variety of different things to achieve the desired look. One of the first things I tried was driving the plumes along a curve. However, I could not find the sweet spot with this method, the sweet spot being: the plume follows the curve structure-wise, but does not stretch out and lose shape as it goes. Basically, the plume would start to smear along the curve if the attraction was too high; conversely, if the curve attraction was not strong enough, the velocity pushing the plume along the curve would not be enough to keep it from rising into the atmosphere with temperature. I tried for a while to get the velocity force along the curve and the temperature to play nicely together, but was unsuccessful overall in keeping the plumes from looking like they were being pulled rather than pushed in a direction. I gave up on this art directable shape idea, and instead went with a custom wind volume I built in SOPs. This is currently pushing the volume in the wind direction, with some slight oscillation in the y values and the amplitude to give it visual interest, and a turbulent noise running with a time offset as well. This velocity volume, I should point out, is NOT being applied globally to the voxels: I'm using the density as a mask (see the sketch below), in hopes of keeping the thing from falling apart -- and yet it still is.

    As a side note: the initial shape of a pyro sim looks different from the look at, say, frame 200, where the emission and the general look have become more consistent. In other words, there is an initial blast structure and then what comes afterwards. In my case, I don't want the initial blast of plumes; I want what the sim looks like once it has reached some sort of balance. Is there a way to avoid having to look at this rev-up time every time I sim? I was thinking of saving checkpoints, say to frame 200, then simming from that checkpoint. I would set keyframes for everything I was tweaking at frame 200, then dial in new values at 210 and watch the changes evolve. Is that a common approach? Any other questions you have about my setup, feel free to ask! I've provided a scene file -- SCENE IMAGE shows the network being used for this (everything on the purple background). I've attached a flipbook still as well, for whatever reason. Hope this is enough here, all. Any tips or pointers would be greatly appreciated. Thank you! smoke_plooms_v003.hiplc Still_Ref.zip
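    For reference, the density-masked wind mentioned above is essentially this -- a minimal sketch, assuming a Gas Field Wrangle with both vel and density bound (parameter names are illustrative):

      // Gas Field Wrangle: add wind only where smoke actually is,
      // so empty voxels aren't stirred and the plume edge erodes less.
      float  mask = clamp(f@density / chf("full_density"), 0.0, 1.0);
      vector wind = chv("wind_dir") * chf("wind_speed");
      v@vel += wind * mask * f@TimeInc;   // scale by the timestep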
  12. 3d Volume from Image?

    Just as an update on this: we have killed the project. The image we were given did not lend itself well to the concept -- it was an image of a man comped against a sky to begin with. If we were to do this in the future, we would generate some 3d geometry to help the cause. Szymon, your heads-up about the depth-from-image software is a really great thing to keep in mind, though. Thanks for the tips, both of you. Best, JK
  13. Hello, is it possible to generate a 3d volume with depth from an incoming image? The request is to convert a figure from an image into a cloud-like structure, and I'm trying to figure out an intuitive way to do this. Currently I've just managed to map the image to the uv coordinates of a piece of geometry, scatter points based on the color, and then create a volume from that, but that is all I have so far. My thought was to perhaps deform the geo based on the luminance values of the image so as to give it more depth, but overall I'm not really sure how to approach this. I've attached a pic and the current scene file. Any tips would be appreciated. Thank you! image2cloud_testing.hiplc
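    The luminance-as-depth idea I mention would look roughly like this -- a minimal sketch, assuming the scattered points carry the sampled color in Cd and a normal in N (on a flat grid, {0, 0, 1} would do); the depth parameter is illustrative:

      // Point Wrangle after the scatter: push each point along its
      // normal by the pixel's luminance to fake depth before meshing.
      float lum = luminance(v@Cd);
      v@P += v@N * lum * chf("depth_scale");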
  14. 3d Volume from Image?

    Thank you as well, Szymon; taking a look at your file too. Unfortunately we do not have depth information for the image -- as you suspected, that is the challenge here. And I agree with you completely, having some sort of geo should help. I'm gonna see what happens applying Konstantin's code to a premade 3d cloud volume and see what I get. What we might try is to generate some real proxy geo for the image, project the image onto the geo as a texture, convert that into a volume, and drive both the color and the density of the volume from the point or vertex color. Maybe? Lol. Thank you both. I'll post here again with a file if I have any success. Cheers, JK
  15. 3d Volume from Image?

    Thank you, Konstantin! Implementing this code this morning.
  16. Well....it works!!!! You don't even need to go into the subnet -- just bind your parameter to the outputs you want; you can then rename the channels if you want the export names to be different.
  17. Hey all, the Bake Texture ROP gives you a convenient way of sending point colors out to a texture map, which is really helpful. I exported a custom curvature mask this way, created from the Game Dev curvature SOP. All it took to get the curvature values out into a map was: 1) apply a principled shader to the desired geo; 2) have a light; 3) set the shader to use point color; 4) in the Bake Texture ROP, check Surface Unlit Base Color (basecolor). Thanks to the folks online who helped with that, by the way. My question is: what would be the smartest way to kick out multiple masks from the Bake Texture ROP? Say I wanted to send out 5 or 6 different curvature maps as flat color information -- can I hijack one shader to do this? I'm pretty confident I can, but I'm not sure of the workflow. My thought was to build up a set of attributes corresponding to my masks, say v@curve_mask_one, v@curve_mask_two, etc., and then try to hijack some of the preexisting inputs with these attributes -- say the metallic, the transcolor, and so on. Looking inside the shader network, I can see that the various parts (basecolor, metallic) are set up in a very similar fashion. I've attached pics of the surface and the metallic inputs to show how similar they are (pics 1 and 2); you can see they both have an option for using point color. If you then dive inside, there is a bind labeled Cd (pic 3). Is this Cd bind what calls the point color in the first place? If so, I figured I could just replace the Name: Cd with, say, curve_mask_one, and the bake texture would behave appropriately? Maybe. My fear is that nothing inside these subnets actually shows how the shader distinguishes the metallic map from the transcolor map. I am trying this out now, but also posting this in hopes of getting suggestions so I can avoid any rabbit holes. Thanks in advance for any suggestions!
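    (For the record, one fallback would be to pack three masks at a time into Cd's RGB channels, bake the unlit basecolor as before, and split the channels in comp afterwards. A minimal sketch, assuming the masks are stored as float attributes:

      // Point Wrangle upstream of the shader: pack three float masks
      // into Cd so one basecolor bake carries all three at once.
      v@Cd = set(f@curve_mask_one, f@curve_mask_two, f@curve_mask_three);)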
  18. Hello! I'm trying to use the attr token provided in Arnold. It's supposed to let you use user-defined attributes to change things in the shader -- in this case letting me apply different texture sets to primitive groups that share the same uvs. There are six pieces of geometry, each with an id attribute and a string attribute set to the file name of the texture it should grab. The syntax is supposed to be something like this: https://docs.arnoldrenderer.com/pages/viewpage.action?pageId=55711930 (see pic as well). So I'm pointing to the folder and then writing in the attr token as suggested, but I'm not getting any results. Anyone know what I might be doing wrong? They are EXR files, all in one texture folder. When I point directly to the file location, the file reads, so I don't think it's the file -- I think I am just not typing something correctly. Yeesh. Thanks!
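    For anyone comparing notes, my attribute setup is roughly this -- a minimal sketch; the attribute and file names are illustrative, and the <attr:...> token syntax is from the Arnold docs linked above:

      // Primitive Wrangle: stamp each petal group with the string the
      // image path will substitute, e.g. tex_id = "petal_03".
      s@tex_id = sprintf("petal_%02d", i@id);

    The image filename would then read something like $HIP/tex/albedo_<attr:tex_id>.exr. One thing I'm still checking is whether the attribute actually reaches Arnold as user data -- if it isn't exported to the renderer, I'd expect the token to resolve to nothing.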
  19. Hey all, as I said yesterday, I have everything working for my maps (well, from a technical perspective lol) thanks to all your help. I am doing look dev now, and the feedback is, well, a bit slow. It's not like I'm being brutal with settings, either -- I'm only look-deving one flower. I'm doing my due diligence and am going to try to use Mantra for this. However, as an example: a scene with just a 1k environment light plus one area light, one flower without material overrides, one displacement map for the flower, and two shaders (one for a sphere sitting between the petals and the area light, and one for the petal) is taking 10 minutes to render with standard Mantra settings and a 1280x720 camera???? Displacement effect scale is at 0.002, the geo is low res, transparency is set to 0.1152 (slightly translucent), and the subsurface is set fully to 1. I'm not sure this is going to be efficient enough, unless Mantra's optimization starts to show itself as the amount of geometry significantly increases. My question is: would it be worth switching over to Arnold for this? Arnold has some really nice translucency capabilities, and I have some decent experience with it from a paper project I did a while back. However, the big question then becomes: can I use these material override setups inside a third party renderer, or is this a Mantra specific technique? Thank you!
  20. Hey, I have an object made up of 6 different poly groups -- 6 petals in this case. Each petal has the same uv set as all the others. The goal is to assign different textures to each petal. I'm leveraging some of Houdini's capabilities to build venation systems and then sending these venation maps out as displacement maps; these then drive materials and textures made in Substance Designer. My question, though: at some point all of these textures are going to have to come back into the project, right? So let's say this flower with its 6 petals is eventually iterated 180 times -- that's 1080 petals, which becomes 1080 texture maps. Most likely I won't be that abusive with the amounts, but you get the idea. To apply a different material to each petal I'd have to split every single petal off, either inside an object or outside (doesn't really matter -- same principle), and give it its own material and texture assignments. Getting clever with expressions will probably alleviate some of the work here, but I'm just wondering: (1) is there a methodology for this I should be following -- I'm sure in games they do this type of procedural texture variation all the time; and (2) is the render speed going to suffer if I am using so many shaders, as opposed to, say, a UDIM tile workflow with one shader? Setting up UDIMs would be tedious as well, but overall I'm just looking for suggestions. The key thing: one uv set, the same type of material, but different texture sets. Thanks!!
  21. Thank you everyone for your help! I have it working now! If anyone would like the file at some point, just flag me and I'll send out a simplified version. All the best, Justin
  22. Saw this while writing the other response -- thank you! Looking at your file now.
  23. I think I'm close to the answer, looking through some old posts: https://www.sidefx.com/forum/topic/50236/
  24. Hey, I'm actually having some trouble implementing this -- where does the id attribute have to live on the petals to be accessed by the shader? Also, just doing a stress test: if I put all my albedos in a folder, e.g. albedo_1, albedo_2, albedo_3, I can read them in as a sequence, but that just changes which texture is called per frame. How do I set it up so that the petals, with their ids, pick up the textures from the folder based on their id? Sorry for the trouble; I'll send over a small scene so you can check this out if you need it. Thanks! Justin
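    For reference, here is roughly how I'm creating the ids -- a minimal sketch, assuming the petals are separate connected pieces (attribute and file names are illustrative):

      // Primitive Wrangle after a Connectivity SOP (class on primitives):
      // give every petal an id plus a string the texture path can key on.
      i@id = i@class;
      s@albedo_name = sprintf("albedo_%d", i@id + 1);   // matches albedo_1..albedo_3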