Everything posted by tortoise

  1. My easy fix for this exact issue, which I've been dealing with for a long time as well: take the asset that sends your job to HQueue and open its Type Properties. Under "Save" there is a checkbox called "Unlock New Nodes on Creation"; check that. It sort of defeats the purpose of an HDA (normally the nodes are saved only in the asset and just referenced by your project file), because it saves the nodes inside the HDA into the project file. However, it still acts as a tool you can just plop down and use pre-configured. And judging by my look at the asset, there are only a couple of nodes inside, so saving them into your scene each time shouldn't be a big deal. Hope that helps!
  2. Hello! I'm currently working on a shot where a rocket takes off. I've gotten the setup working quite well for the start and for the look of the spreading fluid; now I need the rocket to take off further. This introduced the problem that my container becomes gigantic, because the bottom of the container spreads horizontally while the trail is just narrow. That's why I dug into instancing over the last few days and got quite good results. The issue I'm failing at is transferring values between my clusters. I take the smoke object from the current iteration, add 1 and subtract 1 to get the adjacent smoke objects, and merge them. I took the setup from the incredibly generous Florian Bard (http://flooz-vfx.com/ - it's the rabbit_trail.hip file where he does that), but when trying to apply it to my particular case I couldn't get it to work. I tried various ways of importing the other smoke objects, but apparently it's not fetching them. This is an image of my latest version (v67) showing the cut. Attached are also some dailies demonstrating the issue. It seemed to be fine before I introduced dissipation in my sim, but v67 (pyro_daily_v67_[1001-1105].mp4) now has this very apparent cut where the smoke objects meet. (The look I'm going for is v38 (pyro_daily_v38_[0980-1160].mp4) - that was before the sim needed to be bigger and I switched to instancing.) That's the basic gist of what I do with the clusters: import the current one, merge with the next, merge with the previous, then smooth out based on voxel proximity to the border (a rough sketch of that border blend is below). Here is the HIP as well, if taking a peek could help: GoFullApolloDreizehn_In_v243o.hip [Also, for some reason my smoke object clusters start at 2. In my scene I have only two, and they are named smokeobject1_2 and _3, which I can't quite understand.] Thank you in advance, Martin
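    For context, the border smoothing I'm describing is essentially a volume wrangle along these lines (a rough sketch only; the channel name and the input wiring are assumptions, not the exact nodes from my scene):

    ```c
    // Volume wrangle over this cluster's density, with the merged
    // neighbouring clusters' density wired into input 1.
    // Near the container walls, blend toward the neighbours' values so the
    // seam between clusters is hidden.

    float blendwidth = chf("blendwidth");      // assumed channel, e.g. 0.1

    // relative position inside this cluster's bounding box (0..1 per axis)
    vector rel = relbbox(0, @P);

    // distance to the nearest bounding-box wall on any axis
    float edge = min(min(rel.x, 1 - rel.x),
                     min(rel.y, 1 - rel.y),
                     min(rel.z, 1 - rel.z));

    // 1 right at the wall, fading to 0 once we are blendwidth inside
    float blend = 1 - smooth(0, blendwidth, edge);

    // what the adjacent clusters have at the same world position
    float other = volumesample(1, "density", @P);

    // cross-fade so both sides of the border agree
    @density = lerp(@density, max(@density, other), blend);
    ```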
  3. Transfer between pyro clusters

    Huh. Well, I did think about it, but now that you say it in a structured manner, it becomes REALLY apparent that I should have been doing that all along, hahaha. Thank you! I suppose I wanted to try clustering for once. Well, initially I planned for this setup to also be used for a second shot where the rocket is already in space and flies past the camera. Since the rocket ideally doesn't fly in a straight line (comic-esque style), it would then definitely be useful to use clustering, but then I'd run into the issue again where the transition becomes apparent when the source is in between clusters. That would either take a lot of overlap between the clusters or a setup that transfers velocities. I just feel there is very little missing for this setup to work, no? Some expression to be tweaked slightly or nodes to be hooked up in a different order. For now I've just run out of motivation to keep trying.
  4. Hello, I recently went through Steven Knipping's tutorial on Rigid Bodies (Volume 2), and now I have a nice scene with a building blowing up. The simulation runs off low-res geometry, and Knipping then uses instancing to replace these parts with high-res geometry that has detailing in the break areas. The low-res geometry has about 500K points as packed disk primitives, so it runs pretty fast in the editor. It also gets ported to Redshift in a reasonable amount of time (~10 seconds). However, when I switch to the high-res geometry, as you might guess, the 10 seconds turn into around 4 minutes, with a crash rate of about 30% due to memory. When I unpacked it, I think it was around 40M points, which I can understand are slow to write/read in the Houdini-Redshift bridge, but is there no way to make Redshift use the instancing and the packed disk primitives? My theory is that RS unpacks all of that, and that's why it takes forever, because when I unpack it beforehand it works somewhat faster - at least for the low-res. The high-res just crashes. I probably don't understand how Redshift works and have the wrong expectations; it would be nice if someone could give me an explanation. Attached you'll find an archive with the scene, with one frame (frame 18) of the simulation included, as well as the folder with the saved instances of low- and high-res geometry. Thanks a lot for your help, Martin (Here's the pic of the low-res geometry; the high-res is basically every piece bricked down into ~80 times more polygons.) Martin_RBDdestr_RSproblem_archive.zip
  5. Wow. You are a genius for figuring this out - thank you very much! I definitely have to try this out soon.
  6. Hello, I've been using Houdini for close to two years now and in that time have gotten fairly accustomed to the way the program behaves. However, one thing that is still hard to get used to is the way the parameter syntax is set up with regard to expressions. What I mean in particular is that you need backticks for expressions in string parameters, but not in any others. (For example, $F*0.1 evaluates directly in a float parameter, but in a string parameter the same expression only evaluates if it is wrapped in backticks as `$F*0.1`.) I know there is an explanation for this, but as an artist it confuses me as soon as I haven't written an expression for two days, because it just doesn't stick. Purely in terms of intuitiveness and learnability, I find that this adds unnecessary confusion. I think tweaking this somehow would make all of Houdini just a tad nicer to work with, particularly for those starting out with the program, as I know exactly how frustrating this was when I started out. I just think it would make sense for expressions entered with backticks to work in float parameters as well. I realize this is a tiny, tiny issue that I'm ranting about here, but I just had to let off some steam, because I just sat here for 20 minutes wondering why my expression wouldn't work. Cheers, Martin
  7. Rant about parameter expression syntax

    Already did - and it's filed. Just wanted to hear other people's opinions. Ohh! That is a very good explanation. Thank you for that! Well, it's both that I'm talking about. Because AFAIK in string params you need to put Python in backticks as well? And the point I was making was that it is a bitch to get used to when starting out. By now I don't really have an issue with it anymore, except the traumatizing memory of starting out with Houdini and failing to get expressions to work.
  8. Simple Noiser

    Heyya! Over the past couple of days I've been building and extending this, at its core very simple, noise generator tool. It's called, incredibly intuitively: Noiser. I've gotten quite sick of always building the same simple noise VOP over and over again, so I built this nifty tool that saves me a small but accumulating amount of time (and energy) every time I need some noise. I'm really fond of it, and it rarely takes more than a couple of minutes into any project before I drop it down. Here's a quick video demonstration: https://vimeo.com/271007816 and a simple demo screen of the defaults: And that's the relatively simple setup. I hope someone else will find it as useful as I do! Cheers, Martin PS: I just found a volume-noise tool in my OTLs folder, so I thought I'd share this as well. It's practically the same thing, working for both SDFs and volumes (VDB & primitive). noiser.hda vol_noise.hda
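    Under the hood it is not much more than the usual noise wrangle, something like this (a minimal sketch with made-up channel names, not the exact contents of the HDA):

    ```c
    // Point wrangle version of the standard "displace along normals" noise.
    // freq / offset / amp stand in for the HDA's parameters.

    vector freq   = chv("freq");     // noise frequency
    vector offset = chv("offset");   // noise offset (animate for movement)
    float  amp    = chf("amp");      // amplitude

    // Perlin noise, recentred around 0
    float n = noise(@P * freq + offset) - 0.5;

    // push the points along their normals; for the volume/SDF variant you
    // would add n * amp to the sampled field value instead
    @P += normalize(@N) * n * amp;
    ```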
  9. Hello fellow odforce people, at the moment I'm working on my bachelor thesis, which I'm writing about dynamic simulations for film in Houdini. In order to demonstrate these effects, I'm producing two shots of a short film that I've wanted to do for a long time, and for the environment of this short film I decided to do some procedural modelling in Houdini. Since the Houdini community is so extremely generous, I decided to take part myself and share my entire project file. It's slightly messy, but I hope it's useful.
    * I started out in Cinema 4D, where I did some basic cloner modelling to get all the basic shapes right.
    * Then I moved into Houdini, converted everything to an SDF and cut out dynamically generated holes. (Scattered loads of spheres into a VDB that I generated from the SDF, displaced the spheres, made an SDF from that and "difference-d" it away from the main mesh.)
    * Having converted the object back to polygons, I ran the mesh through Instant Meshes to get a clean quad mesh. (I love this program almost as much as Houdini.)
    * For UV mapping and memory-efficient exporting I got rid of all the faces that are not visible from the inside (more than half of them). I wrote a little script that sends rays from every point of the mesh to the 'god points' in the center (because godrays, haha... ha...), and when any of these rays gets through, the point is marked as visible (green). Then I get rid of all the invisible points (red). A sketch of that script is below.
    * I unwrapped the mesh using Auto-Unwrap from the game dev toolkit and exported it (as well as the high-res version from before) to Substance Painter, where I baked all the maps (normals, AO, etc.) from the high-res mesh onto the low-res mesh.
    * In Painter I just smashed a couple of smart materials on there that I found on Substance Share, scaled up the UV tiling (because it's a huge mesh) and was pretty much done with texturing.
    The render is a screenshot from Painter's Iray integration. (When I get everything together in Cinema and render with Octane, I'm going to step the lighting up a notch, of course.) Here are a couple of pics: And attached you'll find the project file as well as the base OBJ I started out with from Cinema (and one remesh to work with in case you don't have Instant Meshes set up). To make viewing easier, I turned on a switch that slices out only a small part of the arena for editing and increased the voxel size, so that loading only takes 15 seconds instead of half an hour. Cheers, Martin ARENAWRANGLE_TOODFORCE.zip
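    The visibility script mentioned above boils down to something like this point wrangle (a sketch of the idea, not the exact script from the file; the attribute name and the 0.01 offset are assumptions):

    ```c
    // Point wrangle over the mesh points (input 0), with the "god points"
    // near the arena center wired into input 1. A point counts as visible
    // if at least one god point can be reached without the mesh in the way.

    i@visible = 0;

    int ngod = npoints(1);
    for (int i = 0; i < ngod; i++)
    {
        vector target = point(1, "P", i);
        vector dir    = target - @P;

        // start slightly off the surface so the ray does not hit
        // the point's own face immediately
        vector start = @P + normalize(dir) * 0.01;

        vector hitpos, hituv;
        // intersect() limits the ray to the length of dir, so a miss
        // means nothing blocks the way to this god point
        if (intersect(0, start, dir, hitpos, hituv) < 0)
        {
            i@visible = 1;
            break;
        }
    }
    ```

    After that, deleting everything with visible == 0 (with a Blast, for example) leaves only the geometry you can actually see from the inside.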
  10. To be honest, I gave up on it, as I didn't feel it actually paid off to spend this time setting it up - probably because porting to the live view still took about a minute, even with the instances. I was pretty close, though. I think all that was missing was a deeper understanding of prim intrinsics, of how the sim worked originally, and of how Redshift handles its instances (where the center is when saving and where it ends up when re-loaded), because that is hardly documented, as far as I could find out. (Also, I unfortunately lost the saves, so that cut my motivation to go on as well.)
  11. Baby steps. I tinkered with the instances a lot and figured out that the instancing, if applied to the packed sim data, projects an instance onto every unpacked point. I fixed that by copying a point to every instance (keeping the orient on the copy-to node) and transferring the needed attributes. My next issue, which you identified very precisely, Atom, was that the saved instances are saved at the center. So I had to transform the instances' center to the origin (when saving out the proxies) for the instances to appear where they were intended. My last and not yet solved issue is that the rotation is not fully applied to the instances. It's weird, but I feel like I'm getting there. I can post some pics and the hip tomorrow. At this point I think I'm just looking for a way to get the intrinsic rotation of the sim's pieces onto the points that I clone the instances onto. I'd be very happy about any suggestions.
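    (For reference, a minimal sketch of what pulling the intrinsic rotation onto the instance points could look like, assuming one instance point per packed piece in matching order; the attribute names are mine, not the scene's:)

    ```c
    // Point wrangle over the instance points (input 0), with the packed
    // sim pieces wired into input 1.

    int piece = @ptnum;   // assumes point order matches the packed prims

    // 3x3 "transform" intrinsic of the packed piece = rotation (+ scale)
    matrix3 xform = primintrinsic(1, "transform", piece);

    // orientation the copy / instancing step can pick up
    p@orient = quaternion(xform);

    // the packed piece's point carries its translation
    v@P = point(1, "P", piece);

    // pivot intrinsic, useful when the proxies were saved centred at the origin
    v@pivot = primintrinsic(1, "pivot", piece);
    ```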
  12. Alright, so I tried out your approach and it seems to almost work. The thing that's missing now is the transformation of the original pieces. It looks like this now: Now I wonder whether Redshift stores the transformation values/matrices for the instances in a different manner than Houdini does? That's the code I used from Steven Knipping's original setup to transfer the Pivot and Transform from the original points to the instances. Any more ideas?
  13. Oh wow, that's the perfect video for my issue! Thanks so much. I haven't gotten to it yet, but it looks like it has to work. I raise my hat to you, Atom!
  14. Few questions: Redshift on H16.5/OSX

    I can't give you definitive answers, but from what I've heard: 1) I don't think you need Indie to run the trial. 2) 2 GB should definitely be enough, especially because RS falls back to RAM when VRAM is used up. It takes a little more time, but it should work. 3) From installing my versions, I am pretty sure the RS installs need the exact version of Houdini they are made for. For example, I installed the 16.0.678 RS build with my 16.0.5xx version of Houdini and it wouldn't work until I matched the versions. I hope that helps, cheers.
  15. Redshift Volume Emission Problems

    Hello Atom, thank you so very much, I got it working now. Apparently all the tweaking in the world is useless if you ignore the fact that there are two more parameters with slightly weird names!
  16. Hello fellow scientists, I am currently working on a shot where I want to render out an indoor explosion over about 50 frames. Since I can't get my Mantra render times below 20 minutes (in FHD), I had hoped to render the volume more quickly in Redshift, because I have had good experiences with rendering just density in RS. The problem comes up as soon as I start to use emission with Redshift: I get very weird boxes of emission around the edges of my volume (they look like blocks of roughly 16x16x16 voxels, which reminds me of compression) - screen attached. Screenshot_50 has about the look I want to achieve, but with this weird halo. When I amp up the emission, the boxes become apparent. I'm not sure if I tried switching to VDB for rendering, but I'm not hopeful that would change much. Has anyone come across this issue before? Attached is also my hip file with the way I set up my render, and one cached frame so you don't have to sim again. (Including my approach to an indoor explosion, for anyone who might learn something from that - please do!) I would really appreciate any thoughts on this. (And on whether my decision to move away from Mantra in this case is actually remotely sensible - I'm really unsure.) Thanks, Martin Grenade_RnD_v20.hip
  17. Render ROPs in background

    Hello! I have a question concerning ROPs, because I can't seem to figure out how to render them "in the background", as you can with other write-to-disk SOP nodes or Mantra render nodes. I really like this option, as it lets me keep working in the same scene without having to open another window. My particular case is that I have a FLIP sim and want to render two caches out at the same time (one fluid cache, one surface), frame by frame, dependent on each other. If, as I expect, you say that there is no way to do this because of the dependencies, then I'm fine with that. I just wanted to ask whether I missed some of the usual Houdini black magic. Thanks!