Leaderboard


Popular Content

Showing most liked content on 04/06/2017 in all areas

  1. 2 points
    You have to set $HOUDINI_PATH yourself as a system environment variable. It does not get created automatically; by default Houdini will only reference your $HOME directory. In either case you will need to manually create the scripts/python directory within the respective directory. An example would be $HOUDINI_PATH = z:/projects/1234_asuscommercial/projectResources/houdini/ and then, within that directory, add /scripts/python.
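    As a minimal sketch of what that looks like on disk (reusing the example path above; the module name is hypothetical, and the trailing & is what keeps Houdini's default search paths intact):

        # houdini.env (or a system environment variable set the same way)
        HOUDINI_PATH = "z:/projects/1234_asuscommercial/projectResources/houdini;&"

        # matching directory layout
        z:/projects/1234_asuscommercial/projectResources/houdini/
            scripts/
                python/
                    my_tools.py    # hypothetical module, importable once Houdini starts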
  2. 1 point
    Submit the RFE to SideFX: https://www.sidefx.com/bugs/submit/ You just need a basic account on SideFX, and if you have already downloaded the software then you have one. The forums are only useful for general public discourse. SideFX only officially and actively reviews bugs and RFEs in their database; even the Wishlist is just a weird form of group ranting and self-therapy, lol.
  3. 1 point
    Have you tried driving the outer-level controls with a relative reference expression to the inner controls? Find the inner node parameter, right-click and copy the parameter. Return to the outer-level parameter, right-click and paste a relative reference. The field should turn green. You can now change the value at either level and the two values will remain synced.
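    The same link can be made in Python if you prefer; a minimal sketch, assuming hypothetical node and parameter names:

        import hou

        # hypothetical paths: an outer control driven by an inner control
        outer = hou.parm("/obj/rig/outer_ctrl/tx")
        # ch() with a relative path is the same kind of expression the paste action creates
        outer.setExpression('ch("../inner_ctrl/tx")')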
  4. 1 point
  5. 1 point
    Hello Everyone, This training is an update to the Tea and Cookies training. It covers fairly similar topics, such as modeling, shading, lighting and rendering. The primary difference is that instead of Mantra, the training focuses on using third-party render engines, namely Redshift, Octane and Arnold. The modeling part of the training covers a variety of techniques, ranging from basic poly modeling, VDB and fluid simulations to POP grains, to build the scene. The shading and lighting part primarily focuses on building all the various shaders required for the scene using a variety of procedural textures and bitmaps. The training will also cover SSS, displacement and building fluid shaders using absorption. We will also build relatively detailed metal and plastic shaders. Trailer: for further details kindly click on the link given below http://www.rohandalvi.net/dessert/
  6. 1 point
    Mainly I just wanted to create a training using the other renderers available for Houdini. That was the primary reason. The other reason was speed. Literally every major object in the scene has either SSS or absorption, or both. Mantra is never happy with either of those things. I'm not saying it doesn't give the results; it just gives them really, really slowly. Which is why, for the initial bit, I focused on the GPU renderers. I still might make a version for Mantra when I start working on the Arnold version. I just haven't decided yet. I guess you'll find out in May.
  7. 1 point
    Being able to put the display flag on the new network dots, please.
  8. 1 point
    Hello magicians, I tried to replicate a cool pattern picture I saw today, as a way to practice. I couldn't get good results; this was my first try: And then this one: When I translated it to a face it got even worse. Any tips on how to achieve this kind of effect? Hip attached, sorry it's kinda messy. Thanks! honeycomb_pattern.hip
  9. 1 point
    Ditto to what Atom said. In general I put the python scripts in the "\scripts\python" folder of any of the HOUDINI_PATH entries, or other environment-variable-based python redirects. I can't say it's 100% standard, but this has been common for me at many studios. Plus, for any directory you place inside the python folder for organization's sake, make sure to have an __init__.py script in there. Generally the only python scripts I have at the "\scripts\" level are the standard Houdini-based overrides, i.e. 123.py, 456.py, hescape.py.
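    For example, a package layout along these lines (names hypothetical) keeps subdirectories importable:

        <HOUDINI_PATH>/scripts/python/
            my_tools/              # hypothetical package directory
                __init__.py        # required so Python treats the folder as a package
                render_utils.py    # import as: from my_tools import render_utils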
  10. 1 point
    If you want the code to travel with the HIP file, you can paste or type your code into the Python Source Editor. If your code is more tool-like, you can make your own button on your own shelf and paste or type your code into the Script tab. More often I will simply drop down a Python node at the root or inside a SOP, depending upon what the code needs to do. One advantage to placing code inside its own Python node is that you now have a named container that you can force-cook, if necessary. Any time I use Python to drive animation, I have found I have to force-cook the Python node on export in order to advance the animation through time.
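    Forcing that cook can itself be scripted; a minimal sketch, assuming a hypothetical node path:

        import hou

        # hypothetical path to the Python SOP driving the animation
        py_node = hou.node("/obj/geo1/python1")
        py_node.cook(force=True)  # re-cook even if Houdini considers it clean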
  11. 1 point
    I got it working! In order to apply a pivot to a matrix you need to invert the original matrix before multiplying it by the pivot. After that you will need to invert it again and add the pivot xyz to the translation of the matrix. ApplyTransPivWorking.hip
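    As a rough, untested Python sketch of that recipe (node path and pivot values are hypothetical; the attached hip is the real reference):

        import hou

        xform = hou.node("/obj/geo1").worldTransform()   # original matrix
        pivot = hou.Vector3(1.0, 2.0, 3.0)               # pivot position
        piv_m = hou.hmath.buildTranslate(pivot)

        m = (xform.inverted() * piv_m).inverted()        # invert, multiply by pivot, invert again
        t = m.extractTranslates() + pivot                # add the pivot xyz back...
        for i in range(3):
            m.setAt(3, i, t[i])                          # ...onto the translation row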
  12. 1 point
  13. 1 point
    Here's another hacky way to do this, abusing a rest attribute to restore point positions after scaling by inverted pscale and resampling. Minimal code, at least... --Dave resample_withRest_DS.hiplc
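    The rest-attribute half of that trick could look roughly like this in a Python SOP (a sketch of the idea, not Dave's actual file; it assumes a pscale point attribute exists):

        import hou

        # store each point's position in a rest attribute, then scale P by
        # inverted pscale; a later Resample SOP interpolates rest, so a final
        # step can copy rest back into P to restore the original shape
        node = hou.pwd()
        geo = node.geometry()
        rest = geo.addAttrib(hou.attribType.Point, "rest", (0.0, 0.0, 0.0))
        for pt in geo.points():
            p = pt.position()
            pt.setAttribValue(rest, (p[0], p[1], p[2]))
            s = pt.attribValue("pscale")
            if s != 0:
                pt.setPosition(p * (1.0 / s))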
  14. 1 point
    Exactly what Mark said. The extra PCI Express lanes are also going to be useful for high-performance storage clusters with NVMe arrays, or machines with tons of drives running ZFS. All around it's going to be a very useful platform in CGI production if the pricing is competitive.
  15. 1 point
    "Proper" simulation of a tornado: http://news.wisc.edu/a-scientist-and-a-supercomputer-re-create-a-tornado/
  16. 1 point
    Sorry for being such a noob... I've been checking the scene out (so cool!). I try and fail at introducing a modeling iteration. I'd like to introduce a deformation / morph / model process to first create a broader folding form (approx. frames 5-20) before the more uniform crumple deformation happens. Here, a ripple. It seems to have no effect. It's before the DOP Import node in the cloth item, so I thought it would influence what the DOP Network has to work with. A ripple option in the DOP network itself only creates a new mesh. How / where do you introduce a modeling iteration? Is it because I have keys on the effect? I find no option to have the DOP respect animation or such. paper_crumple_v02dm.hipnc
  17. 1 point
    Curvature computation on grid data like volumes is fairly easy. Just compute the Hessian and extract the curvature information you need. hth. petz vol_curv.hipnc
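    For reference, one common formulation (sign conventions vary): treating the volume as a level-set function \phi, the mean curvature falls out of the gradient and the Hessian H directly:

        \kappa \;=\; \nabla \cdot \frac{\nabla\phi}{|\nabla\phi|}
               \;=\; \frac{|\nabla\phi|^{2}\,\operatorname{tr}(H) \;-\; \nabla\phi^{T} H \,\nabla\phi}{|\nabla\phi|^{3}}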
  18. 1 point
    OK, after a day and a half of fighting I finally succeeded in installing Houdini 15 HQueue on Windows 10. I have assembled a detailed PDF of the whole process. It was supposed to be for SESI support, but I finally got it to work. Luke, the service trick doesn't work on Windows 10; see my .pdf for more detail. I must confess that setting up HQueue is really a huge PITA. Cheers, E EDIT: I have updated the file to simplify it and add some corrections. hqueue_windows_10.pdf
  19. 1 point
    Also: ScreenToGif http://screentogif.codeplex.com/ Similar to LICEcap, with additional options: progress bar, text overlay, frame editor, free-hand drawing, ... Cool, but Windows only.
  20. 1 point
    LICEcap. It's boss. Windows and OSX, and Linux via Wine. http://www.cockos.com/licecap/
  21. 1 point
    Methods to Stir Up the Leading Velocity Pressure Front

    We need to disturb that leading velocity pressure front to start the swirls and eddies prior to the fireball. That, and have a noisy, interesting emitter.

    Interesting Emitters and Environments

    I don't think that a perfect sphere exploding into a perfect vacuum with no wind or other disturbance exists, except in software. Some things to try are to pump some wind-like swirls into the container to add some large forces to shape the sim later on as it rises. The source by default already has noise on it by design. This does help break down the effect, but the Explosion and Fireball presets have so much divergence that very quickly it turns into a glowing smooth ball. But it doesn't hurt. It certainly does control the direction of the explosion.

    Directly Affecting the Pressure Front - Add Colliders with Particles

    One clever way is to surround the exploding object with colliders: points set large enough to force the leading velocity field to wind through and cause the nice swirls. There are several clever ways to proceduralize this. The easiest way is with the Fluid Source SOP: manipulate the Edge Location and Out Feather Length, scatter points in there, then run the Collide With tool on the points. Using colliders to cut up the velocity over the first few frames can work quite well. This will try to kick the leading pressure velocity wave about and hopefully cause nice swirling and eddies as the explosion blows through the colliders. I've seen presentations where smoke dust walls flow along the ground through invisible tube colliders, just to encourage the swirling of the smoke. You can also advect points through the leading velocity field and use these as vorticles to swirl the velocity about. The one nice thing about using geometry to shape and control the look is that as you increase the resolution of the sim, it has a tendency to keep its look intact, at least in the bulk motion. As an aside, you could add the collision field to the resize container list (density and vel) to make sure the colliders are always there, if it makes sense to do so. Colliders work well when you have vortex confinement enabled. You can use this, but confinement has a tendency to shred the sim as it progresses. You can keyframe confinement and boost it over the first few frames to try and get some swirls and eddies to form.

    Pile On The Turbulence

    Another way to add a lot of character to that initial velocity front is to add heaping loads of turbulence to counter the effect of the disturbance field. You can add as many Gas Turbulence DOPs to the velocity shaping input of the Pyro Solver as it takes to do the job. Usually the built-in turbulence is set up to give you nice behaviour as the fireball progresses. Add another, net-new one and set it up to only affect the velocity for those first few frames - manufacturing the turbulence, in this case. In essence it is no different from using collision geometry, except that it doesn't have the regulating effect that geometry has in controlling the look of the explosion, fireball, flames or smoke. As with the shredding, turbulence has its own visualization field, so you can see where it is being applied. Again, the problem is that you need a control field or the resize container will go to full size, but if it works, great. Or use both colliders and turbulence pumped in for the first few frames and resize on the colliders. Up to you. But you could provide some initial geometry in /obj and resize on that object if you need to.
    Hope this helps...
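    As a rough Python sketch of that keyframed boost-then-fade (the dopnet path, node name and the parameter chosen are all hypothetical; the DOP wiring itself is easier done by hand):

        import hou

        # hypothetical: an extra Gas Turbulence DOP already wired into the
        # pyro solver's velocity shaping input, named "extra_turb"
        scale = hou.parm("/obj/dopnet1/extra_turb/scale")

        # boost turbulence over the first few frames, then fade it out
        for frame, value in ((1, 8.0), (5, 8.0), (10, 0.0)):
            key = hou.Keyframe()
            key.setFrame(frame)
            key.setValue(value)
            scale.setKeyframe(key)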
  22. 1 point
    I've used it to make little 3D sculptures - more as a hobby, though - but the VDB stuff and adaptive meshing are great for this, especially because the shapes I am interested in are more organic. If I wasn't working full time I would make more of these; they are a lot of fun. Half as a joke I tried using it as an autotrader for the stock market... but that was just stupid (yet fun). That was mainly because the data (from Yahoo Finance) can be easily imported and processed with CHOPs or a SOP Solver and VOPs. There are actual node-based high-frequency trading platforms out there that look very much like VOPs.
  23. 1 point
    After checking out the "classic" Lagoa teaser video, I started thinking whether something like the crumbly stuff in the beginning could be done with variable (high) viscosity FLIP fluids. Well, this falls short of the Lagoa stuff, but it's an interesting look anyway, I think. (click for anim) It's quite simple really: I just init the per-point viscosities with a VOP SOP noise inside a SOP Solver, behind an Intermittent Solve DOP set to run "Only Once". Hip attached for inquiring minds. variable_viscosity_v005.hip
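    A minimal Python stand-in for that init step (a sketch, not the attached setup: the VOP noise is replaced by a cheap hash-style pseudo-noise, and the assumption here is that the FLIP solver reads a "viscosity" point attribute when variable viscosity is enabled):

        import math
        import hou

        node = hou.pwd()  # a Python SOP on the FLIP particles
        geo = node.geometry()
        visc = geo.addAttrib(hou.attribType.Point, "viscosity", 0.0)

        for pt in geo.points():
            p = pt.position()
            # positional pseudo-noise standing in for the VOP noise
            n = (math.sin(p[0] * 12.9898 + p[1] * 78.233 + p[2] * 37.719) + 1.0) * 0.5
            pt.setAttribValue(visc, 1000.0 * n)  # scale into a high-viscosity range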
  24. 1 point
    An adventure into generative art. I ran into Multi-Scale Turing Patterns and thought it would be fun to try and do that for volumes. It was satisfying to be able to do it with volume SOPs and VOPs without needing to write any code. Viewport flipbooks [mov] [mov] Renders [mov] [mov] I also tried putting a point light inside the volume and rendering with scattering, and got this nice happy accident: Most of the scatter tests looked like crap, but I hope I'll get a cool animation rendered soon.
  25. 1 point
    Wow, this is turning into a watercooling thread. Anyway, this is my setup for adding some details to an explosion (or to some volcano smoke): volcano_001.zip Quite simple, as Peter explained. Just think about hot ashes in volcano smoke, with cold air around. Those hot ashes make the air expand (positive divergence). That gives you the basic expansion and rolling motion, which you can break up with vorticles. Let me know if you have any questions. Have fun.
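    A minimal sketch of that "hot ashes" idea (attribute name and value are assumptions; the zip is the real reference): give the source points a positive divergence attribute for the smoke solver to source into its divergence field:

        import hou

        node = hou.pwd()  # a Python SOP on the emission points
        geo = node.geometry()
        div = geo.addAttrib(hou.attribType.Point, "divergence", 0.0)

        for pt in geo.points():
            # hotter ash -> more expansion; a constant here, noise in practice
            pt.setAttribValue(div, 0.5)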
  26. 1 point
    All right. Particles!

    This tale starts last summer, when I, like many, thought I'd like to render a lot of particles. Like many, my first idea was to instance a bunch of particles to each simulated particle. This, not surprisingly, led to something resembling a lot of fuzzy cotton balls. Thinking of how to break up the cottonballsiness, I ended up creating a looping 1000-frame bgeo sequence of twenty or so particles gyrating around the origin (cos is your friend). This sequence was then instanced to each particle, with pseudorandom rotation. Also, the particles had 30 different time offsets for the loop. Still the cotton balls were there. (couldn't find frames from this phase) In retrospect I probably should have spent more time tweaking this instancing approach, as it seems to have worked for many in actual production.

    I started thinking about ways to create new particles between the existing ones, and to do it procedurally without simulation, so that a random frame can be calculated. Every scheme I came up with was lacking; usually it wouldn't have been temporally coherent. So I dropped the requirement of not needing to know the previous frame. Simulation it will be, then. Also by this time I thought I'd up the ante: I would learn CUDA, write the simulation bit myself, and render a BILLION particles. Good luck.

    It wasn't too bad. The method I ended up with was simple: the new particles get their velocity from nearby particles and move according to that. Think of it as the old particles being some sort of attractors, or just as v attribute transfer. Just transferring P would've been "safer", but the result would be more boring. With v there would be some new emergent behaviour. Transferring force/acceleration would be even more exciting, but more risky too. This is one of the first successful outputs, looking promising. The red balls are the input seed particles. And then the first renders - now I knew the approach really worked. different alphas

    To be able to go as high as possible I kept the data per particle as small as possible, and at this point I also converted the CUDA code to work in buckets, and to 64 bits. Getting the buckets to work resulted in some wild results on the way. Now, armed with a more linear algorithm, I started cranking up the point count. With this version I made my way up to 115 million, and this render is probably the best I've come up with yet. I love how it has some nice new detail that does not exist in the seed sim.

    So, at this point my executable read a .geo sequence with 1 million points in it, did its resim magic and spat out a .geo with 115 million particles, which I would then render out in Mantra. The resim here took perhaps 15-20 mins per frame on a G260, and writing the resulting 4.5-gig ascii geo (uggh!) took 20 minutes. Mantra mem usage here was getting close to 10 gigs (the machine has 12), so I knew I had to change the angle to get higher. Still one order of magnitude to go, after all! I was hoping that Mantra would throw stuff out of memory when no longer needed (no raytrace). If I wrote the pointcloud out in buckets, it might be a lot easier for Mantra, memwise. That I did, and also finally rewrote the load/save to use the binary format. Save times dropped 10x. After having a discussion with Hoknamahn on the subject (thanks dude), I understood that converting stuff to i3d for rendering might be an option too.

    Enter the current beast. Now it reads a 10-million-particle seed sim .bgeo sequence and spits out: a) ~7k .bgeo buckets with 400 million particles total, b) an ifd fragment that loads in the above buckets as DLAs, c) a script that converts all the pointcloud buckets into i3d's via i3dgen.exe, d) the resulting 7k i3d's, and e) a similar ifd fragment for the volumes. Phew. Ugly. Currently my code needs all the particles to be in CPU memory, so 400 mil is about the highest I can get to with 12 gigs. When loaded into Mantra, the ifd fragment then assembles these volumes into a ~30^3 cube (empty buckets are not saved) with correct bounding boxes. The combined resolution of the i3d's is ~1800^3.

    So far so good, but now I hit a bit of a wall. My assumption about Mantra throwing out no-longer-needed DLAs seems to be wrong, or at least I haven't been able to coax it into doing that. So I haven't been able to really render this 400-million dataset yet; only without shadows does it reach the end. *sniffle* Ideas, anyone? I haven't tested with the particle buckets yet, here's to hoping. My idea-bag isn't completely empty yet, though. Doing this in passes/layers would of course be easier, but I want to do it all at once. Chasing after big numbers and "rendering a billion particles" might sound like a childish goal, especially as I have no production need for it, but I've learned a LOT during this - and will likely learn still more along the way. Particle diary will continue... higher peaks are still waiting. eetu.
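    The velocity-transfer core is simple enough to sketch outside CUDA; a toy Python/numpy version of the idea (not eetu's code - cKDTree stands in for the bucketed neighbour search):

        import numpy as np
        from scipy.spatial import cKDTree

        def resim_step(seed_P, seed_v, new_P, dt, k=8):
            # move the dense "new" particles by velocity borrowed from their
            # k nearest seed particles, inverse-distance weighted
            tree = cKDTree(seed_P)
            dist, idx = tree.query(new_P, k=k)
            w = 1.0 / (dist + 1e-6)                 # avoid division by zero
            w /= w.sum(axis=1, keepdims=True)       # normalize the weights
            v = (seed_v[idx] * w[..., None]).sum(axis=1)
            return new_P + v * dt                   # simple Euler step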