Leaderboard


Popular Content

Showing most liked content on 02/02/2016 in all areas

  1. 3 points
    Hi, I ported Hable's tonemapping algorithm, which is popular in video games now (Uncharted, the Unity engine), as a VEX COP node. It has only one setting: exposure. It's extremely simple to use, yet it preserves saturation nicely and leaves non-burnt-out areas untouched. The result is... safe. If I needed a more dynamic image, I'd correct it later with other nodes. To use it, create a VOP COP2 Filter node in the img (compositing) context, then open it and pass the R,G,B input through the Hable Tonemapping node. EDIT: Added an additional tonemapping algorithm, ACES, used in Unreal Engine from 4.8 onwards. ACES gives more pronounced dark tones (even compared to the original photo!) and less desaturated colors than Hable. The attached pictures compare Hable, the original HDR photo, and ACES. Both are equally simple to use. Oskar hable_tonemap.hda aces_tonemap.hda
  2. 2 points
    for pbr, i tend to go with high pixel samples and no extra ray samples -- ie, 9x9 pixels, 1 min ray, no variance aa. stochastic transparency i usually try to keep lowish (1 if i can, but maybe up to 4-8 if needed). the theory being the primary samples are more important than the secondary samples for volumes. for normal smoke, i usually use the "billowy smoke" shader. it's also worth trying micropoly vs pbr. sometimes one is faster than the other... haven't really found a consistent winner. likely depends a lot on what's in your scene.
  3. 2 points
    This time something that might even be useful, a tonemapping COP. I've recently worked on game tonemapping, and thought why not implement some of the operators in Houdini. This time written in VEX. Operators I implemented are from Reinhard, Insomniac Games and John Hable. Some of the parameters can be a bit opaque, so I implemented a simple preview of the curve overlaid on the image. A preview video. I guess I should do this in Nuke too.. tonemap_example.hip ee_tonemap.otl EDIT: a version patched by @fsimerey to work with newer houdini versions is at
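    For reference, the simplest of the operators mentioned, Reinhard's curve extended with a white point, can be sketched in a few lines of VEX. This is a generic sketch of the published formula, not the code inside ee_tonemap.otl; the _exposure/_R/_G/_B bindings follow the convention of the VEX snippet further down this page and are assumptions here:

    // Extended Reinhard operator (Reinhard et al. 2002):
    // out = x * (1 + x / white^2) / (1 + x), where "white" is the
    // smallest input value that maps to pure white.
    vector reinhard(vector x; float white)
    {
        return x * (1.0 + x / (white * white)) / (1.0 + x);
    }

    vector inputColor = set(_R, _G, _B);
    vector tonemapped = reinhard(_exposure * inputColor, 4.0);
    assign(_R, _G, _B, tonemapped);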
  4. 1 point
    hi guys, I have made a video presentation about chairs and tables. In this video I explain how I used this tool to import the objects into Unreal Engine 4, and how I approach procedural UVs. Please give me some feedback or any ideas to improve the tool so I can improve myself. Thanks in Advance
  5. 1 point
    Hi Strob! The Vosilob asset uses surface tension based on curvature decimation: you compute the curvature and gradient of your surface field and use the two fields to push and pull in convex and concave zones, so the particle field tends to form drops and take on nice shapes. I'm using a different approach: I combine the curvature decimation technique with additional custom forces and a PBD-based approach, so you get nice drops, tendrils, uniform distribution, and more volume conservation control. About a "fill hole" approach, I don't use a direct particle filling method, mainly because of the volume gain; it works in some scenarios but in others it is not so practical, and the whole particle field tends to grow and thicken. So for the hole-filling problem I'm using a field-based approach with some force, frame and pressure conditionals to control the volume gain and thickness, but this is still WIP, I'm just developing this in my free time. I hope this helps! Alejandro microTension_v1.5.avi
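    To illustrate the push/pull idea in isolation (a minimal sketch, not Alejandro's actual setup): a POP Wrangle can approximate curvature from an SDF and apply a tension force along the gradient. The volume name "surface" and the parameter names are assumptions:

    // POP Wrangle sketch: push particles along the SDF gradient, scaled
    // by an approximate curvature, so convex regions get pulled in and
    // concave regions pushed out. Assumes an SDF volume named "surface"
    // wired into the second input (input index 1).
    float eps = chf("eps");            // finite-difference step size
    float strength = chf("strength");  // tension force scale
    vector g = normalize(volumegradient(1, "surface", @P));
    // approximate mean curvature as the divergence of the normalized gradient
    float div = 0;
    vector axes[] = { {1,0,0}, {0,1,0}, {0,0,1} };
    foreach (vector a; axes) {
        vector gp = normalize(volumegradient(1, "surface", @P + a * eps));
        vector gm = normalize(volumegradient(1, "surface", @P - a * eps));
        div += dot(gp - gm, a);
    }
    float curvature = div / (2.0 * eps);
    v@force -= strength * curvature * g;  // pull in convex, push out concave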
  6. 1 point
    when you say fluids, you mean volumes still, yeah? not like meshed surfaces... i don't tend to use deep shadows a whole lot. i just bite the bullet and do raytraced. maybe that's why i end up needing to drop the secondary rays down so far... if you don't have a lot of primary rays, then stochastic transparency should probably be higher. figure your overall number of samples in your volume will be (at most) your pixel samples times your stochastic samples. so 3x3 with 4 stochastic samples is 9x4 or 36 voxel samples at most. which might equate to an unacceptable amount of noise depending on the nature of your volume. if you have 1/9 variance sampling, each of those 36 primary ray intersections will fire off multiple secondary rays for shadow calculations (if it's ray tracing shadows) -- up toward the upper value of that range. which seems to be overkill for lighting when the bigger problem is your alpha. 9x9 with 1 stochastic sample is 81 samples. and if i don't do variance sampling, each of those samples sends out a single ray when it comes to shadow calculations, so there's a better primary-ray-to-secondary-ray ratio. at least, that's my working theory.
  7. 1 point
    Hi, I am not sure if it will be sufficient for your case, but you can also use packed primitives. Quick and simple. See the example file. Juraj prim_id_or_group_id__to_follow_point_id_motion_packed_prims.hipnc
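    As a rough illustration of the packed-prim idea (a generic sketch, not necessarily what the attached file does): a packed primitive has a single point, so snapping that point to a matching template point moves the whole piece. The "id" match attribute here is a hypothetical choice:

    // Primitive Wrangle sketch: input 0 = packed pieces, input 1 = moving
    // template points, matched via a (hypothetical) id attribute.
    int pt = findattribval(1, "point", "id", i@id);
    if (pt >= 0) {
        vector target = point(1, "P", pt);
        // a packed prim has one point; moving it moves the piece
        setpointattrib(0, "P", primpoint(0, @primnum, 0), target, "set");
    }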
  8. 1 point
    VEX code:

    // Hable (Uncharted 2) Tonemapping
    //
    // Adapted from code by John Hable
    // http://filmicgames.com/archives/75
    vector hableTonemap(vector x)
    {
        float hA = 0.15;
        float hB = 0.50;
        float hC = 0.10;
        float hD = 0.20;
        float hE = 0.02;
        float hF = 0.30;
        return ((x*(hA*x+hC*hB)+hD*hE) / (x*(hA*x+hB)+hD*hF)) - hE/hF;
    }

    vector inputColor = set(_R, _G, _B);
    vector tonemapped = hableTonemap(_exposure * inputColor);
    float hW = 11.2;  // linear white point
    vector whiteScale = 1.0f / hableTonemap(hW);
    tonemapped = tonemapped * whiteScale;
    assign(_R, _G, _B, tonemapped);

    // ACES Filmic Tone Mapping Curve
    //
    // Adapted from code by Krzysztof Narkowicz
    // https://knarkowicz.wordpress.com/2016/01/06/aces-filmic-tone-mapping-curve/
    vector ACESFilm( vector x )
    {
        float tA = 2.51f;
        float tB = 0.03f;
        float tC = 2.43f;
        float tD = 0.59f;
        float tE = 0.14f;
        return clamp((x*(tA*x+tB))/(x*(tC*x+tD)+tE), 0.0, 1.0);
    }

    vector tonemapped = ACESFilm(set(_R, _G, _B) * _exposure);
    assign(_R, _G, _B, tonemapped);
  9. 1 point
    well i am no mantra expert and this might be a bad idea ... but i tend to keep the same practice as in Mental Ray by keeping my pixel samples very low, like 2x2 / 3x3, and just pushing the min ray samples up to balance this. - deep shadow - micro PBR - no stochastic - very low pixel samples - min ray samples pushed up a little - is generally what i use when i have very little time, but it might be a little old school for your needs ...
  10. 1 point
    Hello Guys, i finally finished my video tutorial on volume rendering using houdini and arnold. Please check out the intro on vimeo: https://vimeo.com/153745362 Still preview: http://sjvfx.com/arnold/preview_hd.jpeg Get it now: Volume Rendering using Houdini and Arnold Note: i have included scene/vdb files for each of the examples. i hope you guys like it. Thanks Saber
  11. 1 point
    Hope you don't mind, Karl, I coded a version of this in vex after checking out your file. Slightly different from yours, but using the same principle. I put two options in there: one to do even divisions between particles, and another to use a step size for divisions, so you always get a somewhat even distribution of new points. Match this to your flip particle step size, and I think that will provide optimized results. I also approximate normals on the point cloud and provide a random spread along the surface tangents, which can further help fill in the gaps. There's an option in there to try and detect isolated particles and delete them, but that will slow things down a lot. HIP and OTL attached. splashExample.hip pc_gapfiller.hda
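    The step-size variant boils down to something like this Point Wrangle (a sketch of the principle, not the attached OTL's exact code; the parameter names are made up):

    // Run over the particle cloud: for each nearby neighbour, seed new
    // points along the connecting segment at a fixed step, so the gaps
    // between particles get filled evenly.
    float radius = chf("search_radius");  // neighbour search distance
    float step   = chf("step_size");      // match your flip particle step size
    int pts[] = nearpoints(0, @P, radius, 8);
    foreach (int pt; pts) {
        if (pt <= @ptnum) continue;       // visit each pair only once
        vector p2 = point(0, "P", pt);
        float d = distance(@P, p2);
        for (float t = step; t < d; t += step)
            addpoint(0, lerp(@P, p2, t / d));
    }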
  12. 1 point
    Works here. But again, I'd suggest you try and stop using the point sop, and use other things instead (colour sop, normal sop, wrangle sop, point vop).
  13. 1 point
    Something completely different for a change. Greenpeace had a video projection show here in Helsinki, footage of the arctic projected onto the local cathedral. I helped a little bit by adding an "ice" effect on top of the footage that made it look as if the front parts of the cathedral were made of ice. Of course there was a bit of trickery involved, so it has a place here in the Lab. The editing guys wanted the freedom to change the edit as late as possible, and rendering refractive stuff can be sloooowww, hence the trickery: First I took some photos and did a fast model in Agisoft Photoscan. I did a very simple model of the front parts in Houdini, and placed a large quad behind it - about where the wall of the cathedral would be. I set the quad to be emissive, with R going 0..1 according to u and G going 0..1 according to v. The pillars were refractive, so every pixel in the final render pass was of the color signifying the uv-coordinates of the location where the refraction ray hit the quad. In essence it's just a distortion uv-map. I had a photo from the future location of the projector, so I eyeballed the model and camera to match the real view. In COPs I then performed, with VOPs, the per-pixel uv-mapping operation needed to comp the ice on top of the footage. It was real fast, about a second a frame for 1080p, quite a bit snappier than it would've been to actually render each frame.. (short mp4) The problem with this approach, well one of them, is that every pixel only picks up one color sample from the background, so no soft refraction or multiple refractions/reflections on top of each other. And no antialiasing. To mitigate this I rendered the uv texture in 4k and did the compositing step in 4k as well, so I got at least a bit of antialiasing going. It was still fast. I also did a simple sss diffuse render that I blended a little into the final frame. Here's some footage of the final thing I found on Youtube: ice_ice_cathedral_v011.hip
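    The comp step itself is just one indirection per pixel. Sketched in shading-context VEX (the file names are hypothetical, and the actual setup was built with COP VOPs rather than this exact code):

    // Per-pixel uv remap: the distortion pass stores in R,G the uv of the
    // point on the background quad that the refraction ray hit, so each
    // output pixel samples the footage at that coordinate.
    vector distort = colormap("$HIP/tex/distortion_pass.exr", s, t);
    vector ice = colormap("$HIP/tex/footage.exr", distort.x, distort.y);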
  14. 1 point
    Hi all, Whilst looking around at papers on PSD and MWE I came across all sorts of info on RBFs (radial basis functions), since these are used to calculate the weights for the PSD. I also came across a way of using them to do feature morphing between topologically different meshes, using a method called thin plate spline mapping. Anyhoo I've been experimenting and put together this basic sop that implements it. Basically the idea is that you mark feature points on two different meshes, and this sop deforms the input mesh to match (approximately) the second mesh using the transformed feature points. In the papers they generally use it to morph different heads and then do weight transfer from the morphed head onto the new unmorphed one. The transferred weights are capture weights, so the head can then be animated without having to manually set the weights up for it. The applications of this are quite varied though, so I thought I'd share it. I have one problem with it at the moment: it works, but after a few clicks in the hip file it will crash with a memory error. So if anyone can see what I've done wrong that would be great. It's a very simple piece of code, I must have just missed an initialisation or something, but I can't see it. I'll look at it again later and see if I can spot it with fresh eyes. The attached just uses a linear basis but I've commented the bit you need to change to try different types of interpolation. RBFmorpher.zip enjoy!
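    For anyone new to RBFs: the deformation has the form f(x) = Σᵢ wᵢ φ(‖x − cᵢ‖), where the cᵢ are the source feature points and the weights wᵢ are solved (one linear system per output component) so each feature lands on its target. Once the weights exist, evaluation is trivial; here is a Point Wrangle sketch with the linear basis φ(r) = r the post mentions (the weight attribute name is hypothetical, and the solve is assumed to have happened upstream):

    // Deform input 0 using RBF feature points on input 1. Each feature
    // point carries a precomputed weight vector in v@w (hypothetical);
    // the basis is linear, phi(r) = r.
    vector offset = 0;
    int n = npoints(1);
    for (int i = 0; i < n; i++) {
        vector c = point(1, "P", i);   // source feature point
        vector w = point(1, "w", i);   // precomputed RBF weight
        offset += w * distance(@P, c); // phi(r) = r
    }
    @P += offset;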