Everything posted by Thomas Helzle

  1. Toms "Learning Houdini" Gallery

    "The Tree" Another R&D image from the above VR project: The idea for the VR-experience was triggered by a TV-show on how trees communicate with each other in a forest through their roots, through the air and with the help of fungi in the soil, how they actually "feed" their young and sometimes their elderly brethren, how they warn each other of bugs and other adversaries (for instance acacia trees warn each other of giraffes and then produce stuff giraffes don't like in their leaves...) and how they are actually able to do things like produce substances that attract animals that feed on the bugs that irritate them. They even seem to "scream" when they are thirsty... (I strongly recommend this (german) book: https://www.amazon.de/Das-geheime-Leben-Bäume-kommunizieren/dp/3453280679/ref=sr_1_1?ie=UTF8&qid=1529064057&sr=8-1&keywords=wie+bäume+kommunizieren ) It's really unbelievable how little we know about these beings. So we were looking to create a forest in an abstract style (pseudo-real game-engine stuff somehow doesn't really cut it IMO) that was reminiscent of something like a three dimensional painting through which you could walk. In the centre of the room, there was a real tree trunk that you were able to touch. This trunk was also scanned in and formed the basis of the central tree in the VR forest. Originally the idea was, that you would touch the tree (hands were tracked with a Leap Motion controller) and this would "load up" the touched area and the tree would start to become transparent and alive and you would be able to look inside and see the veins that transport all that information and distribute the minerals, sugar and water the plant needs. From there the energy and information would flow out to the other trees in the forest, "activate" them too and show how the "Wood Wide Web" connected everything. 
Also, your hands touching the tree would get loaded up as well and you would be able to send that energy through the air (like the pheromones the trees use) and "activate" the trees it touched. For this, I created trees and roots etc. in a style like the above picture where all the "strokes" were lines. This worked really great as an NPR style since the strokes were there in space and not just painted on top of some 3D geometry. Since Unity does not really import lines, Sascha from Invisible Room created a Json exporter for Houdini and a Json Importer for unity to get the lines and their attributes across. In Unity, he then created the polyline geometry on the fly by extrusion, using the Houdini generated attributes for colour, thickness etc. To keep the point count down, I developed an optimiser in Houdini that would reduce the geometry as much as possible, remove very short lines etc. In Unity, one important thing was, to find a way to antialias the lines which initially flickered like crazy - Sascha did a great job there and the image became really calm and stable. I also created plants, hands, rocks etc. in a fitting style. The team at Invisible Room took over from there and did the Unity part. The final result was shown with a Vive Pro with attached Leap Motion Controller fed by a backpack-computer. I was rather adverse to VR before this project, but I now think that it actually is possible to create very calm, beautiful and intimate experiences with it that have the power to really touch people on a personal level. Interesting times :-) Cheers, Tom
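    The actual exporter and its JSON schema aren't shown, but the pipeline described above (serialize polylines with their attributes, and drop very short lines before export) can be sketched roughly like this - the attribute names (`P`, `Cd`, `width`) and the JSON layout are illustrative assumptions, not the real Invisible Room format:

```python
import json
import math

def drop_short_lines(polylines, min_length):
    """Optimiser step: remove polylines whose total arc length
    falls below a threshold, to keep the point count down."""
    def arc_length(line):
        return sum(math.dist(a["P"], b["P"]) for a, b in zip(line, line[1:]))
    return [line for line in polylines if arc_length(line) >= min_length]

def serialize_polylines(polylines):
    """Serialize polylines (lists of per-point dicts) to a JSON string
    that an engine-side importer could extrude into geometry."""
    data = {"lines": []}
    for line in polylines:
        data["lines"].append({
            "positions": [p["P"] for p in line],   # world-space point positions
            "colors":    [p["Cd"] for p in line],  # per-point colour attribute
            "widths":    [p["width"] for p in line],  # per-point stroke width
        })
    return json.dumps(data)
```

    A usage sketch: run `drop_short_lines` over the stroke geometry first, then hand the surviving lines to `serialize_polylines` and write the string to disk for the Unity importer.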
  2. Toms "Learning Houdini" Gallery

    Over the last couple of weeks I started to learn Houdini, partly from tutorials by Entagma and others, partly by porting things I had built in Grasshopper for Rhino (which is single-threaded and rather slow for larger assets) over into the Houdini world.

    This one started out from the Entagma tutorial on differential growth, but I then first extended it to avoid the text areas and today implemented a first version of a threading solution to make it look more like yarn:

    This one is similar to an older test I once did in Softimage XSI and Arnold, but this time with Houdini, exported as FBX and rendered with my favourite renderer, Thea Render:

    And the last one for today was a study on how one can visualise noise in Houdini. It uses a solver and the Trails node. I found it hard to make it look really subtle in Mantra, so in the end I created poly wires and also exported it to Thea Render via FBX:

    Thanks everybody on Odforce for knowledge and inspiration!!! Cheers, Tom
  3. Differential curve growth

    Yeah, just put some very small random offset into the system before the solver so that the plane isn't perfectly flat anymore. IIRC that was enough to make the system leave the plane. Cheers, Tom
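    The idea of breaking the flatness with a tiny random offset before the solver runs can be sketched outside Houdini as well - a minimal illustration (the 0.001 amplitude is just an example value, and in Houdini you would do this in a point wrangle instead):

```python
import random

def jitter_points(points, amplitude=0.001, seed=0):
    """Offset each point by a tiny random vector so a flat plane is no
    longer perfectly planar and a growth solver can leave the plane."""
    rng = random.Random(seed)  # seeded so the jitter is repeatable
    return [
        [c + rng.uniform(-amplitude, amplitude) for c in p]
        for p in points
    ]
```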
  4. Controlling Focus Distance

    Thank you symek and no hurry at all! Redshift can use the Mantra camera parameters (which is what I tried) so that shouldn't be the issue. I also tried to create a Z-Depth AOV in Redshift to no avail. I never used python scripting in Houdini so am a bit out of my depth... :-) Thanks and Cheers, Tom
  5. Controlling Focus Distance

    I totally lost track of this but now finally tried it out - works great, but only for Mantra. With Redshift I get the Python error: So I guess Redshift (2.5.62) doesn't provide the needed handles (Houdini 16.5.378). I'll probably have to wait for their render viewer to provide this natively. Cheers, Tom
  6. Controlling Focus Distance

    // Camera focus distance to a null called "focus":
    vlength(vtorigin(".","../focus"))

    Works for me in the same version you use. Did you parent the null to something so that it creates a loop? I just use a camera and a null at the obj top level.
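    What the expression evaluates to is just the Euclidean distance between the camera's and the null's world-space origins - written out as plain code (the positions here are made-up example values, not anything from a real scene):

```python
import math

def focus_distance(cam_pos, null_pos):
    """Distance between two world-space positions - the value that
    vlength(vtorigin(".", "../focus")) yields in the camera's
    focus parameter."""
    return math.dist(cam_pos, null_pos)
```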
  7. Toms "Learning Houdini" Gallery

    "Brushed" I'm currently working on my first VR-Art-project with "Invisible Room" in Berlin and was exploring 3D-brushes from - in this case Tiltbrush - and Quill in our R&D. Here the result was exported as Alembic, procedurally coloured in Houdini and rendered with Redshift over a scanned-in rice-paper-texture: Interesting times... :-) Cheers, Tom
  8. Toms "Learning Houdini" Gallery

    "Contours" Experimenting with contours on a terrain mesh. Rendered in Redshift and post-processed in Lightroom: And with different post in Luminar 2018: Cheers, Tom
  9. Toms "Learning Houdini" Gallery

    "Lantern" A test image for organic spatial subdivision rendered in Redshift: Cheers, Tom
  10. Houdini 17 Wishlist

    I see. I guess I work too much with organic shapes where this is actually what I want... :-) Cheers and sorry for the noise, Tom
  11. Houdini 17 Wishlist

    I usually just put a resample node before my wrangle and let it create curveu... Although this works best when actually resampling.
  12. Toms "Learning Houdini" Gallery

    "Shells of Light" My second piece with extreme DOF in Redshift. The basis is a shell made of wires. There are no lights other than a HDRI environment in the scene and no post effects are used other than color correction in Luminar2018 & Photoshop. All the structures are the result of the very shallow depth of field. Rendered at 10000 pixels square for high res printing. Took about 9 hours with a GTX 980 TI and a GTX 1080 TI at 32768 samples per pixel max. Prints: https://fineartamerica.com/featured/shells-of-light-thomas-helzle.html
  13. Fabric Engine no longer being developed

    Weird...
  14. Too much contrast, how to tweak indirect lighting

    @AntoineSfx Check Diffuse Depth in the Mantra ROP. Great Link @Skybar. Has anybody converted those LUTs to something Houdini/Redshift can use directly? I'll have to dig deeper into this I think. :-) Thanks and cheers, Thom
  15. I don't think the topology changes at all in the case of the SideFX ident, just the form. It could be a mixture of deforming noises running through the structure or other parameters being animated. I didn't look at your scene yet, but you could always project your end result (high resolution) back onto your first steps with a Ray node and also transfer the normals from the low-res mesh, so that the rendering looks similar or the same, and then morph between those projected states - simple, since they have the same topology. Cheers, Tom
  16. Too much contrast, how to tweak indirect lighting

    It looks like you may need more GI bounces, but it's also a normal problem in real photography: if your camera captures the sun on the floor and makes it look well balanced (like in your image), the room around it will be too dark. And if you balance out the room, the sun will be too bright. The human eye (= how you remember it) is much more flexible there and makes it look "good" automagically. CG != "Reality". Tonemapping can help in this situation too. Cheers, Tom
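    Tonemapping here means compressing the huge dynamic range between the sunlit patch and the dark room into the displayable range. As a hedged illustration (a generic Reinhard-style curve, not any specific Mantra or Redshift feature), note how bright values get squashed far more than dark ones:

```python
def reinhard(luminance):
    """Simple Reinhard tone-mapping curve: maps unbounded HDR
    luminance into [0, 1), compressing highlights much more
    strongly than shadows."""
    return luminance / (1.0 + luminance)
```

    A sunlit pixel at luminance 100 and a shadow pixel at 0.1 differ by a factor of 1000 on input, but only by about a factor of 11 after the curve - which is why the room no longer drowns in black next to the sun patch.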
  17. 16.5 edgetransport

    Well, in this specific case I think it's easier to just enable "curveu" in the resample node and use that instead of @distance, but for more complex cases I also look forward to using this node a lot! Cheers, Tom
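    For reference, curveu is just the arc length along the curve normalized to the 0..1 range - a minimal sketch of computing it per point, assuming an open polyline given as a list of positions:

```python
import math

def curveu(points):
    """Normalized arc-length parameter: 0.0 at the first point,
    1.0 at the last - what the Resample SOP writes per point
    when its 'curveu' attribute is enabled."""
    cum = [0.0]  # cumulative distance along the polyline
    for a, b in zip(points, points[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1]
    if total == 0.0:
        return [0.0 for _ in points]  # degenerate: all points coincide
    return [d / total for d in cum]
```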
  18. Fabric Engine no longer being developed

    XSI was my main software for about 10 years and I actually bought Shake from the dead Apple, so I am fully aware of this. ;-) And no, I did not enjoy either. Cheers, Tom
  19. Fabric Engine no longer being developed

    I find this kind of thing really sad each time it happens. No matter whether they went bust or got bought out, it always leaves a crater of frustration with everybody who was using the software and invested time and money into it. So in the future, people will be less willing to trust a new and small company, which makes the already rather mono-ish culture in the field even stronger. Especially if it was bought by AD, it would be a really sad day: another hopeful contender with a fresh approach getting sunk into that black hole. It would feel like that old carcass Maya getting another injection of fresh blood from a youth... I keep my fingers crossed that it is something at least halfway non-necromantic... ;-) And that everybody involved is well... Cheers, Tom
  20. 16.5 drums

    Good stuff! "Quick marks are now saved with the scene file." Hallelujah :-) I may even use them now... No changes to the code editor though it seems... :-( Well, there's always Houdini 17 to hope for a decent editor... Still the Nr. 1 feature I'm missing each and every minute I use Houdini... Funny how devs can be blind to something they use probably 24/7... Cheers, Tom
  21. I use the same and it works:

    # Redshift
    PATH = "C:/ProgramData/Redshift/bin;$PATH"
    HOUDINI_PATH = "C:/ProgramData/Redshift/Plugins/Houdini/16.0.705;&"

    Check if you have the Redshift toolbar. Cheers, Tom
  22. Renderfarms renderers and plugins

    I used FoxRenderfarm a while ago: http://www.foxrenderfarm.com/ They are one of the few who support Houdini and GPU rendering with Redshift. Not in Europe though - China/Shenzhen. There is no plugin for Houdini, only Max/Maya. I wrote down some of my experiences here: https://www.redshift3d.com/forums/viewthread/14391/ They are actually looking for feedback and took even my critique very positively. Cheers, Tom
  23. Houdini 16.5 Sneak Peek

    A built in code editor with tooltips for command syntax and some other things to make VEX more fun. :-)
  24. Houdini 16.5 Sneak Peek

    Looks all awesome. Let's hope the VEX editor finally saw some love too - still the biggest flaw in the whole application... ;-) Funny to see a pretty much 1:1 copy of Lulu xXX's work with the flow field, for instance: check out his other work for inspiration - great stuff! :-) "The planned release date for Houdini 16.5 is November 7, 2017." Nice, Christmas comes early this year :-) Cheers, Tom
  25. Toms "Learning Houdini" Gallery

    "Feathers" Working on building a feather-designing tool from scratch... Rendered in Redshift. Cheers, Tom