

Popular Content

Showing most liked content since 07/01/2021 in all areas

  1. 5 points
    Here is another implementation of reaction-diffusion running across a mesh using OpenCL. reaction_diffusion_ocl.hiplc
  2. 5 points
    Hi Ezequiel, you could design these shapes with voronoi cells, too. shoe.hiplc
  3. 4 points
    @caskal, thanks! This is more or less a WIP, not really intended to be shown haha. Here are a few pieces I did procedurally in Houdini, starting from a single primitive, except the man... Where are you with your bot maker series, man? When I saw episode 2, I smiled; I had the same vibe from the Scott Robertson video. What are you up to at the moment? Are you still using this method? Did you finally post the file? There were some interesting ideas in the first method too, using POPs. Here, little by little, I'm finding my own way in Houdini. Here are a few examples on the subject of procedural modelling, something I quite enjoy even if I only get to do it on a few occasions in my job. People don't really believe in it, YET. formz_april2021A.mp4 ________________________________________________________________ Vincent Thomas (VFX and Art since 1998) Senior Env and Lighting artist & Houdini generalist & Creative Concepts http://fr.linkedin.com/in/vincentthomas
  4. 4 points
    Another similar approach using deltamush, distances and polycuts. helmet2.hiplc
  5. 3 points
    Whoaaaaa @vinyvince those look AMAZING! Reminds me of master Akira Saito's stuff, love it! @crispr_boi @vinyvince here are 2 setups I made on the robots: https://drive.google.com/file/d/12yuO8erudQjuRiMnjUVN4hGl8aBrDKOy/view https://drive.google.com/file/d/1YhByn2_jndpXJ3fNd9pgVo7Zkb_MSCtU/view?usp=sharing (this is older than the above) Regarding the status, I haven't played with it for a while; I've been burnt out with client stuff. This is the latest stuff I did (quite old xD): Keep pushing that bot setup, love it! Cheers
  6. 3 points
    Hi, thought I'd share this in this section too: I wrote an article for the German “Digital Production” magazine about my free LYNX VFX toolset. For the article I made a couple of renderings using the LYNX fabric tools, and luckily it even made the cover. Here are my personal favorites; the rest of the images can be found on ArtStation. You can also find the complete scene on GitHub under the Demo Files. So now anyone can design an ugly Christmas sweater ;) Looking forward to seeing what you guys come up with, enjoy! Links: LYNX VFX Toolset Odforce Thread: https://forums.odforce.net/topic/42741-lynx-free-opensource-vfx-pipeline-tools/ LYNX VFX Toolset (sweater scene file included): https://github.com/LucaScheller/VFX-LYNX Artstation (high-res renderings): https://www.artstation.com/artwork/OyeY6g Digital Production Magazin: https://www.digitalproduction.com/ausgabe/digital-production-01-2020/ Alternatively view the article in my latest blog post: https://www.lucascheller.de/vfx/2019/12/15/ynybp7wpiqtshoy/
  7. 3 points
    It is. Just transfer a bunch of curves from tubes, grids and lines with pscale and velocity attributes into a volume. Reshape VDB will blend all shapes eventually. dinosaur.hiplc
  8. 2 points
    Okay, here's what I understand, and I have some questions. On different datasets, you have a string attribute named "type" with different values. Say you import once: the tool auto-populates B F C D. You now have a multiparm (or similar) that lets you control the color and whatnot as you want. But now you import again and get F X A S. F is still there, but X, A and S are new. What do you expect to happen then? The easiest approach is to wipe the entire UI on each new import and start from scratch, but this means that if different datasets share the same value, you'll have to re-enter the parameters manually. Here is a mockup where importing a new dataset resets all parms and populates them with default ones: dynamic_HDA_parms_mockup.hipnc
    Basically, a button reads the unique values of the attribute "type" (this info was fetched from within VEX into the "types" detail attribute) and sets the multiparm to the number of unique values in the attribute:

    node = hou.pwd()
    geo = hou.node("./OUT_script").geometry()

    # After the attribute wrangle fetching the unique values of the type attribute
    types = geo.stringListAttribValue("types")

    node.parm("multiparm").set(0)           # Reset multiparm
    node.parm("multiparm").set(len(types))

    # Set the Name parameters
    for i, type in enumerate(types):
        parm = hou.parm("name_" + str(i + 1))  # parm numbering starts at 1, not 0
        parm.set(type)

    A few clarifications and details: I input the data by wiring it in, but this can be adapted to an input from a file. You said your data was on the prims; that is doable as well. If you want the values of the parameters from previous datasets to remain, I see two options:
    1 - Instead of wiping the multiparm each time, go through the current parameters, check for any new ones, and create only the missing ones.
    2 - Keep a file somewhere on disk containing the relevant values for the parameters and, after resetting the multiparm, auto-fill the parameters from those values.
    I think this will be harder to do, though.
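Option 1 above (keep existing parameters and only create the missing ones) boils down to merging the old name-to-value mapping with the newly imported types. A minimal pure-Python sketch of that merge logic, with the HOM calls left out and a hypothetical default value:

```python
# Sketch of option 1: merge newly imported "type" values into the existing
# multiparm state instead of wiping it. Pure-Python stand-in for the HOM
# calls; the default color value here is hypothetical.

def merge_types(existing, new_types, default="#ffffff"):
    """existing: dict mapping type name -> parameter value.
    new_types: unique "type" values found in the newly imported dataset.
    Returns the merged mapping: old values survive, new ones get defaults."""
    merged = dict(existing)
    for t in new_types:
        if t not in merged:
            merged[t] = default  # only create the missing entries
    return merged

# First import produced B F C D and the user customized F;
# the second import contains F X A S.
state = {"B": "#ffffff", "F": "green", "C": "#ffffff", "D": "#ffffff"}
state = merge_types(state, ["F", "X", "A", "S"])
# "F" keeps its custom value; "X", "A" and "S" are added with defaults.
```

In the actual tool, a final loop over the merged mapping would set the multiparm count and fill the `name_*` parameters, as in the snippet above.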
  9. 2 points
    Made this a while ago, finally got around to uploading it to gumroad. https://gum.co/ofsD
  10. 2 points
    @konstantin magnus When you are calling the function on the button, use -1 or 1 to swap up/down (technically you can swap with a value at any offset, but for practical reasons...).

    #SwapValues(kwargs, hou.ParmTemplate.name a, hou.ParmTemplate.name b)
    def SwapValues(kwargs, a, b):
        #a, b are parameter names (not labels)
        node = kwargs["node"]
        pA = node.parm(a)
        pB = node.parm(b)
        if len(pA.keyframes()) == 0:
            #if both params have no keyframes
            if len(pB.keyframes()) == 0:
                valueSelf = pA.rawValue()
                valueOther = pB.rawValue()
                pA.set(valueOther)
                pB.set(valueSelf)
            #if A has no keyframes but B does
            else:
                valueSelf = pA.rawValue()
                valueOther = pB.keyframes()
                pA.setKeyframes(valueOther)
                pB.deleteAllKeyframes()
                pB.set(valueSelf)
        else:
            #if A has keyframes but B doesn't
            if len(pB.keyframes()) == 0:
                valueSelf = pA.keyframes()
                valueOther = pB.rawValue()
                pA.deleteAllKeyframes()
                pA.set(valueOther)
                pB.setKeyframes(valueSelf)
            #if both params have keyframes
            else:
                valueSelf = pA.keyframes()
                valueOther = pB.keyframes()
                pA.deleteAllKeyframes()
                pB.deleteAllKeyframes()
                pA.setKeyframes(valueOther)
                pB.setKeyframes(valueSelf)

    #GetParamNames(kwargs, hou.ParmTemplate mpBlock, int index, int swapIndex, int nestingDepth)
    def GetParamNames(kwargs, mpBlock, index, swapIndex, nestingDepth):
        node = kwargs["node"]
        for i in range(len(mpBlock)):
            #If the current parameter is of a valid type, check if it has channels
            if mpBlock[i].type() in (hou.parmTemplateType.Int, hou.parmTemplateType.Float,
                                     hou.parmTemplateType.String, hou.parmTemplateType.Toggle):
                #note that vector channels are suffixed after the multiparm index - "vector_#x" instead of "vector_x#"
                if mpBlock[i].numComponents() > 1:
                    for c in range(mpBlock[i].numComponents()):
                        if mpBlock[i].namingScheme() == hou.parmNamingScheme.XYZW:
                            if c == 0: vComponent = "x"
                            elif c == 1: vComponent = "y"
                            elif c == 2: vComponent = "z"
                            elif c == 3: vComponent = "w"
                        elif mpBlock[i].namingScheme() == hou.parmNamingScheme.RGBA:
                            if c == 0: vComponent = "r"
                            elif c == 1: vComponent = "g"
                            elif c == 2: vComponent = "b"
                            elif c == 3: vComponent = "a"
                        elif mpBlock[i].namingScheme() == hou.parmNamingScheme.UVW:
                            if c == 0: vComponent = "u"
                            elif c == 1: vComponent = "v"
                            elif c == 2: vComponent = "w"
                        pName = mpBlock[i].name().replace("#", "%s") % index + vComponent
                        pOthrName = mpBlock[i].name().replace("#", "%s") % (index + swapIndex) + vComponent
                        SwapValues(kwargs, pName, pOthrName)
                else:
                    pName = mpBlock[i].name().replace("#", "%s") % index
                    pOthrName = mpBlock[i].name().replace("#", "%s") % (index + swapIndex)
                    SwapValues(kwargs, pName, pOthrName)
            #if a folder is found, determine if it's a nested multiparm
            elif mpBlock[i].type() == hou.parmTemplateType.Folder:
                #if it is, compare the number of instances in each multiparm
                if mpBlock[i].folderType() == hou.folderType.MultiparmBlock:
                    getNMP = mpBlock[i].name().replace("#", "%s") % index
                    getOthrNMP = mpBlock[i].name().replace("#", "%s") % (index + swapIndex)
                    nmpInstances = node.parm(getNMP).evalAsInt()
                    nmpOthrInstances = node.parm(getOthrNMP).evalAsInt()
                    #If both multiparms have the same number of instances, swap nested parameter values
                    if nmpInstances == nmpOthrInstances:
                        for j in range(nmpInstances):
                            pA = node.parm(getNMP).parmTemplate().parmTemplates()[j-1].name().replace("#", "%s") % (index, j + 1)
                            pB = node.parm(getOthrNMP).parmTemplate().parmTemplates()[j-1].name().replace("#", "%s") % (index + swapIndex, j + 1)
                            SwapValues(kwargs, pA, pB)
                    #Otherwise, save values to a temporary holder
                    else:
                        tempA = list()
                        tempB = list()
                        for j in range(nmpInstances):
                            nestedParm = node.parm(getNMP).parmTemplate().parmTemplates()[j-1].name().replace("#", "%s") % (index, j + 1)
                            if len(node.parm(nestedParm).keyframes()) > 0:
                                tempA.append(node.parm(nestedParm).keyframes())
                            else:
                                tempA.append(node.parm(nestedParm).rawValue())
                        for j in range(nmpOthrInstances):
                            nestedParm = node.parm(getOthrNMP).parmTemplate().parmTemplates()[j-1].name().replace("#", "%s") % (index + swapIndex, j + 1)
                            if len(node.parm(nestedParm).keyframes()) > 0:
                                tempB.append(node.parm(nestedParm).keyframes())
                            else:
                                tempB.append(node.parm(nestedParm).rawValue())
                        #initialize number of multiparm blocks
                        SwapValues(kwargs, getNMP, getOthrNMP)
                        #and update each block from the temporary holders
                        for k in range(nmpOthrInstances):
                            pA = node.parm(getNMP).parmTemplate().parmTemplates()[k-1].name().replace("#", "%s") % (index, k + 1)
                            node.parm(pA).deleteAllKeyframes()
                            try:
                                node.parm(pA).set(tempB[k])
                            except:
                                node.parm(pA).setKeyframes(tempB[k])
                        for k in range(nmpInstances):
                            pB = node.parm(getOthrNMP).parmTemplate().parmTemplates()[k-1].name().replace("#", "%s") % (index + swapIndex, k + 1)
                            node.parm(pB).deleteAllKeyframes()
                            try:
                                node.parm(pB).set(tempA[k])
                            except:
                                node.parm(pB).setKeyframes(tempA[k])
                #if it's not a multiparm, dive inside and swap each nested parameter
                else:
                    GetParamNames(kwargs, mpBlock[i].parmTemplates(), index, swapIndex, 0)

    #Swap(kwargs, int targetSwapIndex)
    def Swap(kwargs, targetSwapIndex):
        node = kwargs["node"]
        button = kwargs["parm"]
        #Shorthand to access the index of a multiparm
        index = int(kwargs["script_multiparm_index"])
        #Raise error if parameter hierarchy is configured incorrectly
        if not button.tuple().isMultiParmInstance():
            raise hou.NodeWarning("Button is not inside a multiparm block.")
        #Get the parent multiparm folder
        mpFolder = button.tuple().parentMultiParm()
        #Count the number of multiparm instances -> raise errors if swapping is not allowed
        mpInstances = node.parm(mpFolder.name()).evalAsInt()
        #Raise errors if trying to swap up on the first block, or swap down on the last block
        if targetSwapIndex > 0:
            if index == mpInstances:
                raise hou.NodeWarning("No value below to swap with.")
        elif targetSwapIndex < 0:
            if index == 1:
                raise hou.NodeWarning("No value above to swap with.")
        #Get the other parameters inside this multiparm block so we can start swapping.
        mpBlock = node.parm(mpFolder.name()).parmTemplate().parmTemplates()
        GetParamNames(kwargs, mpBlock, index, targetSwapIndex, 0)
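The core move in SwapValues above, exchanging either raw values or keyframe lists depending on what each side holds, can be sketched without Houdini. FakeParm below is a hypothetical stand-in for hou.Parm, just to show the branching:

```python
# Pure-Python sketch of the SwapValues logic: each "parm" may hold either a
# raw value or a list of keyframes, and the swap must preserve whichever kind
# each side had. FakeParm is a stand-in for hou.Parm, not real HOM.

class FakeParm:
    def __init__(self, value=None, keyframes=None):
        self.value = value
        self.kf = keyframes or []
    def keyframes(self): return self.kf
    def rawValue(self): return self.value
    def set(self, v): self.value, self.kf = v, []
    def setKeyframes(self, kf): self.kf, self.value = list(kf), None
    def deleteAllKeyframes(self): self.kf = []

def swap(pA, pB):
    # Capture whichever data each side carries (keyframes win over raw value).
    a = pA.keyframes() or pA.rawValue()
    b = pB.keyframes() or pB.rawValue()
    for parm, data in ((pA, b), (pB, a)):
        parm.deleteAllKeyframes()
        if isinstance(data, list):
            parm.setKeyframes(data)  # other side had keyframes
        else:
            parm.set(data)           # other side had a plain value

pA = FakeParm(value=0.5)
pB = FakeParm(keyframes=[(1, 0.0), (24, 1.0)])
swap(pA, pB)
# pA now carries the keyframes, pB the raw value 0.5.
```

The real code branches four ways explicitly because hou.Parm.set() and hou.Parm.setKeyframes() are separate calls; the sketch folds that into one loop to make the invariant visible.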
  11. 2 points
    Gentlemen, just wanted to mention that this saved my ass. Ectomaniac, you are correct that there is very very little documentation about this very problem. I struggle with the whole "animated to sim" jump. Drives me crazy how many attributes and influences cannot be changed per frame to build a simulation.
  12. 2 points
    Pragmatic VEX is listed as part of the official VEX learning path! Thanks SideFX! https://www.sidefx.com/learn/vex
  13. 2 points
    Okay then just make a cht() function, SideFX :-) I’m lazy
  14. 2 points
  15. 2 points
    Hi emesse92, I have a different approach to this problem. I made some changes to your file:
    1. Changed the scale in the gas wrangle node inside your DOP network to 1, so that we are not deleting any density.
    2. Added a gas diffuse microsolver to your DOP network. Like Atom said, the density voxels are getting clamped, so basically it's an aliasing problem; I am assuming this is partly due to your velocity fields. One way of dealing with aliasing is to add a very small diffusion, much like adding a slight blur in Photoshop to fix aliasing in images. The same concept applies here, but in all three dimensions. This is exactly what the gas diffuse microsolver does: it pushes a small fraction of density onto the adjacent voxels. The diffusion has to be very small, say 2.5%, otherwise the simulation will look mushy. I did some tests on the file, and even at half your original resolution I was able to get rid of most of the sharp voxels; at the original resolution I expect the results to be a bit better. Experiment with the diffusion rate to find out what works for you. The only downside is that you might lose some detail in your simulation.
    3. Changed the mode under Volume Collision in the static object to Volume Sample and linked a precomputed SDF volume of the collision geometry into the Proxy Volume slot. Also changed the division method to By Size. For me, the previous setup was taking a lot of processing time before the actual simulation, so your sim should be even faster now. In my opinion, with the division method set to By Size it's easier to set the voxel size of the SDF collision geometry, and it also gives you the option to link this with your existing voxel size, essentially giving you more precise control. Refer back to your obj/COMPRESSOR_COLLIDER to see all the changes made.
    For some odd reason, the 'attribnoise' nodes don't work in my version of Houdini; I believe this is because I'm using a newer version. So if some nodes don't work for you, try to recreate the changes within your version of the file. The setup is pretty simple and you should have all the other nodes that I used. Hope this helps you out. Turbine7_Modified.hip
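The gas diffuse microsolver in step 2 essentially leaks a small fraction of each voxel's density to its neighbours. A 1D pure-Python illustration of that idea (not Houdini's actual solver) using the ~2.5% rate mentioned above:

```python
# 1D illustration of the gas-diffuse idea from step 2: each cell exchanges a
# small fraction of density with its neighbours, smoothing hard voxel edges.
# A conceptual stand-in for the microsolver, not Houdini code.

def diffuse(density, rate=0.025):
    """One explicit diffusion step at a ~2.5% rate, reflective boundaries."""
    n = len(density)
    out = density[:]
    for i in range(n):
        left = density[i - 1] if i > 0 else density[i]
        right = density[i + 1] if i < n - 1 else density[i]
        # Laplacian-style exchange with the two neighbours.
        out[i] += rate * (left + right - 2.0 * density[i])
    return out

field = [0.0, 0.0, 1.0, 0.0, 0.0]  # a single hard "voxel"
for _ in range(10):
    field = diffuse(field)
# The spike spreads slightly into its neighbours; total density is conserved.
```

Note that density is only moved around, never removed, which is why a small rate softens aliasing without thinning the smoke the way a delete-density wrangle does.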
  16. 2 points
    I think it's called "screen window X/Y" in Houdini: https://www.sidefx.com/docs/houdini/nodes/obj/cam.html
  17. 2 points
    Feature Request: can Houdini remember the last window size and location? I open Redshift and resize the Render View window, and when I close the window and open it again a few minutes later, it is back in the default location. Pop-up dialogs also seem to open in the most inconvenient places, like the top right, and minimised; I have to resize them, make a selection, etc., and the next time they are opened they are back in the default position. This would be a great little time saver.
  18. 2 points
    In addition to Atom's excellent suggestions, double-check your viewport display options: in the Texturing tab, make sure you're not limiting the 3D Texture display resolution, which can lead to blocky, low-res viewport visualization.
  19. 2 points
    Visible voxels mean you have pushed density to the maximum. It's like clipping audio: after passing the red line on the meter, all you hear is distortion. Try cutting your density in half inside a volume wrangle:
    @density *= 0.5;
    Alternatively, add more dissipation to remove density. Also run some test renders; sometimes the viewport is misleading.
  20. 2 points
    Simply put, n-gons and tris disrupt edge flow and can be a hassle to work with if you are applying subdivisions. Tris are somewhat forgivable because they resolve triangulation ambiguity (which sometimes causes normals-related issues with non-planar quads), but n-gons usually subdivide into a weird quad-pole due to the nature of the algorithm. There are methods for resolving such issues, such as adding supporting edge loops via insetting or bevels, but it's entirely feasible to simply not introduce the problems in the first place.
  21. 2 points
    Hi, I did a few R&D tests for distributed simulation, and you can do distributed simulation on a single workstation. For example, if you have 64 cores you can split it into 2-4 slices, something like 32 or 16 cores per slice. But you need a lot of memory. Here are some stats and tests: https://vimeo.com/380432448 In your case, 14 cores with 64GB of RAM is not a good option for single-workstation distributed simulation. Also, simulating on Linux can be a lot better than on Windows, and simulating something with FLIP is more about optimization to solve the problem than actual brute force. You can see what Pixomondo did on Midway: https://www.youtube.com/watch?v=kJ8Hz2Bjejc&t=10962s
  22. 2 points
    @Time is what you want. To answer your question, there is also @TimeInc, which is the time between frames or substeps. Its reciprocal, 1/@TimeInc, is equivalent to $FPS, so there you go. The key in the above statement is the time between "substeps": when working with foreach or simulation-type workflows, you should bias all your time-based attributes by @TimeInc. It protects you on those occasions when you deploy substepping.
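Since 1/@TimeInc equals $FPS (times the substep count when substepping), the biasing advice can be checked numerically. A plain-Python sketch with hypothetical values, no hou module involved:

```python
# Numeric sketch of the @TimeInc relationship described above.
# With no substepping, 1 / @TimeInc == $FPS; with substeps, @TimeInc shrinks,
# so any per-step accumulation should be scaled by it.

def time_inc(fps, substeps=1):
    return 1.0 / (fps * substeps)

fps = 24.0
assert 1.0 / time_inc(fps) == fps  # the reciprocal of @TimeInc is $FPS

# Biasing a time-based attribute: accumulate speed * @TimeInc per step,
# so the result is the same no matter how many substeps are used.
speed = 2.0  # units per second
total_1 = sum(speed * time_inc(fps, 1) for _ in range(24 * 1))  # 1 substep
total_4 = sum(speed * time_inc(fps, 4) for _ in range(24 * 4))  # 4 substeps
# Both cover one second of motion: 2.0 units.
```

An unbiased accumulation (adding a fixed amount per step) would quadruple when substeps go from 1 to 4, which is exactly the bug @TimeInc biasing protects against.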
  23. 1 point
    Swapping nested multiparms is definitely a recursive process, one that is on the verge of blowing my mind trying to think of how the nested parameter names are formatted. I've updated the snippet above to add the following features: - Can swap multiple (not nested multiparm) parameters at once - Can swap vectors, even ones with different naming schemes (XYZW, RGBA, UVW) - this means you can swap Color parameters - Can swap parameters within nested folders (still not nested multiparm) - Can swap keyframes, channel references, and expressions You may notice I have some unused code - was trying to figure out nested multiparms but I'm not quite there yet.
  24. 1 point
    Hi Mattia, here are a few examples of snapping or shifting values:
    1) Once you have made the value absolute with abs(), you can compare it to a threshold like 1e-1. Between -1 and 1, truncating will result in 0.
    if(abs(v@P.y) < 1e-1) {v@P.y = trunc(v@P.y);}
    2) With vectors, calculating the distance() returns a value that can be compared:
    if(distance(v@P, pos) < 1e-1) {v@P = pos;}
    3) For transitions (or a cheap wormhole effect), the smooth() function remaps values to between 0.0 and 1.0:
    float dist = distance(v@P, pos);
    float mask = 1.0 - smooth(0.0, thresh, dist, rolloff);
    v@P = lerp(v@P, pos, mask);
    zero.hiplc
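For reference, the falloff in example 3 can be reproduced outside VEX. The sketch below uses a standard smoothstep in place of VEX smooth() (ignoring its optional rolloff argument) on a 1D position:

```python
# Python sketch of example 3: a smoothstep falloff mask that pulls values
# toward a target. smoothstep() here mirrors VEX smooth() without the
# optional rolloff argument; the 1D position stands in for v@P.

def smoothstep(a, b, x):
    t = max(0.0, min(1.0, (x - a) / (b - a)))
    return t * t * (3.0 - 2.0 * t)

def snap_toward(p, target, thresh):
    dist = abs(p - target)             # 1D stand-in for distance(v@P, pos)
    mask = 1.0 - smoothstep(0.0, thresh, dist)
    return p + (target - p) * mask     # lerp(v@P, pos, mask)

# Values well inside the threshold snap onto the target,
# values outside it are left untouched.
near = snap_toward(0.01, 0.0, 1.0)
far = snap_toward(5.0, 0.0, 1.0)
```

The mask goes to 1 at the target and fades to 0 at the threshold, which is what gives the transition its soft edge instead of the hard cutoff of examples 1 and 2.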
  25. 1 point
    @Fenolis: Thank you again! This is a much needed feature extension imho!
  26. 1 point
    As there is nobody in the world who doesn't like fast render times, I was stumbling around looking for interesting things. I came across renderers like Eevee (Blender), but also Unreal. From my experience (not really that much experience), Eevee was kind of buggy here and there, although development is moving fast. But the Unreal Engine especially looks... unreal (really bad joke). At the moment I have to say the OpenGL renderer inside Houdini does the job pretty well most of the time, but it has its limitations. I can totally imagine creating a scene in Houdini, sending it over to Unreal, and rendering it from there. My intentions are mostly non-realtime stuff, like motion graphics, but getting basic knowledge about realtime applications would be a nice bonus. Are people actually doing that? Is the workflow easy/fast? Is there a "realtime" link between Unreal and Houdini? I can't really tell if there is content out there using this workflow; I only see some game-related content, but I'm more into non-linear animations. I am still first and foremost a beginner with Houdini, but very interested.
  27. 1 point
    Everyone owes a few beers to Konstantin. @konstantin magnus, you could stay drunk all your life and then some
  28. 1 point
    I'm searching for artists to join our studio for an upcoming project. It is a high-profile full-CGI TV show for teenagers, fantasy genre, 12x24 mins; production is planned for July 2021 - November 2022. The style is similar to the cinematics for League of Legends. We are looking for experienced lighting/shading artists, lookdev, compositors, and Houdini artists for effects. It is a really interesting project and great for a personal portfolio too. The project will start July/August 2021 and will last for about 18 months. We prefer in-house cooperation with relocation support, but remote cooperation is possible as well; the start date is based on mutual agreement. PFX is based in Prague, in the Czech Republic. It is a growing and very stable studio, currently with about 100 artists, a sophisticated pipeline, and a fun team to work with. From our internal survey, we are happy to say that 95% of us are proud to work for PFX and that there is a positive and open atmosphere, with friendly, honest, and trustworthy relationships. You can see our work at www.pfx.tv
  29. 1 point
    Hi, to filter it further you have to use attribsize(): https://www.sidefx.com/docs/houdini/vex/functions/attribsize.html
    From the docs: "Returns the size of an attribute's type. For a vector type, this is the number of components. For an integer, float, or string, this returns 1. For an array attribute, this returns the size of the tuples in the array. The tuple size is controlled by the Size parameter on the Attribute Create node."
    The best way to copy attributes is by using the Attribute Copy SOP; it's much faster as well.
  30. 1 point
    Hi emesse92, Is it possible to post your .hip file?
  31. 1 point
    Maybe you forgot to disable the transform node? Anyway, there is a difference of scale of ten between the two models, so you have to adapt some parameters. For the size you can use a Bound SOP (with padding at zero); middle-click it and you can see the size in the info. In the attached file you can compare the two versions. You can slightly adjust the isooffset in the VDB Convert, and in the VDB Advect I changed some values to speed up the result a bit. I don't really understand why, but it is better to increase the substeps rather than the timestep. Check the docs. VOL DISPLACEWORK F2.hip
  32. 1 point
    Basically you reconstruct Houdini inside Houdini!
  33. 1 point
    Here is the VEX version of the streaking texture procedure. It's pretty flexible now: Starting curves from uncovered areas to any angle, jumping over gaps, gradually changing target direction, measuring curve length for any point and of course texture mapping. Model taken from threedscans.com streaks_VEX_2.hipnc
  34. 1 point
    Many ways to do this. My take: put a Timeshift node after the File node, and add a foreach-points loop with the Copy to Points node's second input coming from the line inside the loop (look at the picture of the graph). Click Create Meta Import Node; when it creates the node, rename it to something shorter you can find later, for example "shift_iteration". Go to your Timeshift node and add a new integer parameter in the parameter interface (in my case I called it "shift"). Edit the Frame parameter on the Timeshift node: the default is $F, but you can pull in data from the for-loop by using the detail() function to read the "iteration" attribute from the meta import node (in this example, "shift_iteration"). You can check the detail function in the documentation, but it is very simple: detail("path_to_the_node", "attribute_name", index). Multiply this value by the offset parameter using the ch() function and subtract it from the current frame ($F). The full expression in your Timeshift Frame parameter will be: $F-detail("../shift_iteration", "iteration", 0)*ch("shift"). By changing the value of the shift parameter on the Timeshift node, you delay each copy's animation by the number of frames you define. If you want them to start at random times, you can either use a Sort node on the line to change the point numbers, or use another expression on the new parameter by feeding shift_iteration into the rand() function. Good luck
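The resulting Frame expression is just arithmetic, so it can be sanity-checked in plain Python with hypothetical frame and shift values:

```python
# Sketch of the Timeshift expression from the walkthrough:
#   $F - detail("../shift_iteration", "iteration", 0) * ch("shift")
# Each copy looks further back in time by (iteration * shift) frames,
# so consecutive copies play the animation progressively delayed.

def shifted_frame(frame, iteration, shift):
    return frame - iteration * shift

frame = 100  # hypothetical current frame ($F)
shift = 5    # frames of delay between consecutive copies (ch("shift"))
offsets = [shifted_frame(frame, i, shift) for i in range(4)]
# copy 0 reads frame 100, copy 1 reads 95, copy 2 reads 90, copy 3 reads 85
```

Replacing `iteration` with a rand()-driven value, as suggested at the end, turns the even stagger into per-copy random start times.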
  35. 1 point
    Lesson 12: VDBs and volumes. Rocks and cream (endless possibilities). Learned on odforce (you can find everything here), plus a trick from houdinifx.jp/blog/
  36. 1 point
    And with the excellent file from Akira Saito, and some old stuff (Sliced Boy) from some forum (H or from here), I can cut every mesh and make fun stuff - endless possibilities. slicedBoyFromUVs.rar
  37. 1 point
    Progress between lessons. Must learn expressions and more about deleting frames ( `ifs($F<4, 0, '24-29')` ), rest position, ojoj oj, too much fun, too much to learn. But it's the progress that feels the best.
  38. 1 point
    Renderman 23 CPU vs Arnold 6 GPU with a 2-minute time limit. Renderman 23 CPU Arnold 6 GPU The reflection differences come from normals: Arnold calculates its own normals, while Renderman took the normals from the geometry. I tried Karma, but I had issues and crashes with it.
  39. 1 point
    Try using uppercase <UDIM> or before H18 you have to use %(UDIM)d
  40. 1 point
    You can't use VEX as a parameter expression language. All tools with parms that accept VEX are referenced by a Snippet VOP as part of a VOPnet where it gets evaluated; in other words, the text is treated as pure text until compiled by the VOPnet, so it's a very specific case and can't be generalized to all parameters. For Python the usage would be:
    hou.node("../box1").geometry().boundingBox().sizevec()[1]
    OR
    hou.pwd().inputs()[0].geometry().boundingBox().sizevec()[1]
    Also, for the confusing meaning of "replaced by", you can read this: https://www.sidefx.com/forum/topic/57809/#post-258876
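Both Python expressions above end in .boundingBox().sizevec()[1], i.e. the Y extent of the input's bounding box. A pure-Python stand-in (no hou module, made-up sample points) showing what that value is:

```python
# Pure-Python stand-in for geometry().boundingBox().sizevec()[1]:
# the bounding box size vector is (max - min) per axis, and index 1 is Y.
# The sample points below are made up for illustration.

def bbox_sizevec(points):
    mins = [min(p[a] for p in points) for a in range(3)]
    maxs = [max(p[a] for p in points) for a in range(3)]
    return [maxs[a] - mins[a] for a in range(3)]

pts = [(0, 0, 0), (1, 2, 3), (-1, 0.5, 1)]
ysize = bbox_sizevec(pts)[1]  # the value the parameter expression returns
```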
  41. 1 point
    hip file if anyone wants to take a look! planksv9.hipnc
  42. 1 point
  43. 1 point
    Agree with @marty. In case there are multiple assignments per variable (HOUDINI_PATH, PATH, etc.), only the entries of the last one will be registered properly (variable shadowing). If HOUDINI_PATH already has an entry, you can concatenate the paths separated by a semicolon (;) on Windows. Please note that even on a Windows machine forward slashes are required. Also make sure that all paths are valid and there is no whitespace between each path and the separator (;). Arnold, Redshift & Renderman example:

    #
    # Houdini Environment Settings - houdini.env
    #
    # ...
    PATH = "$PATH;C:/ProgramData/Redshift/bin;C:/Users/<USER_NAME>/htoa/htoa-2.1.3_rcca6014_houdini-16.0.705/htoa-2.1.3_rcca6014_houdini-16.0.705/scripts/bin;C:/Program Files/Pixar/RenderManProServer-21.5/bin;&"
    HOUDINI_PATH = "C:/ProgramData/Redshift/Plugins/Houdini/16.0.557;C:/Users/<USER_NAME>/htoa/htoa-2.1.3_rcca6014_houdini-16.0.705/htoa-2.1.3_rcca6014_houdini-16.0.705;&"
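The concatenation rules described above (semicolon separator, no stray whitespace, forward slashes even on Windows, trailing & to keep Houdini's defaults) can be sketched as a small helper. This is just an illustration of the rules, not an official tool:

```python
# Sketch of the houdini.env value-building rules described above:
# entries joined by ";" with no surrounding whitespace, forward slashes
# even on Windows, and a trailing "&" so Houdini appends its defaults.

def build_env_value(paths, keep_defaults=True):
    cleaned = [p.strip().replace("\\", "/") for p in paths if p.strip()]
    if keep_defaults:
        cleaned.append("&")  # "&" expands to the variable's default entries
    return ";".join(cleaned)

value = build_env_value([
    "C:\\ProgramData\\Redshift\\Plugins\\Houdini\\16.0.557",
    " C:/Users/<USER_NAME>/htoa ",  # stray whitespace gets stripped
])
# -> "C:/ProgramData/Redshift/Plugins/Houdini/16.0.557;C:/Users/<USER_NAME>/htoa;&"
```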
  44. 1 point
    @marty @Infinite Rave Here's an attempt at creating convincing white hair. Fun trivia: polar bears are "actually" black - they have black skin and their fur strands are transparent. Multiple passes of light diffusing and refracting as it passes through each strand make the polar bear's fur look white. Inspired by the fact that white hair IRL is actually transparent hair without pigment, I went and set the transparency (opacity tab of the hair shader) to a very low value and also enabled "transparent shadows". Keep in mind that these will increase render times. A lot. The scene is lit with one direct light and an environment light. As soon as you add an HDR map, it will tint the color of the white hair. Ideally one would keep the original env map for the entire scene and use a desaturated version of it (B&W is unrealistic) just for the hair. This could be achieved in Houdini with light exclusions, but ideally one should be able to plug an env map into just the hair material instead of using multiple lights. If this were Mentalray I'd tell you how to do that; alas, it's Mantra, and I don't know how, or if it's possible at all. If Houdini didn't keep me away with its poor viewport and traditional modeling tools, I'd probably be a lot more competent with Mantra too. In the image, to the right is the default hair shader with colors set to white only, and to the left the version with all the other adjustments; the lighting and all other settings are identical. Make sure you don't miss the attached .rar file - I've included the .hdr map so you'll get the exact same results, and if after your own experimentation you get better-looking results and render times, don't forget to post them here. white_hairball.rar
  45. 1 point
    The constraints just represent the relationship between the pieces. When you're working with them in the sop solver they don't typically move at all. The viewport shows them moving in dop level, but the actual relationship geo is not animated. That said, I have that in the setup to move them to world space and then back. You can remove them in between those nodes when you want them to be unglued. Most of this is covered in the bullet masterclass video here, which I suggest you watch: https://vimeo.com/56916407 around 1hr24min.
  46. 1 point
    Here is my attempt at the effect. The particle separation is fairly high, so the results are coarse; feel free to lower the particle separation and re-sim. I have not included any whitewater, you can add that on top once you settle on a final particle separation. I have installed the pump, to the best of my ability, following the instructions in the Jeff Wagner "flip_pump_bow_curl" hip file from the above link. I have enabled OpenCL on the solver to increase calculation speed; if you experience crashing, try turning that off in the solver. I have linked the Division Size of the ship hull volume directly to the Particle Separation of the FLIP tank to keep things balanced, but as you drop the Particle Separation you may want to de-couple these two parameters, as you may not need that much resolution on the hull volume. You could also just Clip off the top half of the boat, unless you expect waves to roll over top. I think eetu is right about the resolution: you really have to lower the particle separation to get high-quality results. ap_large_ship_wake_with_pump_1a.hipnc
  47. 1 point
    Basic:

    // Primitive wrangle.
    int pts[] = primpoints(0, @primnum);
    vector rest = point(0, "P", pts[0]);
    vector prev_pos = rest;
    matrix3 frame = ident();

    for (int i = 0; i < len(pts); i++) {
        vector pos = point(0, "P", pts[i]);
        rotate(frame, 0.1, {0, 0, 1});
        vector new_pos = (pos - rest) * frame + prev_pos;
        rest = pos;
        prev_pos = new_pos;
        setpointattrib(0, "P", pts[i], new_pos);
    }

    Advanced:

    // Primitive wrangle.
    #define TWO_PI 6.2831852

    addpointattrib(0, "N", {0, 0, 0});
    int pts[] = primpoints(0, @primnum);
    int npt = len(pts);

    // Loop variables.
    vector rest = point(0, "P", pts[0]);
    vector prev_pos = rest;
    matrix3 frame = ident();

    for (int i = 0; i < npt; i++) {
        vector pos = point(0, "P", pts[i]);
        vector delta = pos - rest;
        rest = pos;

        // Make normal. Point normals could be used instead.
        vector normal = normalize(cross(cross({0, 1, 0}, delta), delta));
        if (length(normal) == 0) {
            normal = {0, 0, 1};
        }

        // Drive a shape with ramps and multipliers.
        vector axis;
        float ramp, angle;

        // Twist the bend axis.
        axis = normalize(delta);
        ramp = chramp("twist_profile", (float) i / npt);
        angle = fit01(ramp, -TWO_PI, TWO_PI) * ch("twist") / (npt - 1);
        rotate(frame, angle, axis);

        // Bend the curve.
        axis = normalize(cross(normal, delta));
        ramp = chramp("bend_profile", (float) i / npt);
        angle = fit01(ramp, -TWO_PI, TWO_PI) * ch("bend") / (npt - 1);
        rotate(frame, angle, axis);

        // Compute new position and normal.
        vector new_pos = delta * frame + prev_pos;
        prev_pos = new_pos;
        setpointattrib(0, "P", pts[i], new_pos);
        setpointattrib(0, "N", pts[i], normal * frame);
    }

    curl.hipnc
  48. 1 point
    Yep, the new polyextrude uses attributes rather than local variables. On the 'local control' tab you'll find toggles to enable attribute overrides for various properties. To randomise the distance, turn on 'zscale', and make sure you have a randomized @zscale attribute on the primitives feeding the extrude. Attached 2 ways to do this, one with a prim wrangle, the other with an attribute randomize sop. rand_polyextrude.hipnc
  49. 1 point
    You can use the bbox VOP and then average the min and max with an average VOP; this will give you the centroid. However, this calculation would be performed for every point/voxel/prim, whatever the VOP is set to run over. An alternative is to just make a vector parameter with the usual:
    centroid(opinputpath('.',0), 0)
    centroid(opinputpath('.',0), 1)
    centroid(opinputpath('.',0), 2)
    Either one should be fine.
  50. 1 point
    If I understand you correctly you should be able to do just that using a Point SOP. Just set the Position attributes to $MAPU and $MAPV and you should be ready to go. It might be necessary to make the points unique so you don't get weird results. -dennis mesh2uv.hip