

Popular Content

Showing most liked content since 07/15/2018 in all areas

  1. 4 points
    Hi! Here is something I've been working on for a while, on and off. I was searching for something I could use for plants colliding near the camera, and I ended up using the Bullet solver with packed geometry. I'm not sure if it's the right way to go for this kind of thing, but at least I learned a lot about the constraints. Thinking back, I probably should have used a few more substeps, but I hope you can enjoy it anyway! Suggestions for improvements are welcome
  2. 4 points
    Finally finished this example that's been sitting on my backburner. It's a really simple demonstration of how you can use the Pose Tool and Pose Scopes to get invisible rigs and motion paths in Houdini. Has anyone used them in production, by chance? What's been your experience? Caveat: that hip file is definitely NOT an example of good rigging practice, by any means! Industrial Robot Arm by UX3D on Sketchfab (CC BY-NC 4.0) invisible_rigs_motion_paths.hipnc
  3. 4 points
    @NNois Houdini already has that; it's called the /OBJ network, where one can see the scene properly. Just wait till the LOPs context (possibly Lookdev Operators) comes out. I guess that SideFX is trying to make Houdini a complete package which can be used from the start to the end of the pipeline. By bringing in LOPs, they are competing with Katana and also the new Clarisse Builder. But they really need to speed up Mantra and possibly introduce GPU rendering. They are also bringing in the TOPs context (possibly Texture Operators), which could compete with Mari, Substance Painter, etc. TOPs could be procedural texturing like Substance Painter. With Houdini 17, you can forget about using Katana. I hope that by Houdini 19, we can forget about using Nuke as well.
  4. 4 points
    In case it's useful to anyone, here's an asset I made a little while ago and finally got round to documenting. It takes one or more curves and generates three levels of braided curves: the first level is coiled around the input, the second around the first, and the third around the second. Additionally it can make a final level of 'hairs'. It supports animated inputs and will transfer velocity onto the output, but in many cases it's probably better to use a Timeshift to generate the rope on one frame and use a Point Deform or similar. This also avoids texture-jumping problems if using the supplied 'pattern' colour method. There's a fairly comprehensive help card with it, and below is a demo video. If anyone finds bugs or problems, please let me know and I'll try to fix them when I have time... I'd be very interested to see anything anyone makes with it... TB__RopeMaker_1_0.hda
  5. 3 points
    I'll spin up a wish list for 18 when they officially announce the launch date of Houdini 17. There have been some hurt feelings over the timing of the creation of the wish list in the past. Also, until they officially announce it, we do not know which features are actually included in, or cut from, this next version. For the same reasons we do not do a wish list for a .1, .5 or other build; we only had a single 14.0 (no 14.5), and a 9.1 and 9.5. While, yes, it is true there is a general degradation of a wish list the closer to release it is, you don't necessarily want to jump the gun. Also, as always: this is a wish list unaffiliated with Houdini support services, even if there are a bunch of them on here, lol. If you want to make actual Requests for Feature Enhancements (RFEs) or submit bugs, please go through your company's support contact or use the online submission process https://www.sidefx.com/bugs/submit/ The wish list is more for rants and collaboration on ideas. Comments on the list do not have a direct causality to changes in Houdini, but there is a strong argument for indirect causality: the more official requests with common interest get in from companies, the more likely a request will happen (besides the insane requests ;).
  6. 3 points
  7. 3 points
    I wanted to share this script with you guys. As we all know, viewport issues randomly occur at times: geometry doesn't display properly, templated geo sticks to the view and doesn't go away, the camera gets stuck in ortho mode, etc. A common fix for this is to just close the viewport and make a new one. I have written a script you can add to your shelf for your convenience:

    cur = hou.ui.paneTabOfType(hou.paneTabType.SceneViewer, 0)
    vp = cur.curViewport()
    so = vp.settings().displaySet(hou.displaySetType.SceneObject)
    fb = cur.flipbookSettings()
    somode = so.shadedMode()
    cam = hou.node("/obj").createNode("cam")
    vp.saveViewToCamera(cam)
    cam.parm("projection").set(0)
    new = cur.pane().createTab(hou.paneTabType.SceneViewer)
    nvp = new.curViewport()
    ndm = nvp.settings().displaySet(hou.displaySetType.DisplayModel)
    nso = nvp.settings().displaySet(hou.displaySetType.SceneObject)
    nso.setShadedMode(somode)
    nvp.setCamera(cam)
    ndm.setLinkToDisplaySet(hou.displaySetType.SceneObject)
    new.flipbookSettings().copy(fb)
    cur.close()
    cam.destroy()

    This will also save your current camera position, shaded mode, and flipbook settings. Enjoy!
  8. 3 points
    One more :) Gas Match Field and Gas Analysis to create a scalar field holding the length of velocity. Use that as a control field for disturbance and turbulence. That, plus what I did above, gives this: https://drive.google.com/open?id=13dfeWw2tefsNJ-NSSWJo9w3M9wQHsUEZ
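In a volume wrangle the control field described above is essentially f@control = length(v@vel). As a plain-Python sketch of the same per-voxel math (the function name is mine, not from the post):

```python
import math

# Per-voxel "speed" field: the scalar control field is just the length
# of the velocity vector stored in each voxel.
def speed_field(vel_field):
    """vel_field: list of (vx, vy, vz) tuples, one per voxel."""
    return [math.sqrt(vx * vx + vy * vy + vz * vz) for (vx, vy, vz) in vel_field]
```

Feeding that scalar into the disturbance/turbulence control field confines the noise to where the gas is actually moving.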
  9. 3 points
    I got a chance to improve the tornado project a few months ago and here is the result, although it's a fire whirl now:
  10. 3 points
    Hi again! There's a new session starting up this week, and still some spots available. Late signups are fine, I think registration is open for another week or so. Feel free to get in touch if you have any questions about it. Also, I'll be presenting for SideFX at SIGGRAPH this year, so if anyone here will be going, feel free to keep an eye out for the schedule and drop by! :-)
  11. 3 points
    Bumping with the recordings:
    EPC2018 - Innes McKendrick - Working With Change (Hello Games: No Man’s Sky)
    EPC2018 - Anastasia Opara - Proceduralism and Deep Learning (EA: SEED)
    EPC2018 - Oskar Stalberg - Wave Function Collapse in Bad North
    EPC2018 - Twan de Graaf and Pierre Villette - Procedural Content Generation at Ubisoft Paris
  12. 3 points
    Basic smoke solver built within SOP solver, utilising openVDB nodes. Happy exploring & expanding =) P.S. DOP’s smoke solver still solves quicker in many cases though. vdbsmokesolver_v1.hipnc
  13. 2 points
    Hello @AntoineSfx That's Softimage's Closest Location: Closest Smoothed Surface in ICE. Very useful for cage deform. I think I nailed this one by no longer trying to compute the correct direction from the normal returned by xyzdist/primuv. I'm sure there's a way, and I wish I could do it for simplicity, but the math just isn't coming to me... So what I did was to loop over each point, select only the candidate prims from the reference mesh that could be the closest primitive, and "extrude" them along their point normals until they are coplanar with the point. Then I check which prim the point is inside of. If you already know the right prim, maybe the problem can be reduced to finding the ray direction from the primitive's point normals. What I did is on the right: extrude to make it coplanar, xyzdist() on the extruded prim, and apply the returned prim and uvw with primuv() on the original primitive. It's like dividing 3D space into extrusions of the reference mesh and seeing which extrusion the point is inside. The only advantage of this is that while finding the correct prim, I also get the corresponding uvw right away, hence a complete "location". On the other hand, a bunch of SOPs and a Loop block are required. As for the initial shape, it looks like so: I added all this info to the RFE. Hopefully in the future we will just need to type a function in VEX. That would be cool, since what I came up with is inherently slow for heavy geo. Scene file here: https://jmp.sh/bELWJdX Cheers
  14. 2 points
    For the target, PolyExpand produces a straight skeleton, Triangulate2D converts it to convex triangles, and Divide (remove shared edges) plus Resample (subdivision curves) turn them into a smooth outline curve. As you suggested, I used minpos() to get the directions from the circle points towards the target: vector dir = minpos(1, @P) - v@P; and intersect_all() would be for shooting rays from the outer circle to the original input shape: int ray = intersect_all(2, @P, dir, pos, prim, uvw, tol, -1); Hit play to see how it performs on different inputs. dir_to_srf.hiplc
  15. 2 points
    I just wanted to let everyone who might be interested know, that Chaosgroup have released a beta version of Vray for Houdini. More info can be found here: https://www.chaosgroup.com/vray/houdini
  16. 2 points
    If anyone happens to have an issue with the IPR not working when using the houdini.env setup, the following fixed it for me: Instead of : PATH="${PATH};${VFH_PATH}" As suggested, expand everything. So: PATH="${VRAY_APPSDK}\bin;${VFH_ROOT}\vfh_home\bin;$PATH;${VRAY_FOR_HOUDINI_AURA_LOADERS}"
  17. 2 points
    The Vancouver Houdini User Group is happy to host this year's Houdini users meetup mixer at Siggraph 2018. Please come and join us and fellow artists for a beer and nibbles at Craft Beer Market (www.craftbeermarket.ca) on Monday 13th August from 7pm. You don't need a Siggraph pass, but you do need to register to attend via this handy Eventbrite link. We look forward to welcoming you all to Vancouver and Siggraph with a cool beverage, some SideFX merchandise, and the chance to win 200 GridMarkets credits for online render time. The event has been made possible by the support of the following sponsors: SideFX, Zerply.com, DNEG, MPC, and GridMarkets. We look forward to seeing you there. Ed Lee Social Admin Vancouver Houdini User Group
  18. 2 points
  19. 2 points
    I made a tool for creating fully procedural cave systems from curves for use inside UE4 with Houdini Engine. Check it out, let me know what you think. Available here: https://gum.co/hdacaves Enjoy!
  20. 2 points
    Or write the hipname directly into the cache as a detail attribute, e.g. a string detail attribute set to $HIPNAME in a wrangle or Python SOP.
  21. 2 points
    You can fit-range an attribute with setdetailattrib() set to "min" and "max".

    1st point wrangle:
    setdetailattrib(0, 'height_min', @height, "min");
    setdetailattrib(0, 'height_max', @height, "max");

    2nd point wrangle:
    float min = detail(0, "height_min", 0);
    float max = detail(0, "height_max", 0);
    @Cd.g = fit(@height, min, max, 0.0, 1.0);

    fit_range_VEX.hiplc
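The same two-pass pattern as a plain-Python sketch (names are mine), mirroring VEX's clamping fit():

```python
# Pass 1: find min/max (what setdetailattrib with "min"/"max" accumulates).
# Pass 2: remap each value into 0..1 (what fit() does, including the clamp).
def fit(x, omin, omax, nmin, nmax):
    t = (x - omin) / (omax - omin)
    t = min(max(t, 0.0), 1.0)  # VEX fit() clamps outside the old range
    return nmin + t * (nmax - nmin)

heights = [2.0, 5.0, 8.0]                # stand-in for @height per point
hmin, hmax = min(heights), max(heights)  # the two detail attributes
remapped = [fit(h, hmin, hmax, 0.0, 1.0) for h in heights]
```

The two separate wrangles matter because the min/max must be fully accumulated before any point reads them back.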
  22. 2 points
    Here's a quick hip with the stuff to change in red static_gasturb.hiplc
  23. 2 points
    Here's a simple but costly setup. Density is used to modulate the phase function so that it's highly forward scattering (values over 0.9) where the cloud volume is thin, and slightly more diffuse (values closer to 0.7) where the cloud is more dense. The scattering lobe changing its shape as the ray travels inside the cloud was one of the main observations made by Bouthors. My solution is a very simple mimic of the phenomenon, but it already does a lot. It has 64 orders of multiple scattering, which might be more than enough. It also uses a photon map to accelerate the light scattering. No Mie scattering LUT is used. Render time: 3.5 hours. It's not identical to the Hyperion image but certainly has some nice features emerging. Some parts look even better IMO. I've also tone mapped the image with an ACES LUT and tweaked the exposure a bit to preserve the whites. EDIT: Oh, by the way, I haven't checked how it looks when the sun is behind the cloud. There could be some extreme translucency that needs to be handled. Cheers! mantra_cloud_material.hip
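A minimal sketch of the density-to-anisotropy remap described above. The function and parameter names, the linear falloff, and the d_max cutoff are my assumptions; only the rough "over 0.9 when thin, closer to 0.7 when dense" targets come from the post:

```python
# Thin cloud (low density)  -> strongly forward-scattering lobe (g > 0.9).
# Dense cloud (high density) -> more diffuse lobe (g closer to 0.7).
def scatter_anisotropy(density, g_thin=0.95, g_dense=0.7, d_max=1.0):
    t = min(max(density / d_max, 0.0), 1.0)   # normalized, clamped density
    return g_thin + (g_dense - g_thin) * t
```

The returned g would then drive a Henyey-Greenstein-style phase function per sample as the ray marches through the volume.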
  24. 2 points
    To the eye the results may seem the same, but the data tells you that they are not; there is a difference. From a quick investigation, the reason it looks the same is that the explicit constraints take over detecting the collisions, but even with them turned on you can see a different behaviour in the attached scene (the disabled one has a springy motion to it). [If you disable the constraints AND the final collision test, you don't get any collisions (apart from Internal Collisions) anymore, because now all possible detections are bypassed (gas collision detect) or turned off (constraints & final collision test).] It gets way more noticeable if we turn off the explicit constraints (leave the final collision test on):
    (1) Disabling the gas collision detect: you can see that both objects blow up; the cube because it is meant to when hitting the box, but also the sphere, because it comes into contact with the cube and the Internal Collisions affect the sphere's particles.
    (2) Modifying the group membership: you can observe that while the cube blows up when hitting the box, the sphere passes through everything untouched (and it of course affects the cube's particles). This is because all those PBD collisions make use of the __pbd_group; in (1) all the particles are in that group, while (2), as said, excludes the sphere's particles.
    @ParticleSkull: so I guess the problem is that when two grain objects hit each other and you have disabled the gas collision detect, you get unexpected behaviour. grains_avoid.collision_1.03.hiplc
  25. 2 points
    It's called function overloading. For details see http://www.sidefx.com/docs/houdini/vex/lang.html#functions This is a quick basic example where VEX determines which function to use based on the argument types you feed to the function:

    int func(int a; float b) {
        return int(a * b);
    }
    float func(float a; float b) {
        return a * b;
    }
    printf("value is %d", func(2.0, 2.75)); // change the 2.0 to 2 to see the difference
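Python has no VEX-style overloading, but for comparison, functools.singledispatch gives a similar type-driven choice. This is just an illustrative analogue (nothing Houdini-specific), and it dispatches on the first argument's type only, unlike VEX's full-signature overloads:

```python
from functools import singledispatch

@singledispatch
def func(a, b):
    raise TypeError("unsupported type: %s" % type(a).__name__)

@func.register
def _(a: int, b: float):      # chosen when the first argument is an int
    return int(a * b)

@func.register
def _(a: float, b: float):    # chosen when the first argument is a float
    return a * b
```

func(2.0, 2.75) takes the float branch and returns 5.5, while func(2, 2.75) takes the int branch and truncates to 5, the same difference the VEX comment points out.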
  26. 2 points
    @Andrea yes, exactly. I noticed the same thing when disabling the whole thing, but as @ParticleSkull said it feels wrong, which is why I came up with that group method. The reason the box still collides even with the gascollisiondetect disabled is that the Final Collision Test is enabled by default (where I guess the gasintegrator DOP apparently DOES take the collisionignore attribute into consideration).
  27. 2 points
    You could make a small change to the popgrains DOP and modify the group for which it computes collisions; see the file (nodes colored red). Hope that helps. grains_avoid.collision_1.02.hiplc
  28. 2 points
    we do not need any rush! we know that sidefx works really hard. when it's done, it will come out ; ) btw... I CAN'T WAIT FOR IT!!
  29. 2 points
    Still working on a good solution, but here is something else I tried that works a bit better. I added gas vortex confinement but kept the values low (around 1 - 2), and plugged that into a gas repeat solver. That seems to work much better than using confinement and turning up the scale. I also added more substeps on the dopnet. That seems to just create a lot of small swirl. This isn't perfect, but it's getting away from disturbance patterns. https://drive.google.com/file/d/1UNp13Arv7sv7XiWhF82A5OTH_CWhTME1/view?usp=sharing
  30. 2 points
    Download the Course Files & Watch for Free at CG Circuit https://www.cgcircuit.com/tutorial/houdini-for-the-new-artist Hello ODFORCE! I'm very excited to share my first Houdini course, titled Houdini for the New Artist. It is perfect for anyone interested in learning Houdini for the first time. To keep things interesting, we learn about the basics while building "Alfred the Rhino" from scratch. If you're looking for an intro tutorial that gives you a bit of everything, is fun to work with, and straight to the point, this is for you. Be sure to check out the full course, download the course files, and practice along. Thank you for watching!

    Course Outline:
    Intro - 42s
    The Interface - 12m 26s
    Setting up Our Project - 12m 53s
    Utilizing Attributes - 10m 47s
    Caching - 8m 23s
    Applying Materials - 9m 55s
    Adding the Backdrop - 6m 41s
    Basic Shading Parameters - 5m 36s
    Lighting - 9m 30s
    Rendering - 12m 29s
  31. 2 points
    Shipping a hard drive is very traditional, but overnighting a drive with DHL can be a straightforward, time-efficient, and probably cheaper approach (hard drive + DHL, which you can clearly line-item on the bill to the client). Plus you can include all the assets and files. A very 90s and 00s approach to handling data. Just remember that if you are going to render online, you need to upload all the data, process it, and make sure it is received by the client correctly, i.e. with no processing errors. This can be time-consuming and, depending on how much you bill yourself, costly; plus you need to be aware of how flexible the client is going to be if the processing gets messed up. It costs a lot more money if you screw up rendering on a cloud than at home. GridMarkets is a good business; you just need to pay attention to the economics of the situation. I take it remoting into their farm so you can monitor a new render on their machines is out of the question? If you are going to send files to an online cloud, you could just go directly to their farm.
  32. 2 points
    Hi all, I have been doing an R&D project lately on how to generate knitted garments in Houdini. One of my inspirations was a project done by Psyop using Fabric Engine, and the other was done by my friend Burak Demirci. Here are the links to them: http://fabricengine.com/case-studies/psyop-part-2/ https://www.artstation.com/artist/burakdemirci Some people asked me to share my hip file, and I was going to do it sooner, but things were a little busy for me. Here it is; I also put in some sticky notes to explain the process better, hope it helps. This hip file is identical to the one I created this video with, except for the rendering nodes: https://vimeo.com/163676773 I think there are still some things that can be improved and maybe done in a better way. I would love to see people develop this system further. Cheers! Alican Görgeç knitRnD.zip
  33. 2 points
    @Jesper Rahlff Here you go. Sops and Shader versions inside. volume_pc_colour_01.hip
  34. 2 points
    Enable [x] Tangent Attribute (tangentu) in node "resample2". You can store the cross product in @N if you want, but by having @N / @tangentu / @tangentv you have a normal, orthogonal frame for each point, which is a strong foundation for anything downstream IMO. Also see PolyFrame, which generates N, tangentu and tangentv; no need to do the cross products yourself.
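For reference, the cross product that PolyFrame spares you from writing, as a tiny pure-Python sketch (function name is mine):

```python
# N = tangentu x tangentv completes the orthogonal per-point frame.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

tangentu = (1.0, 0.0, 0.0)
tangentv = (0.0, 1.0, 0.0)
normal = cross(tangentu, tangentv)
```

With unit, perpendicular tangents the result is already unit length; otherwise you would normalize it before storing it as @N.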
  35. 2 points
    I'm working on something related to art-directing the swirly motion of gases: an implementation of a custom buoyancy model that lets you art direct the general swirly motion of gases very easily, without using masks, vorticles, temperature sourcing to get more swirly motion in specific zones, etc. It also gets rid of the "mushroom effect" for free with a basic turbulence setup. Here are some example previews, some with normal motion, others with extreme parameter values to stress the pipeline. For the details it's just a simple turbulence plus a bit of disturbance in the vel field, nothing complex; because of this the sims are very fast (for constant sources: average voxel count 1.8 billion, voxel size 0.015, sim time 1h:40min (160 frames); for burst sources: voxel size 0.015, sim time 0h:28min). I'm working on a Vimeo video to explain more about this new buoyancy model. I hope you like it! Cheers, Alejandro constantSource_v004.mp4 constantSource_v002.mp4 burstSource_v004.mp4 constantSource_v001.mp4 burstSource_v002.mp4 burstSource_v003.mp4 burstSource_v001.mp4 constantSource_v003.mp4
  36. 2 points
    Since there's been a lot of talk around the web about graphics APIs this past week with Apple's decision to deprecate OpenGL in MacOS Mojave, I thought I'd take this opportunity to discuss the various graphics APIs and address some misconceptions. I'm doing this as someone who's used all versions of OpenGL from 1.0 to 4.4, and not with my SideFX hat on. So I won't be discussing future plans for Houdini, but instead will be focusing on the APIs themselves.

    OpenGL

    OpenGL has a very long history dating back to the 90s. There have been many versions of it, but the most notable ones are 1.0, 2.1, 3.2, and 4.x. Because of this, it gets a reputation for being old and inefficient, which is somewhat true but not the entire story. Certainly GL1.0 - 2.1 is old and inefficient, and doesn't map well to modern GPUs. But then in the development of 3.0, a major shift occurred that nearly broke the GL ARB (architecture review board) apart. There was a major move to deprecate much of the "legacy" GL functionality and replace it with modern GL features - and out of that kerfuffle the OpenGL core and compatibility profiles emerged. The compatibility profile added these new features alongside the old ones, while the core profile completely removed the old ones. The API in the core profile is what people are referring to when they talk about "Modern GL". Houdini adopted modern GL in v12.0 in the 3D Viewport, and more strict core-profile-only support in v14.0 (the remaining UI and other viewers). Modern GL implies a lot of different things, but the key ones are: geometry data and shader data must be backed by VRAM buffers, shaders are required, and all fixed-function lighting, transformation, and shading is gone. This is good in a lot of ways. Geometry isn't being streamed to the GPU in tiny bits anymore and is instead kept on the GPU, the GL "big black box" state machine is greatly reduced, and there's a lot more flexibility in the display of geometry from shaders.
    You can light, transform, and shade the model however you'd like. For example, all the various shading modes in Houdini, primitive picking, visualizers, and markers are all drawn using the same underlying geometry - only the shader changes. OpenGL on Windows was actually deprecated decades ago. Microsoft's implementation still ships with Windows, but it's an ancient OpenGL 1.1 version that no one should use. Instead, Nvidia, AMD and Intel all install their own OpenGL implementations with their drivers (and this extends to CL as well).

    Bottlenecks

    As GPUs began getting faster, what game developers in particular started running into was a CPU bottleneck, particularly as the number of draw calls increased. OpenGL draw calls are fast (more so than DirectX), but eventually you get to a point where the driver code prepping the draw starts to become significant. More detailed worlds meant not only bigger models and textures, but more of them. So the GPU started to become idle waiting on draws from the CPUs, and that draw load began taking away from useful CPU work, like AI. The first big attempt to address this came in the form of direct state access and bindless textures. All resources in OpenGL are given an ID - an integer which you can use to identify a resource for modifying it and binding it to the pipeline. To use a texture, you bind this ID to a slot, and the shader refers to this slot through a sampler. As more textures were used and switched within a frame, mapping the ID to its data structure became a more significant load on the driver. Bindless does away with the ID and replaces it with a raw pointer. The second was to move more work to the GPU entirely, and GLSL compute shaders (GL4.3) were added, along with indirect draw calls. This allows the GPU to do culling (frustum, distance based, LOD, etc) with an OpenCL-like compute shader and populate some buffers with draw data. The indirect draw calls reference this data, and no data is exchanged between GPU and CPU.
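To make the indirect-draw idea concrete: OpenGL's DrawElementsIndirectCommand is just five GLuints sitting in a buffer object that the GPU reads itself. Python's struct module is used here only to sketch that byte layout (the helper name is mine; the field names come from the GL spec):

```python
import struct

# The five GLuint fields of DrawElementsIndirectCommand, packed as five
# consecutive 32-bit unsigned ints (the native layout on little-endian
# machines), exactly the record glDrawElementsIndirect reads from a buffer.
def pack_indirect_cmd(count, instance_count, first_index, base_vertex, base_instance):
    return struct.pack("<5I", count, instance_count, first_index, base_vertex, base_instance)

cmd = pack_indirect_cmd(36, 1, 0, 0, 0)  # e.g. one 12-triangle cube, one instance
```

A GPU culling pass can write these 20-byte records into a buffer and the CPU never touches the draw parameters, which is exactly the decoupling described above.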
    Finally, developers started batching as much up as possible to reduce the number of draw calls to make up for these limitations. Driver developers kept adding more optimizations to their API implementations, sometimes on a per-application basis. But it became more obvious that for realtime display of heavy scenes, and with VR emerging where a much higher frame rate and resolution is required, current APIs (GL and DX11) were reaching their limit.

    Mantle, Vulkan, and DX12

    AMD recognized some of these bottlenecks, and the bottleneck that the driver itself was posing to GPU rendering, and produced a new graphics API called Mantle. It did away with the notion of a "fat driver" that optimized things for the developer. Instead, it was thin and light - and passed off all the optimization work to the game developer. The theory behind this is that the developer knows exactly what they're trying to do, whereas the driver can only guess. Mantle was eventually passed to Khronos, who develops the OpenGL and CL standards, and from that starting point Vulkan emerged. (DirectX 12 is very similar in theory, so for brevity’s sake I'll lump them together here - but note that there are differences). Vulkan requires that the developer be a lot more up-front and hands-on with everything. From allocating large chunks of VRAM and divvying it up among buffers and textures, saying exactly how a resource will be used at creation time, and describing the rendering pipeline in detail, Vulkan places a lot of responsibility on the developer. Error checking and validation can be entirely removed in shipping products. Even draw calls are completely reworked - no more global state and swapping textures and shaders willy-nilly. Shaders must be wrapped in an object which also contains all its resources for a given draw per framebuffer configuration (blending, AA, framebuffer depths, etc), and command buffers built ahead of time in order to dispatch state changes and draws.
    Setup becomes a lot more complicated, but it is also more efficient to thread (though the dev is also completely responsible for synchronization of everything from object creation and deletion to worker and render threads). Vulkan also requires all shaders be precompiled to a binary format, which is better for detecting shader errors before the app gets out the door, but also makes generating them on the fly more challenging. In short, it's a handful and can be rather overwhelming. Finally, it's worth noting that Vulkan is not intended as a replacement for OpenGL; Khronos has stated as much since its release. Vulkan is designed to handle applications where OpenGL falls short. A very large portion of graphics applications out there don't actually need this level of optimization. My intent here isn't to discourage people from using Vulkan, just to say that it's not always needed, and it is not a magic bullet that solves all your performance problems.

    Apple and OpenGL

    When OSX was released, Apple adopted OpenGL as its graphics API. OpenGL was behind most of its core foundation libraries, and as such they maintained more control over OpenGL than Windows or Linux. Because of this, graphics developers did not install their own OpenGL implementations as they did for Windows or Linux. Apple created the OpenGL frontend, and driver developers created the back end. This was around the time of the release of Windows Vista and its huge number of driver-related graphics crashes, so in retrospect the decision makes a lot of sense, though that situation has been largely fixed in the years since. Initially Apple had support for OpenGL 2.1. This had some of the features of Modern GL, such as shaders and buffers, but it also lacked other features like uniform buffers and geometry shaders. While Windows and Linux users enjoyed OpenGL 3.x and eventually 4.0, Mac developers were stuck with a not-quite-there-yet version of OpenGL.
    Around 2012 they addressed this situation and released their OpenGL 3.2 implementation ...but with a bit of a twist. Nvidia and AMD's OpenGL implementations on Windows and Linux supported the Compatibility profile. When Apple released their GL3.2 implementation it was Core profile only, and that put some developers in a tricky situation - completely purge all deprecated features and adopt GL3.2, or remain with GL2.1. The problem being that some deprecated features were actually still useful in the CAD/DCC universe, such as polygons, wide lines, and stippled lines/faces. So instead of the gradual upgrading devs could do on the other platforms, it became an all-or-nothing affair, and this likely slowed adoption of the GL3.2 profile (pure conjecture on my part). This may have also contributed to the general stability issues with GL3.2 (again, pure conjecture). Performance was another issue. Perhaps because of the division of responsibility between the GPU maker's driver developers and the OpenGL devs at Apple, or perhaps because the driver developers added specific optimizations for their products, OpenGL performance on MacOS was never quite as good as on other platforms. Whatever the reason, it became a bit of a sore point over the years, with a few game developers abandoning the platform altogether. These problems likely prompted them to look at an alternate solution - Metal. Eventually Apple added more GL features up to the core GL4.1 level, and that is where it has sat until their announcement of GL deprecation this week. This is unfortunate for a variety of reasons - versions of OpenGL above 4.1 have quite a few features which address performance for modern GPUs and portability, and it's currently the only cross-platform API, since Apple has not adopted Vulkan (though a third-party MoltenVK library exists that layers Vulkan on Metal, it is currently a subset of Vulkan).
    Enter Metal

    Metal emerged around the time of Mantle, and before Khronos had begun work on Vulkan. It falls somewhere in between OpenGL and Vulkan - more suitable for current GPUs, but without the extremely low-level API. It has compute capability and most of the features that GL does, with some of the philosophy of Vulkan. Its major issues for developers are similar to those of DirectX - it's platform specific, and it has its own shading language. If you're working entirely within the Apple ecosystem, you're probably good to go - convert your GL-ES or GL app, and then continue on. If you're cross-platform, you've got a bit of a dilemma. You can continue on business as usual with OpenGL, fully expecting that it will remain as-is and might be removed at some point in the future, possibly waiting until a GL-on-top-of-Metal API comes along or Apple allows driver developers to install their own OpenGL like Microsoft does. You can implement a Metal interface specific to MacOS, port all your shaders to Metal SL and maintain them both indefinitely (Houdini has about 1200). Or, you can drop the platform entirely. None of those seem like very satisfactory solutions. I can't say the deprecation comes as much of a surprise, with Metal development ongoing and GL development stalling on the Mac. It seems like GL was deprecated years ago and this is just the formal announcement. One thing missing from the announcement was a timeframe for when OpenGL support would end (or if it will end). It does seem like Apple is herding everyone toward Metal, though how long that might take is anyone's guess. And there you have it, the state of graphics APIs in 2018 - from a near convergence of DX11 and GL4 a few short years ago, to a small explosion of APIs. Never a dull moment in the graphics world.
  37. 2 points
    There is a Volume Resize SOP for dense volumes that is really fast. You can feed a polygon bounding box into its second input to keep only the volume inside that box. There is also VDB Clip for sparse VDBs, which does a similar thing. The wrangle method is good if you don't want to change your original volume's bbox.
  38. 2 points
    I don't know of such an expression, but it is an easy task for Python. You can always evaluate Python from expressions:

    pythonexprs("'%0.2f' % " + (hscript_expression))
    pythonexprs("'%0.2f' % " + ch("../grid1/sizey"))

    You can also switch the expression language and use Python directly:

    '%0.2f' % some_float
    '%0.2f' % ch("../grid1/sizey")
    '%0.2f' % (hou.time() / 10.0)

    And maybe 10-15 different solutions exist. It really depends on your actual scene.
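For reference, the '%0.2f' formatting used in those expressions behaves like this in plain Python:

```python
# '%0.2f' rounds to two decimal places and zero-pads to exactly two digits.
formatted = '%0.2f' % 3.14159
padded = '%0.2f' % 2.0
```

So a parameter value of 3.14159 comes back as the string '3.14', and whole numbers still show two decimals.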
  39. 2 points
    Try this... Put down a Measure SOP and set it to measure the perimeter of your curves. After that, add a primitive wrangle and write:

    #include <groom.h>
    adjustPrimLength(0, @primnum, @perimeter, @perimeter * @dist);

    groom.h is an include file containing some functions used in the grooming tools, and one of those functions is:

    void adjustPrimLength(const int geo, prim; const float currentlength, targetlength)
  40. 1 point
    The way I used it was in a Primitive Wrangle:

if (@Cd.r > 0.1) {
    s@constraint_name = "Glue";
    @strength = 1000;
} else {
    s@constraint_name = "Hard";
}

Obviously this was using red and green color to set it up initially. There might be several ways to achieve it, but for what I was testing, it worked.
  41. 1 point
  42. 1 point
    On the Reseeding tab of the flipsolver1 node set Surface Oversampling = 48 and Over Sampling Bandwidth = 2.
  43. 1 point
    Hey All, I've been trying to work out a simple pipeline to generate a crowd from impostors for use in Unreal 4, using models from Fuse with animations from Mixamo. Creating characters in Fuse is extremely fast and simple, and the one-click option to send a character to Mixamo to rig it and load in animations opens up a lot of potential for easy crowd generation. However, with a lot of projects shifting towards VR/AR these days we need more and more efficient crowds. The impostor workflow offers a great solution for that, and the GameDev Impostor Camera Rig + GameDev Impostor Texture ROP really make it nice and easy to generate the output - but I'm running into some issues and could use some outside opinions. For reference, I'm following Mike Lyndon's documentation here: https://www.sidefx.com/tutorials/generating-impostor-textures/

So far, the following steps are working out pretty well:
1) Create a character in Fuse
2) Rig and animate the Fuse character with Mixamo
3) Import the FBX into Houdini
4) Extract the FBX geo to a new Geo node
5) Create a new Mat inside the new Geo node, remap the existing materials
6) Create a GameDev Impostor Camera Rig at obj level
7) Create a ROP network, create a GameDev Impostor Texture inside it
8) Point the camera rig at the Impostor Texture, and the Impostor Texture at the camera rig
9) Set the Impostor Texture to 'Animation' mode and render out to $HIP/TEST/${OS}_${WEDGE}.$F4.png (or an appropriate directory/filetype; this is for testing purposes)

The problem I run into is at render time: the ROP is intended (afaik) to render out the frames of animation, rotate the camera, and then render out the frames again from the new angle. It does exactly this, but overwrites the first frames of the animation each time the camera rotates. I thought the _${WEDGE} portion of the file name would handle this, but it doesn't seem to be appended to my filenames.
Right now I'm getting "rop_impostor_texture1.0006.png", where 'rop_impostor_texture1' is the name of my impostor texture and '0006' is the frame. Any thoughts as to why this might be happening, or possible solutions, would be appreciated. Please let me know if any specific screenshots would help (i.e. the ROP, camera rig, etc.).

UPDATE 001) **NOTE: This only applies to non-production build 1.20; the naming is set up properly in production build 1.12** Solved the naming/output issue - it turns out the GameDev_Impostor_Texture ROP was looking for the wrong output picture parm. This is the field that needs to be re-directed for the output to work as expected. Also worth noting: if you assemble the sprite sheet expecting the Unreal shader template to work, be sure to have the animation frames in the Y axis and the camera rotation in the X axis.

The next major hurdle for me is to try to reduce the camera rotation range to 180 degrees, since I don't need the back views of my crowd characters, and to limit the animation frames... which I'm more concerned about at the moment. I need to retarget a 64-frame animation down to ~16 frames or less. The fewer I can get it down to, the higher resolution I can let the individual sprites be. More updates to come.
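As an aside, when the `${OS}_${WEDGE}.$F4.png` template expands correctly it should yield one distinct file per wedge per frame, which is what prevents the overwriting described above. A rough, purely illustrative Python sketch of that expansion (Houdini's own variable expansion is what actually does this; the node name and indices below are examples):

```python
def impostor_frame_name(node_name, wedge, frame):
    """Expand an ${OS}_${WEDGE}.$F4.png style template by hand:
    node name, underscore, wedge index, dot, zero-padded 4-digit frame."""
    return '%s_%d.%04d.png' % (node_name, wedge, frame)

print(impostor_frame_name('rop_impostor_texture1', 3, 6))
# -> rop_impostor_texture1_3.0006.png
```

Without the wedge index in the name, every camera angle writes to the same per-frame path, which matches the overwriting behaviour reported above.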
  44. 1 point
    Great topic and solution! Basically, the pbd_group you set to zero is the group that is used inside the POP Grains for the collisions? I've noticed that in Alvaro's setup, if I disable the "Gas Collision Detect" the box still collides but the sphere doesn't. I'm not sure if that happens because they are obeying popcollisionignore or if something else is going on.
  45. 1 point
    That's when you use voxel filtering: on the OBJ level, under Render/Shading, set Volume Filter to Gaussian and Volume Filter Width to 1.5.
  46. 1 point
    Nope, what you are doing with @shop_materialpath is fundamental basics that have not changed in a long time. Albeit we don't use the SHOP context as much anymore - now we use materials - but they won't just shut it off at some point when they make the switch. Manipulating this path is the key to material assignments. It's always an absolute path, so if you store your materials in an object-level context like /OBJ/MATERIALS (or always in the material context, which I don't suggest), as you did, it becomes really easy to reassign the materials with basic string edits, and especially with Python. You could make a string attribute like "material" as you were doing before, or "name" as is common for the DOP workflow (though consult with your FX team first), and then use that to help you reassign materials later if you need to. Having two string-based attributes is really cheap, as they are saved in a dictionary, unlike unique groups, which use integers. Since it's a dictionary of values, you can just list the unique instances of those materials and create a respective HDA with Python based on where that material should be stored. So:

/OBJ/MATERIALS/Sequence/...
/OBJ/MATERIALS/Shot/...
/OBJ/MATERIALS/Character/...
/OBJ/MATERIALS/Common/...

You can also use those names to create the correct HDA (like a Principled Shader) and apply the correct preset to those HDAs, however you stored them. There are a lot of possibilities here, so whatever works best for your system.
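To illustrate the string-edit idea, here's a hedged Python sketch. The paths and the /OBJ/MATERIALS/Shot root are hypothetical, and in a live session you would read and write shop_materialpath per primitive through the hou module rather than on plain lists:

```python
def remap_material_paths(paths, new_root='/obj/MATERIALS/Shot'):
    """Re-root absolute material paths and report the unique materials.

    paths: iterable of shop_materialpath-style strings.
    Returns (remapped paths, sorted unique material names).
    """
    # Keep only the trailing material name and prepend the new root.
    remapped = [new_root + '/' + p.rsplit('/', 1)[-1] for p in paths]
    # The unique names are what you'd iterate over to build one HDA each.
    unique = sorted({p.rsplit('/', 1)[-1] for p in paths})
    return remapped, unique

paths = ['/shop/brick', '/shop/brick', '/shop/glass']
print(remap_material_paths(paths))
```

The same pattern extends naturally to routing each material into Sequence/Shot/Character/Common subfolders based on whatever naming convention your system uses.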
  47. 1 point
    Hello guys, I'm quite new here and recently joined the odforce forum. I don't know if there is any other category where I can get feedback, hence posting here! Just when I was searching for cake recipes, I fell for the "Strawberry Raindrop Cake" and decided I should make one for myself! Link to the reference video: at the 0:09 timestamp. Well, a "Raspberry Raindrop Cake" in Houdini ;-) Here is my progress so far; any feedback is appreciated! Thank you!
  48. 1 point
    "The Tree" Another R&D image from the above VR project.

The idea for the VR experience was triggered by a TV show on how trees communicate with each other in a forest through their roots, through the air and with the help of fungi in the soil, how they actually "feed" their young and sometimes their elderly brethren, how they warn each other of bugs and other adversaries (for instance, acacia trees warn each other of giraffes and then produce stuff giraffes don't like in their leaves...) and how they are actually able to do things like produce substances that attract animals that feed on the bugs that irritate them. They even seem to "scream" when they are thirsty... (I strongly recommend this (German) book: https://www.amazon.de/Das-geheime-Leben-Bäume-kommunizieren/dp/3453280679/ref=sr_1_1?ie=UTF8&qid=1529064057&sr=8-1&keywords=wie+bäume+kommunizieren ) It's really unbelievable how little we know about these beings.

So we were looking to create a forest in an abstract style (pseudo-real game-engine stuff somehow doesn't really cut it IMO) that was reminiscent of something like a three-dimensional painting through which you could walk. In the centre of the room there was a real tree trunk that you were able to touch. This trunk was also scanned in and formed the basis of the central tree in the VR forest. Originally the idea was that you would touch the tree (hands were tracked with a Leap Motion controller) and this would "load up" the touched area; the tree would start to become transparent and alive and you would be able to look inside and see the veins that transport all that information and distribute the minerals, sugar and water the plant needs. From there the energy and information would flow out to the other trees in the forest, "activate" them too and show how the "Wood Wide Web" connected everything.
Also, your hands touching the tree would get loaded up as well and you would be able to send that energy through the air (like the pheromones the trees use) and "activate" the trees it touched. For this, I created trees and roots etc. in a style like the above picture where all the "strokes" were lines. This worked really great as an NPR style since the strokes were there in space and not just painted on top of some 3D geometry.

Since Unity does not really import lines, Sascha from Invisible Room created a JSON exporter for Houdini and a JSON importer for Unity to get the lines and their attributes across. In Unity, he then created the polyline geometry on the fly by extrusion, using the Houdini-generated attributes for colour, thickness etc. To keep the point count down, I developed an optimiser in Houdini that would reduce the geometry as much as possible, remove very short lines etc. In Unity, one important thing was to find a way to antialias the lines, which initially flickered like crazy - Sascha did a great job there and the image became really calm and stable. I also created plants, hands, rocks etc. in a fitting style. The team at Invisible Room took over from there and did the Unity part. The final result was shown with a Vive Pro with an attached Leap Motion controller, fed by a backpack computer.

I was rather averse to VR before this project, but I now think that it actually is possible to create very calm, beautiful and intimate experiences with it that have the power to really touch people on a personal level. Interesting times :-) Cheers, Tom
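A minimal sketch of what such a polyline-to-JSON handoff might look like. The schema here is invented purely for illustration - the actual exporter Sascha wrote will differ, and inside Houdini you would pull the points and attributes through the hou module:

```python
import json

def lines_to_json(curves):
    """Serialize polylines with per-point color/width attributes to JSON.

    curves: list of dicts with 'points' [(x, y, z), ...],
            'colors' [(r, g, b), ...] and 'widths' [w, ...].
    A game-engine importer can rebuild extruded strokes from this.
    """
    payload = {
        'curves': [
            {
                'points': [list(p) for p in c['points']],
                'colors': [list(col) for col in c['colors']],
                'widths': c['widths'],
            }
            for c in curves
        ]
    }
    return json.dumps(payload)

curve = {'points': [(0, 0, 0), (0, 1, 0)],
         'colors': [(1, 1, 1), (0.5, 0.5, 0.5)],
         'widths': [0.1, 0.05]}
print(lines_to_json([curve]))
```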
  49. 1 point
    The general way is to use Python as the menu script:

from itertools import chain

node = hou.pwd()
geo = node.geometry()
attribs = [a.name() for a in geo.pointAttribs()]
return list(chain(*zip(attribs, attribs)))

attributes_menu.hipnc

It is possible to avoid this in many cases. When you promote a parameter it will usually pick up the menu script (if it exists) from the node. It is not necessary to have such a menu on the target node, as you may promote the parameter from a different node (example).
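The `chain`/`zip` trick simply duplicates each attribute name so it serves as both the menu token and the menu label, which is the flat token/label pair list that menu scripts expect. In isolation (with a made-up attribute list in place of `geo.pointAttribs()`):

```python
from itertools import chain

# Stand-in for the names returned by geo.pointAttribs().
attribs = ['P', 'Cd', 'uv']

# zip pairs each name with itself; chain flattens the pairs
# into the token/label sequence a menu script must return.
menu = list(chain(*zip(attribs, attribs)))
print(menu)  # -> ['P', 'P', 'Cd', 'Cd', 'uv', 'uv']
```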
  50. 1 point
    Hi Atom, Having been in the industry for the past 20+ years, I can say that those figures are accurate. And I would definitely NOT pay that much tuition; it's not worth it. The only downside of learning alone is that your inner voice will always tell you that this is hard. Just don't listen to it and keep on working - you'll get there.