Everything posted by malexander

  1. about flip_book

    If you mean into 1 movie, no.
  2. need advice on PC components upgrade for Houdini

    Have you noticed that the viewport is sluggish with the scenes you're using? If not, you can limp along with the 960 for a while until you notice it becoming a bottleneck. If you're going AMD, the Threadripper 2950X looks like a great value. If that's a bit rich for you, a Ryzen will do just fine. I'd recommend 32GB for a new system.
  3. Houdini 17 Wishlist

    Nvidia's OptiX library is probably the most accessible way to get at those new RT cores from a non-GPU renderer. CUDA targets the existing shader cores, though I suspect there's probably a CUDA lib which handles dispatching to the RT cores (there is one such lib for dispatching to the Tensor cores).
  4. Ah, the days when a $5000 Quadro 6000 seemed like a lot...
  5. New Threadripper 2 - Core Speed vs Core Count

    I'm not saying the 2990WX is bad, just that you should temper your expectations of a 32x speedup in rendering.
  6. New Threadripper 2 - Core Speed vs Core Count

    The 24/32 core parts are odd beasts. They're made of four 8-core modules (the 24-core part has 2 cores disabled per module). Unlike the server Epyc processors, only 2 of those modules have access to main memory (each has access to half of it via its dual memory controllers), and the other two modules must hop through one of the memory-attached modules to get at main memory. So they'll scale well for low-bandwidth, high-compute workloads (rendering), but start to suffer in cases where memory bandwidth is important (massive sims). The 16-core part has 2 modules, and each has access to half the memory.

    AMD's designed the scheduling such that the modules connected to memory are populated with threads first, then the memory-isolated modules. I'm curious how that works for SMT (fill 16 threads on the mem modules, then 16 threads on the other modules, then populate the 33rd+ thread on the already-loaded cores; or load up the mem modules to 32 threads first). I'd be more tempted to go for the 16-core version (2950X) myself. Thread efficiency starts to drop off at high core counts as well, so you really don't want to be losing even more performance in your top 16 cores. The memory bandwidth issue would also make me think twice about launching multiple processes using a lower thread count each, too. Pretty in-depth analysis here: https://www.anandtech.com/show/13124/the-amd-threadripper-2990wx-and-2950x-review A small sketch for peeking at this topology from software follows below.
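    If you're curious how this asymmetric layout shows up at the OS level, here's a minimal C sketch using libnuma on Linux (my assumption; it needs the libnuma dev package and a -lnuma link, and only reports whatever machine it runs on). It prints each NUMA node's local memory and the CPUs mapped to it; on a 2990WX the expectation is that two of the four nodes report little or no local memory.

    /* numa_peek.c - print NUMA nodes, their local memory, and their CPUs.
     * Build (Linux, libnuma assumed): gcc numa_peek.c -o numa_peek -lnuma */
    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
        if (numa_available() < 0) {
            printf("NUMA is not available on this system.\n");
            return 1;
        }

        int nodes = numa_num_configured_nodes();
        int cpus  = numa_num_configured_cpus();

        for (int node = 0; node < nodes; node++) {
            long long free_mem = 0;
            long long total = numa_node_size64(node, &free_mem);
            printf("Node %d: %lld MB local (%lld MB free), CPUs:",
                   node, total >> 20, free_mem >> 20);
            for (int cpu = 0; cpu < cpus; cpu++) {
                if (numa_node_of_cpu(cpu) == node)
                    printf(" %d", cpu);
            }
            printf("\n");
        }
        return 0;
    }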
  7. Multithreading

    Probably, since most modern CPUs use a higher boost frequency when only 1-2 threads are running. That being said, even the viewport display has highly multithreaded parts in it (generating normals, transferring data, convexing, etc).
  8. Houdini 17 Wishlist

    With the amount of ideas presented in this thread, I think wishlists for Houdini 18, 19, 20 and 21 have already been started!
  9. What driver is the Help > About Houdini, Show Details window reporting? Is the Vendor Nvidia?
  10. Show curve thickness in viewport

    This is what the Hairgen object enables in order to get the ribbon effect for curves. You can also change "Display As:" to "SubD Surface/Curves" to get smooth curves as well.
  11. Houdini 17 Wishlist

    That sounds oddly specific... And only 17 point one for a total SOP rewrite? 8-o
  12. Creating custom palette from image in COPs

    The Lookup COP is probably a good starting point. If you have a colorful noise COP input and "wander" your lookup samples around the noise image (or slowly modify your noise) you can get a source of interesting morphing random colors.
  13. New eGPU for Macs

    Apparently the GPU isn't upgradeable, which sort of defeats the purpose of an external GPU enclosure, IMO. Sort of par for the course for Apple lately, though. Still waiting on that upcoming upgradeable Mac Pro.
  14. MOPs: Motion Graphics Operators for Houdini

    We can't do both?
  15. color prims by attribute

    The color SOP with the group set to "@P=0" and class set to Primitive is probably the easiest, though you could use a primitive wrangle if you wanted to get a bit more fancy with the condition.
  16. I would try adding a CPU temp monitoring app (there might be one in the motherboard software package) to see if your temps are getting high (>80C under heavy load). If it is high before it crashes (80-90C), try reseating the heatsink. With thermal paste you only want a tiny smear - you should still be able to see the copper beneath, looking slightly greyish. Then, if the temp is okay, try testing your RAM. You might have got a bad stick. You can do this by pulling some RAM sticks and retesting to narrow it down to the bad stick, or use memtest86 (https://www.memtest86.com/). Also, you can try changing a few things in your EFI config (formerly known as the BIOS):
      • Disable memory XMP profiles
      • Disable any CPU boost option, or overclocking
    Good luck!
  17. Building a new station

    The days of bad AMD graphics drivers are over. They've been just as good as Nvidia drivers for several years now. Hard to shake that bad rep though.
  18. Why no beer tutorial?

    Probably because the case study ends up derailing the actual work.
  19. OpenGL, Vulkan, and Metal

    Since there's been a lot of talk around the web about graphics APIs this past week with Apple's decision to deprecate OpenGL in macOS Mojave, I thought I'd take this opportunity to discuss the various graphics APIs and address some misconceptions. I'm doing this as someone who's used all versions of OpenGL from 1.0 to 4.4, and not with my SideFX hat on. So I won't be discussing future plans for Houdini, but instead will be focusing on the APIs themselves.

    OpenGL

    OpenGL has a very long history dating back to the 90s. There have been many versions of it, but the most notable ones are 1.0, 2.1, 3.2, and 4.x. Because of this, it gets a reputation for being old and inefficient, which is somewhat true but not the entire story. Certainly GL1.0 - 2.1 is old and inefficient, and doesn't map well to modern GPUs. But then in the development of 3.0, a major shift occurred that nearly broke the GL ARB (architecture review board) apart. There was a major move to deprecate much of the "legacy" GL functionality and replace it with modern GL features - and out of that kerfuffle the OpenGL core and compatibility profiles emerged. The compatibility profile added the new features alongside the old ones, while the core profile removed the legacy features entirely. The API in the core profile is what people are referring to when they talk about "Modern GL". Houdini adopted modern GL in v12.0 in the 3D Viewport, and stricter core-profile-only support in v14.0 (the remaining UI and other viewers).

    Modern GL implies a lot of different things, but the key ones are: geometry data and shader data must be backed by VRAM buffers, shaders are required, and all fixed-function lighting, transformation, and shading is gone. This is good in a lot of ways. Geometry isn't being streamed to the GPU in tiny bits anymore but is instead kept on the GPU, the GL "big black box" state machine is greatly reduced, and there's a lot more flexibility in the display of geometry from shaders. You can light, transform, and shade the model however you'd like. For example, all the various shading modes in Houdini, primitive picking, visualizers, and markers are all drawn using the same underlying geometry - only the shader changes. (A rough sketch of this buffer-and-shader pattern follows below.)
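    To make that concrete, here's a minimal C sketch of the pattern: upload geometry to a GPU buffer once, compile a shader program once, and reuse both every frame. It assumes a GL 3.3+ core context and a function loader (glad here) have already been set up elsewhere; everything outside the GL calls themselves is just illustrative.

    /* modern_gl_sketch.c - illustrative only. Assumes a current GL 3.3+ core
     * context and an initialized function loader (e.g. glad). */
    #include <glad/glad.h>

    static const char *vs_src =
        "#version 330 core\n"
        "layout(location = 0) in vec3 P;\n"
        "void main() { gl_Position = vec4(P, 1.0); }\n";
    static const char *fs_src =
        "#version 330 core\n"
        "out vec4 color;\n"
        "void main() { color = vec4(1.0); }\n";

    /* Done once: the geometry lives in a VRAM buffer, not streamed per frame. */
    GLuint upload_geometry(const float *pos, int npoints, GLuint *vao_out)
    {
        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, npoints * 3 * sizeof(float), pos,
                     GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
        *vao_out = vao;
        return vbo;
    }

    /* Done once: shaders are mandatory in the core profile. */
    GLuint build_program(void)
    {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &vs_src, NULL);
        glCompileShader(vs);
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fs_src, NULL);
        glCompileShader(fs);
        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        return prog;
    }

    /* Per frame: the same geometry can be drawn with any program -
     * shaded, wireframe, picking, markers - only the shader changes. */
    void draw(GLuint vao, GLuint prog, int npoints)
    {
        glUseProgram(prog);
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, npoints);
    }

    The specific calls don't matter much; the point is that everything a draw needs already lives on the GPU before the frame starts.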
    OpenGL on Windows was actually deprecated decades ago. Microsoft's implementation still ships with Windows, but it's an ancient OpenGL 1.1 version that no one should use. Instead, Nvidia, AMD and Intel all install their own OpenGL implementations with their drivers (and this extends to OpenCL as well).

    Bottlenecks

    As GPUs got faster, what game developers in particular started running into was a CPU bottleneck, particularly as the number of draw calls increased. OpenGL draw calls are fast (more so than DirectX), but eventually you get to a point where the driver code prepping each draw becomes significant. More detailed worlds meant not only bigger models and textures, but more of them. So the GPU started to become idle waiting on draws from the CPU, and that draw load began taking away from useful CPU work, like AI.

    The first big attempt to address this came in the form of direct state access and bindless textures. All resources in OpenGL are given an ID - an integer which you can use to identify a resource for modifying it and binding it to the pipeline. To use a texture, you bind this ID to a slot, and the shader refers to this slot through a sampler. As more textures were used and switched within a frame, mapping the ID to its data structure became a more significant load on the driver. Bindless does away with the ID and replaces it with a raw pointer.

    The second big attempt was to move more work to the GPU entirely: GLSL compute shaders (GL4.3) were added, along with indirect draw calls. This allows the GPU to do culling (frustum, distance-based, LOD, etc) with an OpenCL-like compute shader and populate some buffers with draw data. The indirect draw calls reference this data, and no data is exchanged between GPU and CPU. (A small sketch of this GPU-driven draw pattern follows below.)
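    Here's a hedged C sketch of that idea: a culling compute shader (not shown) fills a GL_DRAW_INDIRECT_BUFFER with draw commands on the GPU, and a single multi-draw call consumes them without any readback. It assumes a GL 4.3+ context and loader are already in place, and a VAO with vertex/index buffers already bound; the program and buffer names are just illustrative.

    /* indirect_draw_sketch.c - illustrative only. Assumes a current GL 4.3+
     * context, an initialized loader (e.g. glad), and a VAO with vertex and
     * index buffers already bound. */
    #include <glad/glad.h>

    /* Layout of one indirect draw command, as defined by the GL spec. */
    typedef struct {
        GLuint count;         /* number of indices for this draw            */
        GLuint instanceCount; /* the culling shader sets this to 0 to skip  */
        GLuint firstIndex;
        GLuint baseVertex;
        GLuint baseInstance;
    } DrawElementsIndirectCommand;

    GLuint create_indirect_buffer(GLsizei max_draws)
    {
        GLuint buf;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, buf);
        glBufferData(GL_DRAW_INDIRECT_BUFFER,
                     max_draws * sizeof(DrawElementsIndirectCommand),
                     NULL, GL_DYNAMIC_COPY);
        return buf;
    }

    /* The GPU culls and issues its own draws; the CPU never sees the results. */
    void cull_and_draw(GLuint cull_program, GLuint indirect_buf, GLsizei max_draws)
    {
        /* 1. Compute shader writes one command per object into the indirect
         *    buffer, bound here as an SSBO for writing (assumes the shader
         *    declares local_size_x = 64). */
        glUseProgram(cull_program);
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, indirect_buf);
        glDispatchCompute((max_draws + 63) / 64, 1, 1);

        /* 2. Make the writes visible to the indirect command fetch. */
        glMemoryBarrier(GL_COMMAND_BARRIER_BIT);

        /* 3. One call draws everything the compute shader left enabled. */
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buf);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    (const void *)0, max_draws, 0);
    }

    Setting instanceCount to 0 in a command is how the culling shader "deletes" a draw without the CPU ever knowing about it.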
    Finally, developers started batching up as much as possible to reduce the number of draw calls and make up for these limitations. Driver developers kept adding more optimizations to their API implementations, sometimes on a per-application basis. But it became increasingly obvious that for realtime display of heavy scenes, and with VR emerging where a much higher frame rate and resolution are required, the current APIs (GL and DX11) were reaching their limit.

    Mantle, Vulkan, and DX12

    AMD recognized these bottlenecks, and the bottleneck that the driver itself was posing to GPU rendering, and produced a new graphics API called Mantle. It did away with the notion of a "fat driver" that optimized things for the developer. Instead, it was thin and light - and passed off all the optimization work to the game developer. The theory behind this is that the developer knows exactly what they're trying to do, whereas the driver can only guess. Mantle was eventually passed to Khronos, which develops the OpenGL and OpenCL standards, and from that starting point Vulkan emerged. (DirectX 12 is very similar in theory, so for brevity's sake I'll lump them together here - but note that there are differences.)

    Vulkan requires that the developer be a lot more up-front and hands-on with everything. From allocating large chunks of VRAM and divvying them up among buffers and textures, to declaring exactly how a resource will be used at creation time, to describing the rendering pipeline in detail, Vulkan places a lot of responsibility on the developer. Error checking and validation can be entirely removed in shipping products. Even draw calls are completely reworked - no more global state and swapping textures and shaders willy-nilly. Shaders must be wrapped in an object which also contains all their resources for a given draw per framebuffer configuration (blending, AA, framebuffer depths, etc), and command buffers must be built ahead of time in order to dispatch state changes and draws. Setup becomes a lot more complicated, but it is also more efficient to thread (though the dev is completely responsible for synchronization of everything, from object creation and deletion to worker and render threads). Vulkan also requires that all shaders be precompiled to a binary format, which is better for detecting shader errors before the app gets out the door, but also makes generating them on the fly more challenging. In short, it's a handful and can be rather overwhelming.

    Finally, it's worth noting that Vulkan is not intended as a replacement for OpenGL; Khronos has stated that since its release. Vulkan is designed to handle applications where OpenGL falls short. A very large portion of graphics applications out there don't actually need this level of optimization. My intent here isn't to discourage people from using Vulkan, just to say that it's not always needed, and it is not a magic bullet that solves all your performance problems.

    Apple and OpenGL

    When OSX was released, Apple adopted OpenGL as its graphics API. OpenGL was behind most of its core foundation libraries, and as such Apple maintained more control over OpenGL than is the case on Windows or Linux. Because of this, GPU vendors did not install their own OpenGL implementations as they do for Windows or Linux. Apple created the OpenGL front end, and the driver developers created the back end. This was around the time of the release of Windows Vista and its huge number of driver-related graphics crashes, so in retrospect the decision makes a lot of sense, though that situation has been largely fixed in the years since.

    Initially Apple had support for OpenGL 2.1. This had some of the features of Modern GL, such as shaders and buffers, but it lacked others like uniform buffers and geometry shaders. While Windows and Linux users enjoyed OpenGL 3.x and eventually 4.0, Mac developers were stuck with a not-quite-there-yet version of OpenGL. Around 2012 Apple addressed this situation and released its OpenGL 3.2 implementation... but with a bit of a twist. Nvidia and AMD's OpenGL implementations on Windows and Linux supported the compatibility profile. When Apple released its GL3.2 implementation it was core profile only, and that put some developers in a tricky situation - completely purge all deprecated features and adopt GL3.2, or remain with GL2.1. The problem was that some deprecated features were actually still useful in the CAD/DCC universe, such as polygons, wide lines, and stippled lines/faces. So instead of the gradual upgrading devs could do on the other platforms, it became an all-or-nothing affair, and this likely slowed adoption of the GL3.2 profile (pure conjecture on my part). This may have also contributed to the general stability issues with GL3.2 (again, pure conjecture).

    Performance was another issue. Perhaps because of the division of responsibility between the GPU makers' driver developers and the OpenGL devs at Apple, or perhaps because on other platforms the driver developers could add specific optimizations for their own products, OpenGL performance on macOS was never quite as good as on other platforms. Whatever the reason, it became a bit of a sore point over the years, with a few game developers abandoning the platform altogether. These problems likely prompted Apple to look at an alternate solution - Metal.

    Eventually Apple added more GL features up to the core GL4.1 level, and that is where it has sat until the announcement of GL deprecation this week. This is unfortunate for a variety of reasons - versions of OpenGL above 4.1 have quite a few features which address performance and portability for modern GPUs, and OpenGL is currently the only cross-platform API, since Apple has not adopted Vulkan (a third-party MoltenVK library exists that layers Vulkan on top of Metal, but it currently implements a subset of Vulkan).

    Enter Metal

    Metal emerged around the time of Mantle, and before Khronos had begun work on Vulkan. It falls somewhere in between OpenGL and Vulkan - more suitable for current GPUs, but without the extremely low-level API. It has compute capability and most of the features that GL does, with some of the philosophy of Vulkan. Its major issues for developers are similar to those of DirectX - it's platform-specific, and it has its own shading language. If you're working entirely within the Apple ecosystem, you're probably good to go - convert your GL-ES or GL app, and then continue on. If you're cross-platform, you've got a bit of a dilemma.
    You can continue on, business as usual, with OpenGL, fully expecting that it will remain as-is and might be removed at some point in the future, possibly waiting until a GL-on-top-of-Metal API comes along or Apple allows driver developers to install their own OpenGL like Microsoft does. You can implement a Metal interface specific to macOS, port all your shaders to the Metal Shading Language, and maintain both indefinitely (Houdini has about 1200 shaders). Or you can drop the platform entirely. None of those seem like very satisfactory solutions.

    I can't say the deprecation comes as much of a surprise, with Metal development ongoing and GL development stalling on the Mac. It seems like GL was deprecated years ago and this is just the formal announcement. One thing missing from the announcement was a timeframe for when OpenGL support would end (or if it will end). It does seem like Apple is herding everyone toward Metal, though how long that might take is anyone's guess.

    And there you have it, the state of graphics APIs in 2018 - from a near convergence of DX11 and GL4 a few short years ago, to a small explosion of APIs. Never a dull moment in the graphics world.
  20. Nvidia driver 396.24

    hgpuinfo -g will also show you the graphics info Houdini normally prints to the Help > About Houdini details dialog.
  21. what is multithreaded?

    Generally speaking, things which are threaded:
      • Anything written in VEX (wrangles, shaders)
      • Anything written in OpenCL
      • Mantra
      • Many sim solvers, especially for volumes and particles
      • COPs
    Things that aren't:
      • GL rendering, UI
      • Expression and Python evaluation
    Many nodes are threaded, but this varies on a node-by-node basis.
  22. blocky edges in viewport

    Because the color is assigned at the vertices of the quads and interpolated within the quad, you're essentially getting pixelation, as if you'd done that operation on a 100x100 image and zoomed way in. You can improve the interpolation by increasing the density of the quads.
  23. Rant about parameter expression syntax

    The backticks remove the ambiguity of whether the string field represents a true string or an expression. There's no ambiguity in float and int fields because all the characters must be numeric (or numeric-related). If you're not a fan of the backticks, you can key the string parameter, then toggle to expression mode by LMB-clicking on the parm label, then enter the expression. Keying and switching to expression mode removes that ambiguity.
  24. Create Heightfield from Input Geometry

    You can also use the Heightfield File SOP to load heightfield images directly, if that's the format your source is in.
  25. Which Nvidia driver are people running?

    I've been using 381.22 for a very long time with no issues (Quadro driver).