Everything posted by malexander

  1. New eGPU for Macs

    Apparently the GPU isn't upgradeable, which sort of defeats the purpose of an external GPU enclosure, IMO. Sort of par for the course for Apple lately, though. Still waiting on that upcoming upgradeable Mac Pro.
  2. MOPs: Motion Graphics Operators for Houdini

    We can't do both?
  3. color prims by attribute

    The color SOP with the group set to "@P=0" and class set to Primitive is probably the easiest, though you could use a primitive wrangle if you wanted to get a bit more fancy with the condition.
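    If you do go the wrangle route, something like the following is one way to do it. This is a sketch in VEX (a Primitive Wrangle, i.e. Run Over set to Primitives); the Y=0 condition mirrors the "@P=0"-style group above, and the colors are arbitrary:

```c
// Primitive wrangle sketch: color prims by a condition on position.
// @P in a primitive wrangle evaluates per-primitive; the condition and
// colors here are just illustrative.
if (@P.y >= 0)
    @Cd = {1, 0, 0};   // red above the origin
else
    @Cd = {0, 0, 1};   // blue below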
  4. I would try adding a CPU temp monitoring app (there might be one in the motherboard software package) to see if your temps are getting high (>80C under heavy load). If it is high before it crashes (80-90C), try reseating the heatsink. With thermal paste you only want a tiny smear - you should still be able to see the copper beneath, looking slightly greyish. Then, if the temp is okay, try testing your RAM - you might have gotten a bad stick. You can do this by pulling some RAM sticks and retesting to narrow it down to the bad stick, or use memtest86 (https://www.memtest86.com/). Also, you can try changing stuff in your EFI config (formerly known as BIOS):
      • Disable memory XMP profiles
      • Disable any CPU boost option, or overclocking
    Good luck!
  5. Building a new station

    The days of bad AMD graphics drivers are over. They've been just as good as Nvidia drivers for several years now. Hard to shake that bad rep though.
  6. Why no beer tutorial?

    Probably because the case study ends up derailing the actual work.
  7. OpenGL, Vulkan, and Metal

    Since there's been a lot of talk around the web about graphics APIs this past week with Apple's decision to deprecate OpenGL in macOS Mojave, I thought I'd take this opportunity to discuss the various graphics APIs and address some misconceptions. I'm doing this as someone who's used all versions of OpenGL from 1.0 to 4.4, and not with my SideFX hat on. So I won't be discussing future plans for Houdini, but will instead focus on the APIs themselves.

    OpenGL

    OpenGL has a very long history dating back to the 90s. There have been many versions of it, but the most notable ones are 1.0, 2.1, 3.2, and 4.x. Because of this, it has a reputation for being old and inefficient, which is somewhat true but not the entire story. Certainly GL 1.0-2.1 is old and inefficient, and doesn't map well to modern GPUs. But then in the development of 3.0, a major shift occurred that nearly broke the GL ARB (architecture review board) apart. There was a major move to deprecate much of the "legacy" GL functionality and replace it with modern features - and out of that kerfuffle the OpenGL core and compatibility profiles emerged. The compatibility profile added the new features alongside the old ones, while the core profile removed the old ones completely. The API in the core profile is what people are referring to when they talk about "Modern GL". Houdini adopted modern GL in v12.0 in the 3D Viewport, and stricter core-profile-only support in v14.0 (the remaining UI and other viewers).

    Modern GL implies a lot of different things, but the key ones are: geometry and shader data must be backed by VRAM buffers, shaders are required, and all fixed-function lighting, transformation, and shading is gone. This is good in a lot of ways. Geometry isn't streamed to the GPU in tiny bits anymore and is instead kept on the GPU, the GL "big black box" state machine is greatly reduced, and there's a lot more flexibility in the display of geometry from shaders.
    You can light, transform, and shade the model however you'd like. For example, all the various shading modes in Houdini, primitive picking, visualizers, and markers are all drawn using the same underlying geometry - only the shader changes.

    OpenGL on Windows was actually deprecated decades ago. Microsoft's implementation still ships with Windows, but it's an ancient OpenGL 1.1 version that no one should use. Instead, Nvidia, AMD, and Intel all install their own OpenGL implementations with their drivers (and this extends to CL as well).

    Bottlenecks

    As GPUs got faster, game developers in particular started running into a CPU bottleneck, particularly as the number of draw calls increased. OpenGL draw calls are fast (more so than DirectX), but eventually you get to a point where the driver code prepping the draw becomes significant. More detailed worlds meant not only bigger models and textures, but more of them. So the GPU started to sit idle waiting on draws from the CPU, and that draw load began taking away from useful CPU work, like AI.

    The first big attempt to address this came in the form of direct state access and bindless textures. All resources in OpenGL are given an ID - an integer you can use to identify a resource for modifying it and binding it to the pipeline. To use a texture, you bind this ID to a slot, and the shader refers to this slot through a sampler. As more textures were used and switched within a frame, mapping the ID to its data structure became a more significant load on the driver. Bindless does away with the ID and replaces it with a raw pointer.

    The second was to move more work to the GPU entirely: GLSL compute shaders (GL4.3) were added, along with indirect draw calls. This allows the GPU to do culling (frustum, distance-based, LOD, etc.) with an OpenCL-like compute shader and populate buffers with draw data. The indirect draw calls reference this data, and no data is exchanged between GPU and CPU.
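    To make the indirect-draw idea concrete, here's a rough C sketch (mine, not from the post - it assumes a GL 4.3+ context, a compiled culling compute shader, and a compute local size of 64; cullProgram and drawBuf are placeholder names, and all context/resource creation is omitted):

```c
/* Sketch only: GPU-driven drawing with GL 4.3 indirect draws. */
#include <GL/glcorearb.h>

/* Layout defined by the GL spec for glMultiDrawArraysIndirect. */
typedef struct {
    GLuint count;         /* vertices per draw */
    GLuint instanceCount; /* cull pass writes 0 (culled) or 1+ (visible) */
    GLuint first;
    GLuint baseInstance;
} DrawArraysIndirectCommand;

void draw_scene(GLuint cullProgram, GLuint drawBuf, GLsizei maxDraws)
{
    /* 1. Culling pass: compute shader fills drawBuf with draw commands. */
    glUseProgram(cullProgram);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, drawBuf);
    glDispatchCompute(maxDraws / 64 + 1, 1, 1);
    glMemoryBarrier(GL_COMMAND_BARRIER_BIT);

    /* 2. Draw whatever the cull pass kept - no CPU readback at all. */
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, drawBuf);
    glMultiDrawArraysIndirect(GL_TRIANGLES, 0, maxDraws, 0);
}
```

    The point is that the CPU issues one call regardless of how many objects survive culling; the draw count and parameters live entirely in VRAM.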
    Finally, developers started batching up as much as possible to reduce the number of draw calls and make up for these limitations. Driver developers kept adding more optimizations to their API implementations, sometimes on a per-application basis. But it became increasingly obvious that for realtime display of heavy scenes - and with VR emerging, where a much higher frame rate and resolution are required - the current APIs (GL and DX11) were reaching their limit.

    Mantle, Vulkan, and DX12

    AMD recognized these bottlenecks, including the bottleneck the driver itself posed to GPU rendering, and produced a new graphics API called Mantle. It did away with the notion of a "fat driver" that optimized things for the developer. Instead, it was thin and light, and passed all the optimization work off to the game developer. The theory behind this is that the developer knows exactly what they're trying to do, whereas the driver can only guess. Mantle was eventually passed to Khronos, which develops the OpenGL and CL standards, and from that starting point Vulkan emerged. (DirectX 12 is very similar in theory, so for brevity's sake I'll lump them together here - but note that there are differences.)

    Vulkan requires that the developer be a lot more up-front and hands-on with everything. From allocating large chunks of VRAM and divvying them up among buffers and textures, to saying exactly how a resource will be used at creation time, to describing the rendering pipeline in detail, Vulkan places a lot of responsibility on the developer. Error checking and validation can be entirely removed in shipping products. Even draw calls are completely reworked - no more global state and swapping textures and shaders willy-nilly. Shaders must be wrapped in an object which also contains all their resources for a given draw per framebuffer configuration (blending, AA, framebuffer depths, etc.), and command buffers must be built ahead of time in order to dispatch state changes and draws.
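    As a rough illustration of the build-ahead-of-time model, here's a hedged C sketch (mine, not working Vulkan - device, render pass, pipeline, and buffer creation are all omitted, and cmdBuf, renderPassInfo, pipeline, vbuf, and vertexCount are placeholders):

```c
/* Sketch only: record a reusable Vulkan command buffer once. */
VkCommandBufferBeginInfo beginInfo = {
    .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
};
vkBeginCommandBuffer(cmdBuf, &beginInfo);
vkCmdBeginRenderPass(cmdBuf, &renderPassInfo, VK_SUBPASS_CONTENTS_INLINE);

/* The pipeline object bakes in shaders, blending, AA, and framebuffer
 * formats up front - none of it can be swapped mid-stream. */
vkCmdBindPipeline(cmdBuf, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
VkDeviceSize offset = 0;
vkCmdBindVertexBuffers(cmdBuf, 0, 1, &vbuf, &offset);
vkCmdDraw(cmdBuf, vertexCount, 1, 0, 0);

vkCmdEndRenderPass(cmdBuf);
vkEndCommandBuffer(cmdBuf);
/* Later, per frame: submit cmdBuf with vkQueueSubmit - no re-recording. */
```

    Contrast this with GL, where every one of those state changes would be a live call into the driver each frame.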
    Setup becomes a lot more complicated, but it's also more efficient to thread (though the dev is completely responsible for synchronizing everything, from object creation and deletion to worker and render threads). Vulkan also requires that all shaders be precompiled to a binary format, which is better for detecting shader errors before the app gets out the door, but also makes generating them on the fly more challenging. In short, it's a handful and can be rather overwhelming.

    Finally, it's worth noting that Vulkan is not intended as a replacement for OpenGL; Khronos has stated that since its release. Vulkan is designed to handle applications where OpenGL falls short. A very large portion of graphics applications out there don't actually need this level of optimization. My intent here isn't to discourage people from using Vulkan, just to say that it's not always needed, and it's not a magic bullet that solves all your performance problems.

    Apple and OpenGL

    When OSX was released, Apple adopted OpenGL as its graphics API. OpenGL was behind most of its core foundation libraries, and as such Apple maintained more control over OpenGL than Windows or Linux. Because of this, graphics vendors did not install their own OpenGL implementations as they did for Windows or Linux. Apple created the OpenGL frontend, and driver developers created the backend. This was around the time of the release of Windows Vista and its huge number of driver-related graphics crashes, so in retrospect the decision makes a lot of sense, though that situation has been largely fixed in the years since.

    Initially Apple had support for OpenGL 2.1. This had some of the features of Modern GL, such as shaders and buffers, but it lacked other features like uniform buffers and geometry shaders. While Windows and Linux users enjoyed OpenGL 3.x and eventually 4.0, Mac developers were stuck with a not-quite-there-yet version of OpenGL.
    Around 2012, Apple addressed this situation and released their OpenGL 3.2 implementation... but with a bit of a twist. Nvidia's and AMD's OpenGL implementations on Windows and Linux supported the compatibility profile. When Apple released their GL3.2 implementation, it was core profile only, and that put some developers in a tricky situation: completely purge all deprecated features and adopt GL3.2, or remain on GL2.1. The problem is that some deprecated features were actually still useful in the CAD/DCC universe, such as polygons, wide lines, and stippled lines/faces. So instead of the gradual upgrading devs could do on the other platforms, it became an all-or-nothing affair, and this likely slowed adoption of the GL3.2 profile (pure conjecture on my part). It may have also contributed to the general stability issues with GL3.2 (again, pure conjecture).

    Performance was another issue. Perhaps because of the division of responsibility between the GPU maker's driver developers and the OpenGL devs at Apple, or perhaps because driver developers elsewhere added specific optimizations for their products, OpenGL performance on macOS was never quite as good as on other platforms. Whatever the reason, it became a bit of a sore point over the years, with a few game developers abandoning the platform altogether. These problems likely prompted Apple to look at an alternate solution - Metal.

    Eventually Apple added more GL features up to the core GL4.1 level, and that is where things sat until their announcement of GL deprecation this week. This is unfortunate for a variety of reasons - versions of OpenGL above 4.1 have quite a few features that address performance for modern GPUs and portability, and OpenGL is currently the only cross-platform API, since Apple has not adopted Vulkan (a third-party MoltenVK library exists that layers Vulkan on Metal, but it is currently a subset of Vulkan).
    Enter Metal

    Metal emerged around the time of Mantle, and before Khronos had begun work on Vulkan. It falls somewhere in between OpenGL and Vulkan - more suitable for current GPUs, but without the extremely low-level API. It has compute capability and most of the features that GL does, with some of the philosophy of Vulkan. Its major issues for developers are similar to those of DirectX: it's platform specific, and it has its own shading language.

    If you're working entirely within the Apple ecosystem, you're probably good to go - convert your GL-ES or GL app, and then carry on. If you're cross platform, you've got a bit of a dilemma. You can continue on business as usual with OpenGL, fully expecting that it will remain as-is and might be removed at some point in the future, possibly waiting until a GL-on-top-of-Metal API comes along or Apple allows driver developers to install their own OpenGL like Microsoft does. You can implement a Metal interface specific to macOS, port all your shaders to the Metal SL, and maintain both indefinitely (Houdini has about 1200). Or you can drop the platform entirely. None of those seem like very satisfactory solutions.

    I can't say the deprecation comes as much of a surprise, with Metal development ongoing and GL development stalling on the Mac. It seems like GL was deprecated years ago and this is just the formal announcement. One thing missing from the announcement was a timeframe for when OpenGL support will end (or whether it will end at all). It does seem like Apple is herding everyone toward Metal, though how long that might take is anyone's guess.

    And there you have it, the state of graphics APIs in 2018 - from a near convergence of DX11 and GL4 a few short years ago to a small explosion of APIs. Never a dull moment in the graphics world.
  8. Nvidia driver 396.24

    hgpuinfo -g will also show you the graphics info Houdini normally prints to the Help > About Houdini details dialog.
  9. what is multithreaded?

    Generally speaking, things which are threaded:
      • Anything written in VEX (wrangles, shaders)
      • Anything written in OpenCL
      • Mantra
      • Many sim solvers, especially for volumes and particles
      • COPs

    Things that aren't:
      • GL rendering, UI
      • Expression and Python evaluation

    Many nodes are threaded, but this varies on a node-by-node basis.
  10. blocky edges in viewport

    Because the color is assigned at the vertices of the quads and interpolated within the quad, you're essentially getting pixelation, as if you'd done that operation on a 100x100 image and zoomed way in. You can improve the interpolation by increasing the density of the quads.
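    A toy Python sketch of what's happening (mine, not Houdini code - color_at is a stand-in for whatever attribute drives Cd): sampling a smooth function only at quad corners and bilinearly interpolating inside each quad is the same math as zooming into a low-res image, and the error shrinks as the grid gets denser.

```python
# Sketch: bilinear interpolation of a "color" over a quad grid.
def color_at(x, y):
    # Hypothetical true color function, e.g. driven by an attribute.
    return (x * x + y * y) ** 0.5

def interpolated(x, y, density):
    """Sample color_at only at grid vertices spaced 1/density apart,
    then bilinearly interpolate inside the containing quad - which is
    what the viewport does with per-point Cd."""
    step = 1.0 / density
    x0, y0 = (x // step) * step, (y // step) * step
    tx, ty = (x - x0) / step, (y - y0) / step
    c00 = color_at(x0, y0)
    c10 = color_at(x0 + step, y0)
    c01 = color_at(x0, y0 + step)
    c11 = color_at(x0 + step, y0 + step)
    top = c00 * (1 - tx) + c10 * tx
    bot = c01 * (1 - tx) + c11 * tx
    return top * (1 - ty) + bot * ty

# The interpolation error drops as the grid density increases:
err_coarse = abs(interpolated(0.35, 0.55, 10) - color_at(0.35, 0.55))
err_dense = abs(interpolated(0.35, 0.55, 100) - color_at(0.35, 0.55))
```

    Doubling the grid density roughly quarters the error for smooth functions, which is why subdividing the quads smooths out the blockiness.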
  11. Rant about parameter expression syntax

    The backticks remove the ambiguity of whether the string field represents a true string or an expression. There's no ambiguity in float and int fields because all the characters must be numeric (or numeric related). If you're not a fan of the backticks, you can key the string parameter, then toggle to expression mode by LMB clicking on the parm label, then entering the expression. Keying and switching to expression mode removes that ambiguity.
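    For example (a sketch using the HScript padzero() function and the current-frame variable $F), only the backticked portion of a string parameter is evaluated as an expression; everything else is literal text:

```
frame_`padzero(4, $F)`.exr
```

    which would evaluate to something like frame_0012.exr at frame 12.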
  12. Create Heightfield from Input Geometry

    You can also use the Heightfield File SOP to load heightfield images directly, if that's the format your source is in.
  13. Which Nvidia driver are people running?

    I've been using 381.22 for a very long time with no issues (Quadro driver).
  14. Live updates in Scene View without giving focus

    At the moment there's one too many degrees of separation from the scene viewer to COPs when a texture is sourced from COPs and the dependency isn't being added on the COP network nodes (scene > OBJ > Mat > COPs). It's a known issue which should hopefully be resolved soon. It's live in the sense that anything causing the viewport to redraw will pick up the changes; the COP node changes themselves just aren't causing a redraw.
  15. Have you installed the drivers for it yet?
  16. That error generally means that there is no hardware-accelerated GL present on the system. What platform, GPU and graphics driver are you running?
  17. Best GPU for Render man and vray /stand alone

    PSA: This thread was necro'd from last year.
  18. imported UVs

    You can also use the Vertex Split SOP to split the points where the vertex UV values differ. It'll also optionally promote the vertex UVs to point UVs. Just tossing that out there as another option that might be useful.
  19. Display Options, Viewport tab, Display: View Mask Opacity.
  20. This should be fixed in the latest daily build (394). Only the alembic part of the geometry is cached now, additional attributes are layered on top.
  21. ? Radeon Rx 580 8gb ?

    That's correct, software GL rendering is unsupported on all platforms as most implementations are horribly out of date (GL1.1) or extremely slow (better-off-using-IPR slow).
  22. SciTech - SideFX accepts the Oscar, 2018

    Anyone who knows Kim will tell you that's not an uncommon expression of his - which is pretty great in itself
  23. Time Machine compositing node example?

    You can use animated sequences in either - but the current frame of the second sequence is the image that's used as the frame selection for images in the first.
  24. Time Machine compositing node example?

    What Time Machine does is to select a frame from the first input's sequence based on the second input's image, on a per-pixel basis. Black maps to 0, White maps to +10 frames (by default), relative to the current frame. The easiest way to see the effect is by plugging an animation into the first input, and a ramp into the second input. Smaller values, like -2 and +2, tend to give more interesting results, especially when noise is used as the second input.
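    A minimal Python sketch of that per-pixel selection (mine, not the actual COP implementation - the sequence is modeled as a dict of frame → 2D pixel lists, and the black/white offsets mirror the 0 and +10 defaults described above):

```python
# Sketch: per-pixel frame selection, Time Machine style.
def select_frame(pixel_value, current_frame, black_offset=0, white_offset=10):
    """Map a control-image pixel (0..1) to a source frame number:
    black -> current_frame + black_offset, white -> current_frame + white_offset."""
    offset = black_offset + pixel_value * (white_offset - black_offset)
    return current_frame + round(offset)

def time_machine(sequence, control_image, current_frame):
    """For each pixel, pick that pixel from the frame the control image
    selects. 'sequence' is a dict of frame -> 2D list of pixel values."""
    h, w = len(control_image), len(control_image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            f = select_frame(control_image[y][x], current_frame)
            f = max(min(f, max(sequence)), min(sequence))  # clamp to sequence
            out[y][x] = sequence[f][y][x]
    return out

# 2x2 example: frames 0..10 each filled with their frame number,
# and a control image acting like a ramp from black to white.
seq = {f: [[float(f)] * 2 for _ in range(2)] for f in range(11)}
ctrl = [[0.0, 0.5], [1.0, 0.3]]
result = time_machine(seq, ctrl, current_frame=0)
```

    With the ramp control image, each output pixel comes from a different frame - which is exactly the smearing-through-time effect, and why noise in the second input gives such interesting results.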
  25. Please allow me...

    Discussions are fine, as are opposing views. But once things start getting heated and ad hominem attacks start appearing, most often both sides will dig in, and the discussion rarely goes anywhere useful after that. It's best just to shut down the discussion to reset it - often that results in heads cooling off, and people can try again if they like.