Everything posted by malexander

  1. EXR is a linear-space format. No conversions are done when writing from mantra (which works in linear space natively) to EXR.
  2. COPs has a standard color scheme, which was implemented with the original design:
       • Generators are green
       • VEX operators are purple
       • Operators which don't affect image data are beige (timing operations, control flow, etc.)
       • Collapsible Color Corrections (a precursor of Compile SOP networks) are blue
     The last three had special meanings with regards to cooking, which is why they were colored by default. VEX operators could bind planes named the same as VEX function parameters, timing operators didn't take long to cook because they didn't modify image data, and Color Correction nodes would be collapsed into a single operation if there were no other non-color-correction operations between them, saving memory and improving performance. But other than that, I don't believe there are any standard color schemes in Houdini.
  3. Crowd display unfortunately doesn't work on Mac because the GLSL shader just... doesn't draw anything (?!). The same shader works on other platforms. If you switch to wireframe you can at least see the rigs.
  4. It depends on the addressing mode of OpenCL - Nvidia took a really long time to add 64-bit addressing. If "Device Address Bits" is 32, it can only access 2GB. It needs to be 64 in order to access the full memory on the card. This was fixed a couple of years ago in the Nvidia proprietary drivers, though I'm not sure what their state on MacOS is.
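     If you want to verify this on your own machine, here's a minimal sketch using the pyopencl bindings (my choice of library, not anything Houdini ships; any OpenCL binding exposes the same CL_DEVICE_ADDRESS_BITS query):

         import pyopencl as cl  # third-party OpenCL bindings

         # Report the address width and total memory of every OpenCL device.
         # 32-bit addressing means the device can't reach the full memory of
         # a large card, no matter how much is physically installed.
         for platform in cl.get_platforms():
             for device in platform.get_devices():
                 print("%s: %d-bit addressing, %.1f GB global memory" % (
                     device.name,
                     device.address_bits,                  # CL_DEVICE_ADDRESS_BITS
                     device.global_mem_size / 2.0 ** 30))  # CL_DEVICE_GLOBAL_MEM_SIZE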
  5. It's probably good to get into the habit of thinking that anything you do at work, or with work assets, is the property of your workplace. If you keep work and personal materials separate, you'll never need to worry about losing your personal stuff. And more importantly, you won't have to worry about being accused of any sort of theft, leaks, or using work assets for personal gain. Best to play it safe and keep them well separated.
  6. Mac is in its own little world: you get whatever driver that particular OS version gives you. While some vendors like Nvidia allow you to download the back-end driver part, the OpenGL interface remains the same regardless of vendor or driver. So yes, there are no consumer or pro driver variants, and Apple in general seems to use consumer-grade parts (though sometimes semi-custom). I guess there's no reason to offer pro parts when the GL interface doesn't allow access to pro features like quad-buffer stereo, genlock, etc. Not sure if 10b framebuffer support is the exception, as you'd think Apple would want to push their displays if they support 10b RGB color. I had a hard time even finding the bit depths of the Apple monitors listed, which might suggest that they are currently 8b.
  7. I'm using AMD FirePro driver version 16.50.2001 and it's quite stable (w/ a W8000; hgpuinfo reports 21.19.234, so who knows what the actual version is). There is a bug in later versions of the drivers (the May and June releases of this year), which AMD has on their radar, that breaks vertex selection in the UV viewport. Not sure if this would also be in the driver for the Vega-based Radeon Boggle-edition, but there is a good chance it may be, as that would require a very current driver. I think that's the only known issue though.
  8. What marty says is good advice - if the driver you have is working for you, and you don't have any particular need to upgrade it, leave it be. OpenGL has effectively tapped out at GL4.5, and the only things being added are vendor-specific extensions, which we rarely use. Lately Nvidia has had some bad driver releases, especially on the consumer side, as they alter the driver to support the current AAA game of the month.
  9. It's due to the compositing of the color-managed beauty pass (in which the volume is drawn) and the unmanaged pass (where the grid is drawn). Any user geometry will appear correctly blended, it's just the grid that doesn't play nice. Out of all the solutions we had on hand we figured this one was the most acceptable in terms of speed/quality balance.
  10. It depends on whether it's a bug in our Exporter or their Importer. I'd submit it to both.
  11. Have you submitted a sample alembic to SideFX as a bug?
  12. That is due to how the grid and beauty pass are composited together. The faint edges of the volume are closer to the camera and block out the grid. They can't be drawn together because the volume is color managed and the grid is not.
  13. It's sort of possible. You can write a HDK scene hook which completely takes over the beauty pass, then renders the scene 5x with 5 different projections to a cubemap. Then you'd take that cubemap as a texture input to a GLSL shader which does the fisheye lookup and writes it to the beauty pass framebuffer. I say "sort of" because the rendered image will be out of sync with everything else in the viewport, such as picking, handles, construction plane, etc. so in my mind it's not really a valid solution (more of a hack). This is probably something that'd be better in the OpenGL ROP or the Flipbook, which is less concerned about those other things.
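     For the fisheye lookup itself, the math is just mapping distance from the image center to a polar angle. A hypothetical Python sketch of the direction computation for an equidistant fisheye (a GLSL version would be structurally the same):

         import math

         def fisheye_direction(u, v, fov_degrees=180.0):
             """Map normalized image coords (u, v in [-1, 1]) to a view-space
             direction for an equidistant fisheye, usable for a cubemap lookup.
             Returns None outside the fisheye circle."""
             r = math.hypot(u, v)
             if r > 1.0:
                 return None                               # outside the image circle
             theta = r * math.radians(fov_degrees) * 0.5   # angle off the view axis
             phi = math.atan2(v, u)                        # angle around the axis
             return (math.sin(theta) * math.cos(phi),
                     math.sin(theta) * math.sin(phi),
                     -math.cos(theta))                     # -Z is forward in GL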
  14. You could be running out of file descriptors ("Too many open files"). On Linux, you can run 'limit' (or ulimit) to see how many files can be open at once. On my system, it's 1024 by default. Which OS are you running?
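     As a quick cross-check from Python, the standard resource module reports the same per-process cap (Linux/macOS only):

         import resource

         # RLIMIT_NOFILE is the per-process limit on open file descriptors.
         soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
         print("open-file limit: soft=%d, hard=%d" % (soft, hard))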
  15. There's also Ce (emission color), Ca (ambient color), and Cs (specular color). Only Cd is supported by the viewport, though. For the principled shader, Cd is considered the base color (PBR rules).
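     For example, here's a minimal sketch of setting Cd from a Python SOP so the viewport picks it up (the red color is just illustrative):

         # Inside a Python SOP: node and geometry come from the current context.
         node = hou.pwd()
         geo = node.geometry()

         # Cd is the only one of these color attributes the viewport reads.
         geo.addAttrib(hou.attribType.Point, "Cd", (1.0, 1.0, 1.0))
         for point in geo.points():
             point.setAttribValue("Cd", (1.0, 0.0, 0.0))  # per-point diffuse color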
  16. The menu item does work for me - what desktop are you using, or is it a custom desk?
  17. The easiest way to do this is to click on the arrows in the middle of the split bar.
  18. No, you pretty much can't do this at the moment. Large angle FOV projections will severely mangle geometry with large triangles, so it'd have to render to a cubemap, then sample from there into a regular 2D image using a lens shader. I don't think that you could accomplish this with the HDK right now. You could render a cubemap with the GL ROP in 16.0 and post-process that, though.
  19. If they ran the test with the CPU-CL driver, I'd expect the 1800X to slightly edge out the 1700X. But even then, it'd be "roughly the same" in that you as a user wouldn't notice the difference unless you were sitting there with a stopwatch.
  20. When you're simming or rendering, you'll notice those extra two cores. If you do that a lot, the €400 will pay for itself in a short fraction of the CPU's lifetime.
  21. On a 32" 4K you could also try Large UI Size, if you find High DPI is too large.
  22. Yep, you could run a bunch of them in 16x mode, and have some lanes left over to access a couple of PCIe-based SSDs for large datasets. This is particularly important as AMD GPUs now have virtual addressing which could target the data on SSDs directly (though I'm unsure if that's currently supported for external SSDs, or just the TB one built into the new Radeon Pro SSG cards). Usually there are a few lanes taken up by ethernet, the chipset, and slow storage as well, so 40 can go really quickly.
  23. Looking at the consumer chips, they have a dual-channel DDR4 interface which is faster than the 4-core Haswell and lower Intel CPUs (~43GB/s vs. 25GB/s) but slower than the newer Skylake+ CPUs (50+GB/s). The quad-channel socket 2011 and Xeon chips leave them in the dust at 75+GB/s. That could be a potential bottleneck for very large sims which require a lot of memory bandwidth. I think this is probably the weak link in the Ryzen design: a 16-thread CPU requires a lot of memory bandwidth, and it could be starved by a dual-channel interface. The server chip doesn't have this limitation, but it also takes a clock-speed hit.
  24. While it's great that AMD is competing again, going for a Zen-based chip is more of a cost decision than a performance one. Hopefully it'll put pressure on Intel to drop prices a bit over the long haul.