
symek

Members
  • Content count

    1,975
  • Donations

    100.00 CAD 
  • Joined

  • Last visited

  • Days Won

    74

symek last won the day on November 14 2020

symek had the most liked content!

Community Reputation

376 Excellent

7 Followers

About symek

  • Rank
    Grand Master
  • Birthday 09/26/1975

Contact Methods

  • Skype
    szymon.kapeniak

Personal Information

  • Name
    Szymon
  • Location
    Waw/Pol

Recent Profile Visitors

24,368 profile views
  1. Mantra doesn't render in ACES?

    If all the inputs to Mantra are in ACES, Mantra's output is in ACES as well. If your color parms and textures are correctly adjusted, you're fine.
  2. Sorry, I hadn't noticed that before. You've probably solved it already, but for the sake of completeness: it isn't typically necessary, because the current geometry is available in a shader as geometry attributes (via parameter binding), but generally, yes, volumesample() will work in a shader context with op:/path/to/geometry as long as the geometry is present in the IFD file (which is the case when /obj/object has its display flag active). You can export an additional volume to the IFD and clear its renderable parm to make it invisible, but people usually dump volumes to disk to keep IFD files small.
  3. I don't think the Linux distribution has anything to do with that. More likely it's hardware <--> drivers <--> Houdini interoperability. It's really pointless to compare a studio's computer on CentOS with a private Win10 machine, unless they have exactly the same hardware and deal with the same files (which is rarely the case). It's more likely that your RTX 2080 Ti with recent drivers on Windows 10 works well with a specific Houdini build's viewport code while displaying not-so-many textures in test scenes, while the studio computer, flooded with 8K/32-bit textures, fails miserably after the Nvidia driver refuses to allocate a buffer due to lack of VRAM, due to Chrome's YouTube allocating more VRAM, due to switching to a higher resolution, etc., and Houdini can't do anything other than NOT display the texture. Aside from that, yes, glitches are annoying. The best way to deal with them is to report a bug with as many specifics as possible.
  4. Note this: basically, geometry is accessible in Mantra as long as it was exported to the IFD file as a renderable object. In some cases Houdini can even do it for you (as in the case of textures from COPs). So, I didn't say you can't use volumesample(); I said you can't use OP_Director to bake geometry from nodes present in Houdini, neither in the current frame nor in previous ones. Mantra doesn't have access to Houdini nodes. It sees static objects (GU_Detail) as exported to the IFD file, named by their Houdini paths. To access arbitrary geometry, you would have to save it to disk, or... use a Houdini Engine render-time procedural to load loads of geometry at render time (inside your HDA) and do your trick there.
  5. OK, but you realize that in a shader context you won't have access to a node's geometry (the entire OP_Director business doesn't exist in either Mantra or Karma)?
  6. I'm not sure what you're trying to achieve, but whatever it is, you seem to have chosen the wrong path. VEX is a high-performance streaming-instruction language. All the data it operates on should be static, monotone arrays of numbers it can slice up and process concurrently. Your code is the equivalent of embedding a web browser in a GLSL shader ("my shader could access a texture from an http server! let's do it! LOL!"); you can imagine such a thing doesn't make any sense. Additionally, you're asking VEX to access Houdini's nodes concurrently, which requires locking, so you end up with high contention between threads (they mostly wait), and the whole function's cost blows up over time. I'm guessing it's slower than Python. I have a sneaking suspicion that the thing you're trying to do can be accomplished without extending VEX (if accumulating attribute values over time is what you're after), but if you insist on using C++, this part:

     context.setTime(CHgetTimeFromFrame(i));
     GU_DetailHandle gd_handle = surfaceNode->getCookedGeoHandle(context);

     should be inside the VEX_VexOpInit callback, which should do all the preliminary work only once, before the actual computation takes place. Note that afaik VEX by itself doesn't guarantee a single execution of that callback; it should be guarded with proper atomic primitives by you. Again, I highly doubt you need this extension, but whatever your feelings about it, it definitely shouldn't access external nodes concurrently and recursively from within a VEX extension.
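The "do the expensive setup exactly once, properly guarded" idea above can be sketched in Python terms (hypothetical names; the real thing would live in a C++ VEX_VexOpInit callback, guarded with atomics):

```python
import threading

_init_lock = threading.Lock()
_cooked_geo = None  # hypothetical stand-in for a cooked geometry handle

def get_geo_once():
    """Cook the geometry exactly once, no matter how many threads ask."""
    global _cooked_geo
    if _cooked_geo is None:            # fast path: already initialized
        with _init_lock:               # slow path: serialize first-time setup
            if _cooked_geo is None:    # re-check under the lock
                _cooked_geo = {"points": list(range(4))}  # stand-in for the cook
    return _cooked_geo

results = []
threads = [threading.Thread(target=lambda: results.append(get_geo_once()))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# every thread saw the same, single instance
assert all(r is results[0] for r in results)
```

The point is only the shape of the guard (check, lock, re-check): the expensive work runs once, and every later caller gets the cached result instead of contending for it.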
  7. hython: setting thread count?

    The hou.setMaxThreads() HOM function does that.
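A minimal sketch of how that might be used from a hython session; the try/except is only so the snippet also runs outside Houdini, where the hou module isn't importable, and the "half the cores" policy is just an example:

```python
import os

# pick a cap, e.g. leave half the cores free (example policy only)
threads = max(1, (os.cpu_count() or 2) // 2)

try:
    import hou                  # only importable inside hython / Houdini
    hou.setMaxThreads(threads)  # positive value caps Houdini's worker threads
except ImportError:
    pass                        # running outside Houdini: nothing to set
```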
  8. gRPC with Houdini HDK advice?

    Fractals in Processing! A very nice project, and definitely something that should be ported over to Houdini. The neatest version (imho) would use the OpenCL SOP, and distributing over the network via DOPs is still applicable. Have fun! But hurry up, because some folks here might be interested as well
  9. gRPC with Houdini HDK advice?

    Can't you just port the Java snippet to VEX or Python/C++ and free yourself from the burden of inter-process communication? I know tinkering is fun, but it can be very frustrating when it involves real-time... anything. If that's not possible:

    1) I would start by asking whether the chunks a single process will be occupied with have to communicate with each other. Do they have boundary conditions set by their in-progress neighbors? Because if not, you can avoid distributed computing altogether by slashing the Java array into chunks and piping them down to many independent Houdini sessions (concurrent, not parallel, which is easier). If that's not possible:

    2) I would consider trying Houdini's distributed simulation mechanics first. If you set up a distributed FLIP sim, you'll notice that most of the tricky business is handled by two nodes called Gas Net (Field) Slice Exchange, and afaik they will happily exchange any given point attribute or volume between the network's attached chunks. They don't ask what sort of computation is taking place, so you could distribute any VEX snippet or C++ operating on points/volumes over nodes (by just distributing something like a POP sim). If that's not possible:

    3) I would look into Houdini Engine, which is built around Apache Thrift. In the easiest scenario you can run a Python process (with the hapi module) and connect it with any number of other Python sessions, and those will have an entire Houdini running inside them. If you insist on using Java as a data server, you could use Thrift on its side to communicate with Houdini Engine. Using Thrift just spares you time, as it's already in Houdini. If that's not possible:

    4) I would consider using one of Houdini's headers like UT_NetMessage or UT_NetStream for raw data transfer. You can even see an example of how the former is used in the VDB slice exchange. In such a case you would have to marshal the data for network transfer; there are a number of options out there, including FlatBuffers, which is funny, because... FlatBuffers has built-in support for gRPC, so you get both at once. But gRPC, as the name implies, was built for Remote Procedure Calls, not for transferring loads of data, so it might not perform as well as you wish with many gigabytes of it. Low-level networking in a nice package (like the mentioned HDK headers) may work better (or not)... Also, something designed entirely to deal with terabytes over the wire, like Arrow, might be waiting for your call. Hope this helps, skk. ps. Frankly speaking, if you were really badass, you would stick to OpenMPI
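Option 1) above, slicing the array into independent chunks so each can go to its own Houdini session, can be sketched in pure Python (the chunking only makes sense when no chunk needs its neighbors' in-progress results):

```python
# split a flat array into n near-equal chunks; each chunk could then be piped
# to its own independent Houdini session (concurrent, not parallel)
def chunk(data, n):
    size, rem = divmod(len(data), n)
    out, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < rem else 0)  # spread the remainder
        out.append(data[start:end])
        start = end
    return out

data = list(range(10))
chunks = chunk(data, 3)
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Each worker processes its slice and the results are simply concatenated afterwards; no slice-exchange machinery is needed for this embarrassingly parallel case.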
  10. The change in your shader's behavior compared to Point VOPs comes from a single fact; stack it on top of everything you already know, and all of this will make sense again. The fact is: in Mantra (as in probably any other renderer), shading happens in camera space, so the world origin is at the camera position.
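A toy Python sketch of that fact, using a translation-only camera-to-world transform and made-up numbers: the origin of camera space maps exactly onto the camera's world position.

```python
# hypothetical camera position in world space (illustrative numbers only)
cam_world_pos = (5.0, 2.0, -3.0)

def cam_to_world(p):
    """Map a camera-space point to world space (translation-only transform)."""
    return tuple(a + b for a, b in zip(p, cam_world_pos))

P = (0.0, 0.0, 0.0)     # the camera-space origin
print(cam_to_world(P))  # (5.0, 2.0, -3.0): the camera's own world position
```

So a shader that treats its positions as world-space values will see everything shifted relative to what a Point VOP (operating in true world/object space) reports.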
  11. I really have no idea. After Scientific Linux's collapse there is only Oracle, which isn't exactly the alternative to IBM you would dream of. There is still time left; maybe they'll buy RH support for 2-3 years and start migrating either to Debian or to some CentOS fork that this situation may effectively produce?
  12. You could try this: hconfig -h HOUDINI_TEXT_CONSOLE. Also, have you tried the Windows Subsystem for Linux? It seems to be a good deal for GNU folks trapped in the Windows wonderland. I don't have any experience with it, though.
  13. Sure. Link to help. Basically:

      hython script.py [optional params]

      where script.py looks like:

      import hou
      hou.hipFile.load("myscene.hip")
      ...
  14. OK then. This is probably a good idea. Mykola's work on GitHub is probably 80% of the job anyway. Hopefully someone can look into it.