Posts posted by schwungsau

  1. I've started testing Houdini 18 and Arnold 6. The first test was simple spline rendering: 250,000 splines instanced 25 times, loading a 140 MB Alembic file.

    Rendered on a 6-core Xeon CPU and an Nvidia Quadro RTX 5000 (Windows 10 Pro).


    Startup for Arnold GPU is slow. It seems to render faster, but clearing up the final image takes forever, or it just drops/crashes; hard to tell on the GPU. The CPU is quite fast, but still much slower than the GPU would be, if the GPU ever finished. (Adaptive sampling was on.)

    As soon as Arnold finishes rendering the scene, it stops and no longer refreshes on parameter changes. So far I am not impressed with Arnold GPU rendering.


    Here is the same scene with Arnold CPU and only direct lighting (on my MacBook).

    Some tests with Arnold GPU: it performed much better with just direct lighting.



  2. No, it's more a technology preview than a beta. Some features are missing or will change. In its current state, Karma is 5 times slower than Arnold or Renderman on the CPU.
    For ArchViz, Vray and Corona are the main players. If you need caustics, you use a specialized ArchViz renderer like Indigo Renderer or Thea Render; they have GPU engines plus a super-fast engine with caustics. Indigo Renderer, with its CPU bidirectional path tracing with MLT, beats any CPU or GPU renderer in terms of caustics.

    If you want to stick with Houdini, Octane is a great option: it's easy, and with spectral rendering (like Indigo) it's straightforward to set up physically correct materials for ArchViz.
    The new Renderman is a badass Swiss Army knife. It has bidirectional path tracing, path guiding, and manifold sampling for caustics, which fits ArchViz well (indoor and outdoor), and it has the best interactive Houdini plug-in by far.

    --> best fit for ArchViz rendering: Renderman, Indigo Renderer, or Corona.

  3. The XPS graphics died.... After the Dell service guy came and replaced the motherboard, it got worse: it runs hot after 20 minutes and shuts itself down --> I just threw it out and switched to Apple. No issues or bluescreens since --> never looked back... I've heard many times that people burn out CPUs/GPUs by letting them run all day long under full stress. That's why I never buy a machine with the highest GHz: well-cooled machines or Xeons. That's why I'll pay the extra money; it saves me the time of fixing/replacing stuff all the time.

  4. I think you're better off with Blender; there is a lot of ArchViz activity going on with it (poliigon.com / Blender Guru, for example). You have all the render engines you need (Cycles, Eevee, Vray, Indigo, etc.) and the best modeling tools in the industry for this kind of work.

  5. I had a bad experience with a Dell XPS laptop; that's why I switched to a MacBook. The best part is the Apple service: they will replace your MacBook even if it is 5 years old.

    A friend of mine tried to get a replacement for a Surface because it burned out its CPU; they laughed at him in the Microsoft store. But I know a guy who has been using his Dell Precision Mobile Workstation for years.


  6. There are lots of ways... a simple and fast way:
    in a primitive wrangle after the Add node, use this code:


    int prim_points[];
    float umap;
    // create the point attribute we are going to write
    addattrib(0, "point", "curveu", 0.0);
    prim_points = primpoints(0, @primnum);
    for (int i = 0; i < len(prim_points); i++) {
        // normalized 0..1 position of this point along the curve
        umap = float(i) / (len(prim_points) - 1);
        setattrib(0, "point", "curveu", prim_points[i], 0, umap, "set");
    }

    Then you can add a Color node --> set it to "Ramp from Attribute" with curveu as the attribute.

  7. It's not that easy. Writing code for the GPU needs two things: a host program and a kernel. The host program is usually written in a mid/high-level language (C, Python, etc.). It handles data management, compiles the kernels, etc.: it detects the GPU hardware, compiles and sends the kernels to the GPU, and feeds the kernels with data. In Houdini, the OpenCL wrangle is the host program. The OpenCL kernels (code snippets) can only work on the data in local GPU memory that the host program (the OpenCL wrangle node) sent them. The current OpenCL wrangle in Houdini does not support multiple GPUs. You would have to write your own OpenCL wrangle node via the Houdini API to make that happen, but then you have to dive deep into parallel processing and data management, or the GPUs sit there most of the time waiting for data to be moved around.
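    To make the host/kernel split concrete, here is a minimal sketch in C. It is not Houdini's implementation: the kernel is plain OpenCL C source held as a string (as a real host program would pass to the OpenCL compiler), and because setting up a real OpenCL context needs a driver and a device, the kernel "launch" is simulated on the CPU so the example stays self-contained. The function name `simulate_launch` and the `scale` kernel are illustrative inventions.

    ```c
    #include <stdio.h>
    #include <string.h>

    /* The "kernel": OpenCL C source the host would compile and send to the
       GPU. It scales every element of a buffer, one work-item per element. */
    static const char *kernel_src =
        "__kernel void scale(__global float *buf, float factor) {\n"
        "    int gid = get_global_id(0);\n"
        "    buf[gid] *= factor;\n"
        "}\n";

    /* Host-side responsibilities (what an OpenCL wrangle does for you):
       1. detect a device, 2. compile kernel_src, 3. copy data into GPU
       memory, 4. enqueue the kernel, 5. copy the results back.
       Here the launch is simulated on the CPU: each loop iteration plays
       the role of one GPU work-item. */
    static void simulate_launch(float *buf, int n, float factor) {
        for (int gid = 0; gid < n; gid++)
            buf[gid] *= factor;
    }

    int main(void) {
        float data[4] = {1.0f, 2.0f, 3.0f, 4.0f};  /* host buffer to "upload" */
        printf("kernel source: %zu bytes\n", strlen(kernel_src));
        simulate_launch(data, 4, 10.0f);  /* stand-in for enqueueing the kernel */
        for (int i = 0; i < 4; i++)
            printf("%g ", data[i]);       /* results "downloaded" from the GPU */
        printf("\n");
        return 0;
    }
    ```

    The point of the split is that `buf` only exists on the device while the kernel runs; everything before and after the launch is data movement managed by the host, which is exactly the part that dominates when kernels are small.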

    If you rewrite nodes as OpenCL versions, you need basic routines first. OpenCL is pretty low-level: you have to write functions like noise, pcopen, neighbours, ramps, etc. yourself.

    Based on my latest tests, Houdini does not take full advantage of shared memory, nor does it use multiple GPUs. Pyro and Vellum use only a small part of the GPU. I could get self-written kernels to use 100% of the GPU for simpler operations. I know of only a handful of software packages that run simulations entirely on GPUs. I am not sure, but I think only CUDA can take advantage of NVLink right now.