Everything posted by flcc

  1. Hi terence - You say "a mask of my subdivided, transformed geo", but in your sketch the white area is on the non-subdivided area. - You say matte: are you after a bitmap mask for some post-processing? - You already have a falloff, so there is probably a solution there, but without taking a look at your file it's difficult to say. A file and some info will help people to help you.
  2. How to adjust the width of a point multiple times

    class += i@ptnum > 23; class += i@ptnum > 87; I like these two lines, I have to remember the trick. I've just converted the rig wrangle into a classic detail wrangle, maybe a more "classical" way (a small sketch follows this post). @konstantin magnus, by the way, why do you use pretransform rather than transform (which seems to work as well)? Classical vex way.hiplc
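    A minimal point-wrangle sketch of that class trick (the thresholds 23 and 87 and the width values are placeholders, not taken from the attached file):

        // Each comparison adds 1 once the point number passes the threshold,
        // so the points end up in class 0, 1 or 2.
        i@class = 0;
        i@class += i@ptnum > 23;
        i@class += i@ptnum > 87;

        // Then pick a width per class (placeholder values).
        float widths[] = {0.05, 0.1, 0.02};
        f@width = widths[i@class];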
  3. Procedural Design

    I would be curious to see the network. It must be very dense, to say the least. Do you have any idea of the total number of nodes, including those inside assets or subnetworks?
  4. How to avoid stepping/banding in dense volumes?

    Maybe you'd better post a file. Stepping when cranking up the density of a volume doesn't seem strange, and I probably don't have enough knowledge (I don't really use Pyro), but from the pictures alone I can't say much.
  5. Just search for "midi" in the documentation; you will find an example doing what you're after under "midi out".
  6. If you use point normals, that's probably the problem. C4D doesn't read primitive or point normals from Alembic. To get your pretty normals into C4D you need to promote those point normals to vertex normals (a small sketch follows this post).
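    A minimal vertex-wrangle sketch of that promotion, run before the Alembic export (assumptions: the normals live in the standard point attribute N; an Attribute Promote SOP set to N, Point to Vertex, does the same thing in one node):

        // Copy each point's normal onto the vertices that reference it.
        vector n = point(0, "N", @ptnum);        // @ptnum is the vertex's point
        setvertexattrib(0, "N", -1, @vtxnum, n); // -1 means @vtxnum is a linear vertex index
        // You may want to delete the original point N afterwards (Attribute Delete).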
  7. Totally agree. I have nothing to add to what Sebkaine said, it's very good advice.
  8. SSS Noise Redshift

    SSS can be tricky to get clean. Have you tried the point-based mode? Depending on what you're looking for, that might be enough: much faster and less noisy, but yes, a bit less nice. It looks like you're using an HDRI for the lighting; I've found that SSS doesn't like HDRIs very much. Just try a physical sky and you'll see a clear difference. Sorry, maybe someone else will have more useful advice. Personally, for these difficult cases I use a denoiser, even if it means a bit of compositing afterwards.
  9. Houdini fixed Point Count

    When you import "as particle geometry", C4D internally uses Thinking Particles, which is cumbersome. Indeed, when importing particles, even with a constant particle count, the particle ids change over time; this is a Thinking Particles related problem, and I've had a pretty hard time wrapping my head around it. But if I understand correctly you want to use the particles as a scatter source, so why export a sequence? For this kind of workflow I prefer to import the particles "as polygon Object". The imported object has no polygons, but the points are there, and it's really easy to use MoGraph stuff with that.
  10. It doesn't work like that. I should have added that you can't always use this name as an attribute. You can use attributes that have been previously defined in the parameter fields, but not all parameters are recognised as attributes. Some nodes recognise a number of attributes, notably predefined attributes like pscale, orient, v, etc. (a small sketch follows this post), and some nodes recognise some of their parameters as attributes, such as the Vellum SOP nodes, but explaining the attribute mechanism is beyond the scope of a single post, at least for me, since English is not my strength. I suggest you take a look here: https://www.tokeru.com/cgwiki/?title=Houdini
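    As a small illustration of those predefined attributes, a minimal point-wrangle sketch; a Copy to Points SOP downstream picks these up automatically (the values are arbitrary):

        // Per-point uniform scale, recognised by Copy to Points, instancing, etc.
        f@pscale = fit01(rand(@ptnum), 0.5, 1.5);
        // Orientation as a quaternion, also recognised by Copy to Points.
        p@orient = quaternion(radians(45), {0, 1, 0});
        // Velocity, used for motion blur (and for orientation when orient is absent).
        v@v = {0, 1, 0};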
  11. In the parameter pane, when you hover over the name of a parameter a tooltip appears. There you can see the real name of the parameter, which is the attribute name. Sometimes it's the same as the parameter label, like height, and sometimes it's different.
  12. mega scan renders always look bad

    Or you can just plug your AO texture into the diffuse weight input.
  13. Disgusting thing

    It's very easy to make disgusting things in Houdini. The setup is inspired by this tutorial from Simon Fiedler: https://www.youtube.com/watch?v=yqM_3goH4J8 The file is available too. I've just added a VDB from Particles to make the skin, and some totally non-procedural cheating to keep the "tentacle" consistent on the left. The final splash is a bit cheap, but hey, I needed to end this awful thing. And I had fun making the "sound design".
  14. simple vex code for Conway's game of life

    You need to use a Solver SOP. Houdini doesn't keep "global" variables over time; that's why you need a solver. I've put your wrangle in a solver and it does something, but more like an infection; I don't know if that's the behaviour you expect. I also gave it a try some time ago, the file is attached. It's not perfect and stops working at frame 55, no idea why. There is a very good explanation of the Solver SOP here: https://www.tokeru.com/cgwiki/index.php?title=The_solver_sop If you're new to Houdini this site is a must-see. It also has a simple game of life example: https://www.tokeru.com/cgwiki/index.php?title=HoudiniVex#Solver_sop_and_wrangles_for_simulation A minimal wrangle sketch follows this post. vex_gameOfLife + solver.hiplc GameOfLife_F.hiplc
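    A minimal sketch of the idea: a point wrangle inside a Solver SOP, running over the points of the Prev_Frame input (assumptions: a planar Grid SOP whose points carry an integer attribute i@alive set to 0 or 1, with a known spacing between points; this is not the code from the attached files):

        // Grid spacing, set to match the Grid SOP.
        float cellsize = chf("cellsize");

        // Count live neighbours among the (up to) 8 surrounding cells.
        // point() reads the unmodified previous frame, so the update is synchronous.
        int alive_nbs = 0;
        int nbs[] = nearpoints(0, @P, cellsize * 1.5, 9); // includes this point
        foreach (int pt; nbs)
        {
            if (pt == @ptnum) continue;
            int state = point(0, "alive", pt);
            alive_nbs += state;
        }

        // Conway's rules: a live cell survives with 2 or 3 neighbours,
        // a dead cell is born with exactly 3.
        if (i@alive == 1)
            i@alive = (alive_nbs == 2 || alive_nbs == 3);
        else
            i@alive = (alive_nbs == 3);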
  15. Yes, for me at least the measurements are taken with a laser, and the reconstructions are made at the correct scale. But sometimes that's not enough. Sometimes the measurements are just difficult to take without scaffolding and you have to trust the measurements you are given. For example, in the case of projection, in most cases the technical guys give you the position where the projector will be installed. But in practice the projectors are installed a few days before the show, and when it comes to installing things that can weigh more than 50 kg at 10 or 20 metres high, well, they are not necessarily placed exactly where you were told, etc. That's why I use the little method I described, which works perfectly... and I trust only my eye.
  16. Shift+F broken

    For me it's also SPACE-G and SPACE-F. Actually Shift+G or Shift+F never worked, no matter which version of Houdini.
  17. Very slow redshift render

    I tried your second file (the one with the two ROPs) and the atom one: ROP1 30s, ROP2 7s, atom file 8s. The parameter that slows down the render is the adaptive error threshold. In ROP1 it is set to 0.001, which is a very demanding setting; put it back to 0.01 (the default) and the render immediately drops to 7s, and you don't notice any visual difference. As I understand it you use version 2.6.41. This can be a problem, because it's a pretty old version and Redshift versions are closely tied to Houdini versions; by the way, this parameter has changed since v3. Honestly, you don't have too many parameters to tweak in the Redshift ROP; the default settings are fine for most cases. Just the "Max Samples": 64, 128 or 256 can be fine. If you have to go higher you have to find what causes the noise, frequently reflections or refractions. At that point you can look at the sample settings in the materials. But I highly recommend you check the documentation about sampling, it's really well explained.
  18. Very slow redshift render

    There is clearly something wrong. You'd probably be better off asking on the Redshift forum, as this is not a Houdini problem, or even a Redshift one per se, but rather a configuration, compatibility or hardware issue; I can't really say. Maybe by reading the Redshift log they can find some clue.
  19. Very slow redshift render

    Strange, here it took 7s with 2x 1080 Ti. According to OctaneBench a 1080 Ti has a score of 185 and a 1050 Ti 48, so 2x185/48 ≈ 7.7. Theoretically your render should take about 7s x 7.7 ≈ 54s. It is obvious that there is something wrong. Hardware? Driver? Something else?
  20. I have worked on such things before (not with a rounded angle, but that doesn't seem to be the problem). In my opinion there is a height problem in your reconstruction. It's always the same thing: it is difficult to make the virtual and the real coincide at large proportions. I guess you can't test live. My advice would be to make a movie where your image is still but where the "uv-project camera" is animated and moves up in height, in order to determine when, once projected in real life, the perspective becomes correct. I know it's the geometry that should move, but this is easier, and you can transfer the difference you find back to the model afterwards. In any case, this is how I proceed when there is a deformation problem and I can't act live. Even if you're not physically present, the guy who runs the test can tell you "Hey, at 10s it's OK".
  21. Well done, I like this look, both curved and well shaped. It reminds me a bit of that illustrator from the 70s, Chris Foss.
  22. A simpler and more efficient solution, inspired by Konstantin's last post (one more time :)), but this time transferring distance to displacement (a rough sketch of the idea follows this post). Embrace F3.hiplc
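    A rough point-wrangle sketch of the distance-to-displacement idea (assumptions: the second input holds the geometry to measure distance to, point normals exist, and the parameter names and ramp are placeholders; this is not the code from the attached file):

        float maxdist = chf("maxdist");
        float d = xyzdist(1, @P);                        // distance to the second input
        float amount = chramp("falloff", fit(d, 0, maxdist, 0, 1));
        @P += @N * amount * chf("scale");                // displace along the normal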
  23. Another try. The previous one sometimes works and sometimes doesn't. Embrace F2.hiplc
  24. And some polyextrude trick? Thanks to librarian for the ivy solver and Konstantin for the isocurve polycut trick. Embrace F.hiplc