
djiki

Members
  • Content count

    83
  • Joined

  • Last visited

  • Days Won

    6

djiki last won the day on November 25

djiki had the most liked content!

Community Reputation

25 Excellent

About djiki

  • Rank
    Peon

Personal Information

  • Name
    Srdjan Crnjanski
  1. Yes. Houdini, or more precisely Mantra, allows you to do any kind of projection you can imagine using a custom lens shader. I wouldn't even call that projection, because it is much more: you can create your own rays (their origins and directions) for each rendered pixel and do manual ray tracing, not only in a lens shader but in any kind of shader. Of course, you first have to figure out what the "Robinson" table data represents, and then we can help.
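    A minimal CVEX sketch of the idea, assuming the usual Mantra lens shader contract (x and y arrive as screen coordinates in roughly -1..1, and the exported P and I are read back as ray origin and direction); the equirectangular mapping here is just an illustration, not the Robinson projection itself:

        #include <math.h>

        // toy lens shader: turns screen coordinates into longitude/latitude
        cvex mylens(float x = 0;              // screen x, -1..1
                    float y = 0;              // screen y, -1..1
                    export vector P = 0;      // ray origin (camera space)
                    export vector I = 0;      // ray direction (camera space)
                    export int valid = 1)
        {
            float lon = x * PI;               // -180..180 degrees
            float lat = y * PI * 0.5;         // -90..90 degrees
            P = 0;                            // every ray starts at the camera
            I = set(sin(lon) * cos(lat), sin(lat), -cos(lon) * cos(lat));
        }

    For a real Robinson lens you would replace the two lines computing lon/lat with an inverse lookup into the Robinson table.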
  2. Connect your final SOP node to all the material nodes you have, then use a switch node and connect all the outputs of those material nodes into its inputs. In the expression of the switch node you can use the pointinstance() function to tell it which input (material) to use.
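    A hedged sketch of what that selection expression might look like, assuming an integer point attribute mat_id on the instanced points and a hypothetical SOP path; instancepoint() is the HScript function that returns the number of the point currently being instanced (referred to above as pointinstance()):

        point("/obj/instances/OUT", instancepoint(), "mat_id", 0)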
  3. Sometimes the UndoHistory list can help you distinguish a crucial operation from the bunch of totally non-descriptive "parameter changed" and "selection changed" events.
  4. Consider using the EXR file format and writing your custom data not into the RGBA plane but into dedicated channels like Metallic, AO, Emission, Smoothness, etc. EXR can handle any number of custom channels, and every piece of software that can open EXR lets you pick any custom channel and operate on it like you would on any grayscale image. If you really want PNG or TIF, you have to render the files without premultiplication (premultiplication multiplies each of R, G and B by Alpha, and that's not what you want) and store all 4 channels in RGBA. In general, the color plane is not a good way to export custom data (masks are OK) because a gamma curve is applied to its data, the negative range is clamped, etc.
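    A minimal sketch of the shader side, assuming VEX export parameters (each export shows up as an extra EXR channel once a matching image plane is added on the Mantra ROP; the names and values here are just examples):

        // toy surface shader with extra exported channels
        surface custom_aovs(export float metallic = 0;
                            export float ao       = 1;
                            export float emission = 0)
        {
            // constant values purely for illustration; in practice these
            // would be computed or sampled from textures
            metallic = 0.8;
            ao       = 1.0;
            emission = 0.0;
            Cf = {0.18, 0.18, 0.18};   // base color output
        }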
  5. What is "cl.cfg" for

    No. That option is for inter-exchange operations between OpenGL and DirectX. To force an NVIDIA GPU to run OpenCL you have to set the following environment variables: HOUDINI_OCL_VENDOR = NVIDIA Corporation and HOUDINI_OCL_DEVICENUMBER = 0
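    In houdini.env that would look like this (a sketch; the vendor string must match exactly what the driver reports, and the device number picks the GPU when there are several):

        HOUDINI_OCL_VENDOR = "NVIDIA Corporation"
        HOUDINI_OCL_DEVICENUMBER = 0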
  6. Matching Curves

    Yes, the primuv function is for exactly that purpose. I didn't open your scene, but the usual technique goes like this. Suppose you have low-res curves. First generate a U coordinate attribute on them (a Resample node can do that, or you can process each curve separately with u = float(@ptnum) / float(@numpt - 1), assuming the point order is sorted the same way as on the high-res curve). That puts U in the range 0 to 1. Later, wherever you need some attribute from the low-res curve on the high-res one (assuming the curve count is the same), you can use a Point Wrangle: connect the high-res curve into the first input and the low-res curve into the second, then:

        vector tempUV = set(@u, 0, 0);
        @attribute = primuv(1, "SomeAttributeFromLowRes", @primnum, tempUV);

    This way you can fetch any attribute from the low-res curve onto the high-res curve (and vice versa if you swap the wrangle inputs) according to the U coordinate, which should exist on both geometries. So if you write:

        @P = primuv(1, "P", @primnum, tempUV);

    the high-res curve is snapped onto the low-res curve.
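    As a sketch, the per-curve U generation described above can be a single Primitive Wrangle (assuming each primitive is one curve and the point order follows the curve):

        // primitive wrangle: write a 0..1 u coordinate onto each curve's points
        int pts[] = primpoints(0, @primnum);
        int n = len(pts);
        foreach (int i; int pt; pts)
            setpointattrib(0, "u", pt, n > 1 ? float(i) / float(n - 1) : 0.0, "set");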
  7. flip reseed group new particles

    For a post-simulation task you can use @id at the current frame to distinguish new (reseeded or emitted) particles by simply grouping all ids greater than the maximum id of the previous frame; see the sketch below. Generally the same applies for detection during the sim, but you have to find the exact place inside the solver where that comparison is possible, and if your solver runs over substeps you have to decide whether those should be taken into account. Also, if you turn on the @age attribute, just-emitted particles will have zero age; I'm not sure whether that applies to reseeded particles too, or whether they inherit the age of the particles they are reborn from.
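    A minimal post-sim sketch, assuming a Time Shift SOP set to $F-1 wired into input 1 and the previous frame's maximum id promoted beforehand to a detail attribute maxid (Attribute Promote, mode Maximum):

        // point wrangle, input 0 = current frame, input 1 = previous frame
        int prevmax = detail(1, "maxid", 0);
        if (i@id > prevmax)
            i@group_new = 1;   // particles emitted or reseeded this frame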
  8. Wheel speed (rpm)

    Also, you can use built-in mechanisms for simple integration. Your example can be solved by an area integral (an integral of first order) over the RPM (rotations per minute) curve. Here is an example using CHOPs for the integration. The example is without the scaling factor 2*r*PI and without the minutes-to-frames conversion, but it is fast enough and allows you to change the RPM curve during playback. Integration.hip
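    For reference, the missing factors combine like this: RPM/60 is rotations per second, and each rotation covers one wheel circumference 2*PI*r, so the distance travelled up to time t (in seconds) is

        distance(t) = 2 * PI * r * integral(0..t) RPM(s) / 60 ds

    which is exactly the area under the RPM curve with those two scalings applied.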
  9. ocean displacement with deformed geometry

    In that case it's much easier: make sure your deformed grid has proper UVs and, instead of exporting bgeo, export only the 3D displacement into a texture (the Ocean Spectrum node can do that), then apply that texture in a displacement shader.
  10. Wheel speed (rpm)

    Yes, you can skip the solver (the easiest method for any kind of integration) and do your manual integration inside a loop, using chf("../rpm", NNN) to fetch the value of the rpm curve at any given frame NNN. This way your loop has to integrate from scratch on every frame (not very efficient), but it gets the job done.
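    A minimal sketch of that loop as a Detail Wrangle, assuming an rpm channel on the parent node (the VEX ch() variant that takes a second argument expects a time in seconds, hence the frame-to-seconds conversion):

        // detail wrangle: re-integrate the rpm channel from frame 1 every cook
        float angle = 0.0;
        for (int f = 1; f <= int(@Frame); f++)
        {
            float t   = (f - 1) * @TimeInc;          // frame number -> seconds
            float rpm = ch("../rpm", t);             // rpm curve sampled at t
            angle += rpm / 60.0 * @TimeInc * 2 * PI; // rotations -> radians
        }
        f@angle = angle;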
  11. Copy Points With Rotation Along Curve

    I modified your scene. Is this what you need? epe_copyWithRotationAlongCurve_modified.hipnc
  12. Pyro Viewport Lighting Problem

    Put a ROP Output Driver node inside your network1 and connect the OUTPUT node into it. (On some Houdini versions it doesn't work, and on some it works sporadically; in 16.0.731 it looks like a bug.) If it doesn't work in your version, leave that node connected, step outside network1, then go back inside again. That should do the trick.
  13. P world in mantra

    @protozoan: you are right. @marty: CryptoMatte uses ObjID or MaterialID to generate the different layers, but in this specific case both butterflies are generated at geometry level, so there is only one object, and both butterflies use the same SHOP material. You could separate them into two objects or two materials, but then there is a new problem which CryptoMatte cannot handle. (Well, it can, but the solution you would have to apply for CryptoMatte to work properly is based on the regular solution which, applied to the P pass, solves the problem without CryptoMatte.)

    The problem is hidden in the fact that those butterflies are not "shaped" by geometry but by the alpha channel of an image projected onto the plane(s). So in the case of a P pass (the same applies to N, Pz, Pworld, etc.) the pixels around the alpha edges really exist in the scene; only their opacity differs. In general, you can deal with that in two different ways. The first, and probably the most common, is to use the already prepared presets for pixel filtering, as protozoan suggests; the second is to handle those specifics in the shader yourself. The second approach gives you endless possibilities, and you are not limited to filtering only.

    This is the modified scene, which works using closest-surface filtering: p_world_modified.hip

    And this is a custom shader solution which solves your problem in the shader: it simply does manual in-shader pixel compositing for all semi-transparent pixels, while for fully transparent or fully opaque pixels it works like your basic shader. p_world_modified2.hip

    cheers
  14. ocean displacement with deformed geometry

    Several ways are possible.

    1. You can apply the ocean to a grid first and then deform the complete ocean.

    2. The default asset assumes the x and z coordinates lie in the XZ plane, and the FFT evaluation of the Phillips spectrum produces displacement in the Y coordinate only (if the chop parameter is zero). So if you try, for example, to evaluate the ocean on a grid in the XY plane, it will not work well. You can modify the original asset to apply the displacement along the normal instead of the Y axis.

    3. Or, step by step (a wrangle sketch follows below):
       - remove the ocean from your deformed grid
       - make a reference grid with the same number of points as your deformed grid
       - create an @oldpos vector attribute and store @P in it
       - apply the ocean to that grid
       - create a point vector attribute v@displ = @P - @oldpos;
       - create a point float attribute f@amount = length(v@displ);
       - compute the rotation matrix which orients the vector {0,1,0} to the vector @displ
       - because both grids match in topology, you can access those attributes from your deformed grid simply by referencing them with @ptnum; apply the matrix to the normals of your deformed grid and scale them by the @amount attribute
       - now just displace the points of your deformed grid along those normals
       - fetch @Cd (and any other attributes) from the reference grid

    cheers
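    A minimal sketch of the last few steps as a single Point Wrangle on the deformed grid, assuming the reference ocean grid (with displ and amount already stored) is wired into input 1, both grids match point for point, and the deformed grid has normals:

        // point wrangle on the deformed grid; input 1 = reference ocean grid
        vector displ  = point(1, "displ", @ptnum);   // ocean displacement vector
        float  amount = point(1, "amount", @ptnum);  // its length
        if (amount > 0)
        {
            // rotation taking +Y to the displacement direction
            matrix3 rot = dihedral({0, 1, 0}, normalize(displ));
            vector  n   = normalize(v@N) * rot;      // rotate this grid's normal
            @P += n * amount;                        // displace along it
        }
        @Cd = point(1, "Cd", @ptnum);                // carry color across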
  15. It's not a problem in Houdini. In the render settings you can override the camera resolution, and everything will be fine if the aspect ratio of that rendered resolution is the same, i.e. 16:9. But there is one very important thing you didn't specify: when you make your 4K camera and put that 1080 HD image as a background, should that image cover the whole camera area, or appear smaller in the middle of the background?

    If the first is the case, then just work with the HD camera and at render time override the camera resolution to 3840x2160; you will get output which proportionally matches the source HD image.

    If the second is the case, then you didn't ask the right question. A background in 3D software is not some placeholder with a specific resolution like in 2D software. It is rather an imaginary plane parented to the camera, onto which the camera projects an image. If you want the image to appear smaller (not covering the whole camera projection area) or larger, it is a question of projection, not of image resolution. So, for control of over-scanning and under-scanning you have, in the camera's View tab, the parameters Screen Window X/Y (the position) and Screen Window Size (the scale). For example, if you need your background image to be smaller by a factor of 2, enter the value 2 in both fields of Screen Window Size.