Everything posted by djiki

  1. Yes. Houdini, or to be more precise Mantra, allows you to do any kind of projection you can imagine using a custom lens shader. I wouldn't even call it a projection, because it is much more: you can create your own rays (their origins and directions) for each rendered pixel and do manual ray tracing, not only in a lens shader but in any kind of shader. Of course, you first have to figure out what the "Robinson" table data represents, and then we can help.
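    As a minimal CVEX lens shader sketch of what I mean: the parameter names follow the documented lens shader interface (other documented inputs like Time, dofx, dofy and valid are omitted for brevity), and the fisheye-style mapping here is only an illustrative assumption, not a Robinson projection.

      #include <math.h>

      // CVEX lens shader sketch: Mantra calls this once per sample with screen
      // coordinates x, y (roughly -1..1) and expects a ray origin P and a ray
      // direction I back, both in camera space.
      cvex fisheye_lens(float x = 0;
                        float y = 0;
                        float aspect = 1;
                        export vector P = 0;    // ray origin
                        export vector I = 0)    // ray direction
      {
          float theta = x * M_PI;               // horizontal angle from screen x
          float phi   = y * M_PI * 0.5;         // vertical angle from screen y
          P = set(0, 0, 0);
          I = set(sin(theta) * cos(phi), sin(phi), cos(theta) * cos(phi));
      }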
  2. Connect your final SOP node to all the Material nodes you have, then use a Switch node and connect all the outputs of those Material nodes into its inputs. In the Switch node's expression you can use the pointinstance() function to tell the Switch which input (material) to use, as in the sketch below.
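    For example, the Select Input expression on the Switch node could look like this (the node path and the mat_id attribute are hypothetical names); it reads an integer attribute from the template points for the point currently being instanced:

      point("/obj/instance_points/OUT_points", pointinstance(), "mat_id", 0)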
  3. Sometimes the Undo History list can help you distinguish a crucial operation from the bunch of totally non-descriptive "parameter changed" and other "selection changed" events.
  4. Consider using the EXR file format and writing your custom channels not into the RGBA plane but as dedicated channels like Metallic, AO, Emission, Smoothness etc. EXR can handle any number of custom channels, and every piece of software that can open EXR lets you pick any custom channel and operate with it as you would with any grayscale image. If you really want PNG or TIF, you have to render the files without premultiplication (premultiplication multiplies each of R, G and B by Alpha, which is not what you want) and store all four channels in RGBA. In general, the color plane is not a good way to export custom data (masks are ok), because a gamma curve is applied to its data, negative values are clamped, etc. A shader-side sketch is below.
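    As a rough material-side sketch (the shader and channel names are placeholders, not your actual setup), exported shader parameters can be bound to extra image planes on the Mantra ROP so that each one becomes its own EXR channel:

      // Minimal VEX surface shader with exported AOV-style parameters.
      // Each export can be picked up as an extra image plane in the rendered EXR.
      surface custom_aovs(export float metallic   = 0;
                          export float ao         = 1;
                          export vector emission  = {0, 0, 0})
      {
          // in a real shader these would be computed values, not constants
          metallic = 0.5;
          ao       = 1.0;
          emission = {0, 0, 0};
          Cf = {0.5, 0.5, 0.5};   // some base color so the shader still renders
      }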
  5. For what is "cl.cfg"

    No. That option is for inter-exchange operations between OpenGL and DirectX. To force an NVIDIA GPU to run OpenCL you have to set the following environment variables:
      HOUDINI_OCL_VENDOR = NVIDIA Corporation
      HOUDINI_OCL_DEVICENUMBER = 0
  6. Matching Curves

    Yes, the primuv function is for that purpose. I didn't open your scene, but the usual technique would be like this. Suppose you have low-res curves. First generate a U coordinate attribute on them (a Resample node can do that, or you can process each curve separately with u = @ptnum / float(@numpt - 1), assuming the point order is sorted the same way as on the high-res curves). That puts U in the range 0 to 1. Later, wherever you need some attribute from a low-res curve on the high-res one (assuming the curve count is the same), you can use a Point Wrangle node, connect the high-res curves into the first input and the low-res curves into the second, and write: vector tempUV = set(@u, 0, 0); @attribute = primuv(1, "SomeAttributeFromLowRes", @primnum, tempUV); This way you can fetch any attribute from the low-res curves onto the high-res curves (and vice versa if you exchange the wrangle inputs) according to the U coordinate, which should exist on both geometries. So if you write @P = primuv(1, "P", @primnum, tempUV); it will snap your high-res curve onto the low-res curve. A consolidated wrangle sketch is below.
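    Putting it together, a Point Wrangle sketch (run over points, with the inputs and attribute names as assumed above) would be:

      // Input 0: high-res curves, input 1: low-res curves.
      // Both inputs are expected to carry a float attribute u in the 0..1 range.
      // "SomeAttributeFromLowRes" is a placeholder for whatever you want to fetch.
      vector tempUV = set(@u, 0, 0);
      @attribute = primuv(1, "SomeAttributeFromLowRes", @primnum, tempUV);
      // or, to snap the high-res curve onto the low-res one:
      // @P = primuv(1, "P", @primnum, tempUV);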
  7. flip reseed group new particles

    For a post-simulation task you can use @id at the current frame to distinguish new (reseeded or emitted) particles, by simply grouping all ids greater than the maximum id of the previous frame. Generally the same applies to detection during the sim, but you have to take care of the exact place inside the solver where that comparison is possible, and if your solver runs over substeps you have to decide whether they should be taken into account. Also, if you turn on the @age attribute, just-emitted particles will have a zero age; I'm not sure whether that applies to reseeded particles too or whether they inherit the age of the particles they were reborn from.
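    A post-sim Point Wrangle sketch of that idea (all names here are assumptions): input 0 is the current frame, input 1 is the previous frame (for example via a Time Shift set to $F-1) on which an Attribute Promote has already stored the maximum id into a detail attribute called maxid:

      // Group every particle whose id is higher than any id seen on the previous frame.
      int maxid_prev = detail(1, "maxid");
      if (i@id > maxid_prev)
          i@group_newborn = 1;   // new particles: emitted or reseeded this frame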
  8. Wheel speed (rpm)

    Also, you can use built-in mechanisms for simple integration. Your example can be solved by an area integral (integral of first order) over the RPM (rotations per minute) curve. Here is an example using CHOPs for the integration. The example is without the scaling factor 2*r*PI and without converting minutes to frames. It is fast enough and allows you to change the RPM curve during playback. Integration.hip
  9. ocean displacement with deformed geometry

    In that case it's much easier: make sure your deformed grid has proper UVs and, instead of exporting bgeo, export the 3D displacement only as a texture (the Ocean Spectrum node can do that), then apply that texture in a displacement shader.
  10. Wheel speed (rpm)

    Yes, you can skip the solver (the easiest method for any kind of integration) and do your manual integration inside a loop, using ch("../rpm", NNN) to fetch the value of the rpm curve at any given frame NNN. This way your loop has to integrate from scratch at every frame (not very efficient) but it can finish the job. A wrangle sketch is below.
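    A Detail Wrangle sketch of that brute-force loop (the ../rpm spare parameter and the output attribute name are assumptions; note that the two-argument ch() in VEX takes a time in seconds rather than a frame number):

      // Re-integrate the RPM curve from frame 1 up to the current frame, every frame.
      float angle    = 0;
      int   curframe = int(@Frame);
      for (int f = 1; f <= curframe; f++)
      {
          float t   = (f - 1) * @TimeInc;          // time in seconds at frame f
          float rpm = ch("../rpm", t);             // RPM sampled at that time
          angle += 360.0 * rpm / 60.0 * @TimeInc;  // degrees turned during that frame
      }
      f@wheel_angle = angle;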
  11. Copy Points With Rotation Along Curve

    I modified your scene. Is this what you need? epe_copyWithRotationAlongCurve_modified.hipnc
  12. Pyro Viewport Lighting Problem

    Put a ROP_OUTPUT_DRIVER node in your network1 and connect the OUTPUT node into it. (On some Houdini versions it doesn't work, and on some it works only sporadically; in 16.0.731 it looks like a bug.) If it doesn't work on your version, leave that node connected, go outside network1 and then go back inside again. That should do the trick.
  13. P world in mantra

    @protozoan: you are right. @marty: CryptoMatte uses ObjID or MaterialID to generate the different layers, but in this specific case both butterflies are generated at geometry level, so there is only one object and both butterflies use the same SHOP material. Ok, you can separate them into two objects, or two materials, but then there is a new problem which CryptoMatte cannot handle. (Ok, it can, but the solution you have to apply for CryptoMatte to work properly is based on the regular solution which, applied to the P pass, solves the problem without CryptoMatte anyway.) The problem lies in the fact that those butterflies are not "shaped" by geometry but by the alpha channel of an image projected onto the plane(s). So in the case of a P pass (the same applies to N, Pz, Pworld etc.) the pixels around the alpha edges really exist in the scene; only their opacity differs. In general, you can deal with that in two different ways. The first, and probably most common, solution is to use the already prepared presets for pixel filtering, as protozoan suggests; the second is to handle those specific things in the shader yourself. The second approach gives you endless possibilities and you are not limited only to filtering. This is the modified scene which works using closest-surface filtering: p_world_modified.hip And this is a custom shader solution which solves your problem in the shader. It simply does manual in-shader pixel compositing for all semi-transparent pixels, but for fully transparent or fully opaque pixels it works like your basic shader. p_world_modified2.hip cheers
  14. ocean displacement with deformed geometry

    Several ways are possible.
    1. You can apply the ocean to a grid first and then deform the complete ocean.
    2. The default asset assumes the x and z coordinates lie in the XZ plane, and the result of the FFT evaluates the Phillips spectrum as a displacement in the Y coordinate only (if the chop parameter is zero). So, for example, if you try to evaluate the ocean on a grid in the XY plane it will not work well. You can modify the original asset to apply the displacement along the normal instead of the Y axis.
    3. - remove the ocean from your deformed grid
       - make a reference grid with the same number of points as your deformed grid
       - create an @oldpos vector attribute and store @P in it
       - apply the ocean on that grid
       - create a point vector attribute v@displ = @P - @oldpos;
       - create a point float attribute f@amount = length(@displ);
       - compute the rotation matrix which orients the vector {0,1,0} onto the vector @displ
       - because both grids match in topology, you can access that matrix attribute from your deformed grid simply by referencing it by @ptnum; apply the matrix to the normals of your deformed grid and scale them by the @amount attribute
       - now just displace the points of your deformed grid along those normals (see the wrangle sketch below)
       - fetch @Cd (and other attributes, if any) from the reference grid
    cheers
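    A Point Wrangle sketch for the displacement step (attribute names are the ones assumed above; input 0 is your deformed grid, input 1 is the reference grid after the ocean, with v@displ and f@amount already stored and the topologies matching point for point):

      // Fetch the ocean displacement computed on the flat reference grid.
      vector displ  = point(1, "displ",  @ptnum);
      float  amount = point(1, "amount", @ptnum);
      // Rotation taking the +Y axis onto the displacement direction.
      matrix3 rot = dihedral({0, 1, 0}, normalize(displ));
      // Reorient the deformed grid's normal and displace along it.
      vector dir = normalize(@N) * rot;
      @P += dir * amount;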
  15. It's not a problem in Houdini. In the render settings you can override the camera resolution, and everything will be fine if the aspect of that rendered resolution is the same, i.e. 16:9. But there is one very important thing you didn't specify: when you make your 4K camera and put that 1080 HD image as a background, should that image cover the whole camera area or appear smaller in the middle of the background? If the first is the case, then you just work with the HD camera and at render time override the camera resolution to 3840x2160, and you will have output which proportionally matches the source HD image. If the second is the case, then you didn't ask the right question. A background in 3D software is not a placeholder with a specific resolution like in 2D software; it is rather an imaginary plane parented to the camera, onto which the camera projects an image. If you want the image to appear smaller (not covering the whole camera projection area) or larger, it's a question of projection, not of image resolution. So, for controlling over-scanning and under-scanning you have the parameters (in the camera's View tab) Screen Window X/Y (the position) and Screen Window Size (the scale). For example, if you need your background image to be smaller by a factor of 2, enter the value 2 in both fields of Screen Window Size.
  16. What is the purpose of the resolution change? If you want to save memory you can change the quality of the background image (lock the camera in the viewport, press d in the viewport, go to the background image in the camera tab and change the slider to reduce the quality), or you can increase the image cache size with Alt+Shift+M and enter, let's say, 8000. If there is some other purpose for decreasing the resolution, please specify.
  17. Differential curve growth

    Since they are already cut to pieces don't forget a spoon
  18. animation motion path

    Basically, you can do that only if a path object exists in your scene and some object's motion is constrained to follow that path. Simple keyed translation values cannot be shown in the viewport as an editable path. In a more advanced approach, it would be possible to procedurally generate a 3D curve in the scene from the keys you entered in the transform tab. Such a curve would then be editable in the viewport like any other curve, and after editing is completed it would be possible to reapply those modifications to the source channels as correction offsets... but I'm sure you don't need that.
  19. Wheel speed (rpm)

    Huh, I'm not sure I understand you very well, but there are at least three conceptually different approaches to that. 1. The fastest, purely computational approach: using some expression like WheelAngle = $T * ch("../YourKeyedCurve") * $SomeCoefficient, your curve represents angles which are simply scaled by the time factor and some constant. It is not accurate and cannot handle accurate speed changes over time, but in some simple cases it can finish the job. 2. If your driver curve represents RPM, you need a solver, because the angle at some point in time depends on the value of RPM at that time and on the current angle from the previous time. In the solver you have to put @WheelAngle += 360 * ch("../rpm") / 60 / $FPS, so that you update the wheel angle at every time step with the appropriate RPM value from that time step (see the solver sketch below). 3. If your Null (which is the parent of the wheel) moves through space, you can plot the curve of that motion by making a point trail of that point's (null's) motion and connecting the trail into a curve. That way you generate the path. Assuming your wheel should roll on that path, you can calculate the length of the path at each frame; that length represents the distance the wheel has already traveled. If you know the radius of the wheel, its circumference is 2*r*PI, and dividing the current path length by that circumference gives you the number of rotations, hence the current rotation angle.
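    In VEX terms, the solver from approach 2 boils down to a Point Wrangle inside a Solver SOP like this (the ../rpm spare parameter and the attribute name are assumptions; @TimeInc replaces the 1/$FPS factor):

      // Accumulate the wheel angle step by step instead of recomputing it from scratch.
      float rpm = ch("../rpm");                       // RPM at the current time
      f@WheelAngle += 360.0 * rpm / 60.0 * @TimeInc;  // degrees turned this step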
  20. Differential curve growth

    Nice thread with an interesting topic. My two cents: a solver which allows the structure to separate grown parts from the initial structure when some condition is met, and then continues to process those parts the same way it processes the starting structure. In the attached example I tried to keep things as simple as possible. The whole magic is inside the solver, inside a for-each loop, where the shape is tested against the conditions: if the physical distance between two points is small enough and there are more than K points on the shape between them, separation occurs (see the sketch of the test below). This example works in 2D, but the concept can be extended to 3D space: instead of searching for two points which are close enough, you search for three points which form a triangle whose area is small enough but larger than zero. gs.mov gs2.mov Both examples are in the scene file DiffGrow_separation.hip cheers dgs.mp4
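    A rough Point Wrangle sketch of that separation test (everything here is an assumption about the setup: a closed curve in input 0, and hypothetical maxdist / minsteps parameters):

      // Look for points that are close in space but far apart along the curve.
      float maxdist  = chf("maxdist");    // how close two points must be in space
      int   minsteps = chi("minsteps");   // how many curve points must lie between them
      int near[] = nearpoints(0, @P, maxdist);
      foreach (int pt; near)
      {
          int gap = abs(pt - @ptnum);            // separation in point order
          gap = min(gap, npoints(0) - gap);      // account for the closed loop
          if (gap > minsteps)
              i@group_split = 1;                 // mark the point as a separation candidate
      }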
  21. motion vector pass for Nuke

    Yes, you are right, the updated documentation states that getblurP returns the position in camera space and not in NDC space, so a conversion is required for proper motion blur. Differences between camera space and NDC: camera space is a regular 3D space. Just imagine a new coordinate system in your scene positioned with its origin at the camera position, oriented so that the positive part of the Z axis looks in the direction the camera looks and the Y axis is aligned with the camera's up vector. If you express the point coordinates of all objects in that coordinate system, those readings are the coordinates in camera space. The NDC coordinate system involves a perspective transformation: a "perspective correction" applied to the x and y coordinates from camera space according to a point's distance from the camera. For example, take two identical objects moving in parallel along the X axis with a speed of 1 unit per frame, one closer to the camera and one far away from it. Rendering the motion vectors of those objects in camera space would give vectors of 1 unit length, with values (1,0,0), for both objects. That is because camera space is like world space: you can translate a line of some length to any position in such a space and its length will always be the same. In NDC space that is not the case. A line closer to the camera, when projected onto the camera sensor, will have a larger length than an identical line positioned deeper in the scene, far away from the camera. In our case of objects in motion, even though both objects have the same speed of 1 unit/frame expressed in world or camera space, when converted to NDC space the object closer to the camera will have a larger motion vector while the farther one will have a smaller one. I modified your scene by duplicating the sphere with your animation and positioning it deeper in the scene. Try to render in camera space and in NDC space and look at the differences in the motion vectors in Nuke. test_MV_modified.hip Multiplying by the resolution in the shader, like in your original example, is not a wise choice; that way you kill the Z coordinate. Someone will ask why you need a Z coordinate for 2D motion blur, but some advanced algorithms use it, especially in situations where the trajectories of two moving objects overlap when seen from the camera position. Also, sometimes you want to distinguish pixels moving toward the camera from those moving away from it, and the Z coordinate (third component) of the motion vector can be used for that. Houdini camera space is defined so that the camera looks in the direction of the positive Z axis, meaning pixels with a positive value in that component are moving away from the camera and vice versa. Nuke's camera space, on the other hand, looks down the negative Z axis, which is why you need to exchange the X and Y components. But that shouldn't be done in the shader. Suppose you are working in a large company and your render output is used in different compositing packages: exchanging coordinates in the shader makes that render "Nuke specific", and for sure you don't want to render extra motion passes for the other compositing software. That's why you should leave the output as is and do all the "Nuke specific" things in Nuke. Setup in Nuke: the Amount parameter is basically your uniform scale for the motion blur; in this example it is oversized so you can see something. The exchange of the x and y coordinates and the multiplication by the resolution are applied in an Expression node. Cheers
  22. Modify Result Of Instance Geometry?

    As far as I know, only the viewport and the rendering process are capable of evaluating the Instance node.
  23. motion vector pass for Nuke

    You don't have to do a conversion from space to space for the getblurP node. Check the documentation about that node; it also includes an example of this.
  24. Torus Procedural

    The images you attached were probably generated by some mathematical software like Wolfram Mathematica or MatLab. If you have the exact math functions used to generate them, you can use them in Houdini too. For such a math approach (not really procedural in the sense of combining the full potential of Houdini), you can use the ISO Surface node. Basically, that node will accept any function of the X, Y, Z coordinates in implicit form. For example, say you want to define the surface of a sphere with radius 1. You are actually thinking of a function which will give you all points which are at exactly 1 unit from, let's say, the center of the scene. The function which covers that in 3D space would be sqrt(x^2+y^2+z^2) = 1. If x, y, z represent the coordinates of a point, then any point whose x, y, z satisfy the equation lies on the surface of that one-unit sphere. Squaring both sides of the unit-radius sphere equation gives you x^2+y^2+z^2 = 1. That is the explicit form of the function. If you move the right part to the left: x^2+y^2+z^2 - 1 = 0. Now that the right part is 0 it can be dropped (but think of it as still existing and equal to zero), which leaves you with the implicit form of the equation, x^2+y^2+z^2-1, and that example is the default expression value in the ISO Surface node: a unit sphere. The node samples 3D space in the ranges you set and generates a surface (an iso surface) through any point in that range whose coordinates ($X,$Y,$Z) satisfy the implicit equation you entered. The equation of a simple torus in implicit form would be (R - sqrt(X^2 + Z^2))^2 + Y^2 - r^2, where R and r are the large and small radii of the torus. ISO_torus.hip Without the proper formulas for the exact definition of your surfaces, everything else is just guessing. If that is good enough, you can try modifying the equation. Btw, any kind of function can be processed, even something like noise($X,$Y,$Z). Doing the proper math for the repetition of many radius levels involves some repetition function like modulus, or trigonometric sin or cos. Rearranging the arguments to solve for the small radius r gives r = sqrt((R - sqrt(X^2 + Z^2))^2 + Y^2); replacing r in the original function with that gives 0, because any point, no matter its coordinates, satisfies the equation. But if you quantize that expression like this, r = int(N * sqrt((R - sqrt(X^2 + Z^2))^2 + Y^2)) / N, only points whose radius matches a quantized level satisfy the equation. ISO_torus_repetition.hip As you can see in the example, you can also use a logical function to clamp the calculation to some segment: the expression length($X,0,$Z)<R restricts the calculation to the inside of a tube of radius R. That example gives your image 1. In images 2 and 3 you can see that a change along the Y axis bends the toruses, so you have to put that into the equation, etc. This is NOT a procedural approach, it is just pure math of surface representation for some equation, and since you have images from math software I suppose you also have the exact functions, so you can use them the same way in Houdini.
  25. Is there some special reason for doing the boolean in each iteration? If you only need holes in your sphere there is no need to use a FOR loop, since the Copy to Points node already does the iteration. It's just a matter of thinking about it differently: instead of booleaning with each box, you can first create the whole geometry for the boolean difference (stamping cubes all around) and then use a single Boolean node to subtract it from the sphere. Boolean.hip