Everything posted by toadstorm

  1. Rendering time

    Positioning of your objects isn't very important or useful for diagnostics; it's the settings on your mantra ROP and your lights and materials (and possibly object containers) that are causing the problem. This is why people are asking for the HIP. Maybe you could clear any confidential information from the scene before uploading? Otherwise I suggest reading the manual: http://www.sidefx.com/docs/houdini/render/noise.html
  2. Here you go! The teapot is exporting a "matte" AOV, which is a solid red color. This is handled inside the teapot's shader. To get the glass sphere to refract this AOV, we include the code I posted above in an Inline VOP in the sphere's shader. It uses gather() to look for this same export from other objects in the scene, then writes back to that same "matte" AOV. refract_aov.hip
  3. Your `trace()` instinct is about right, but personally I'd use `gather()`. You need to provide `gather()` with a position P and a direction T. The Fresnel VOP can get you T based on N, I (the camera ray), and an IOR (called eta here). Then tell `gather()` to sample the value of the exported AOV attribute (multimatte RGB or whatever) you want to be refracting in this shader:

     vector temp = 0;
     vector hit;
     gather($P, $T, "bias", 0.01, "samples", 1, "name_of_aov_attribute", hit) {
         temp += hit;
     }
     $out = temp;

     Then you can bind that output to any export you like. I'm only doing one sample here, but if you needed smoother results you could increase that number, then divide `temp` by the number of samples after your gather loop is done. I used an Inline VOP for that code, but it'd probably work in a Snippet if you changed from $-style variables to snippet syntax.
  4. PySide set Color of cells in QTableView

    If you're using QTableView, you could always use a QStyledItemDelegate to display the first row however you want, but there's some extra boilerplate involved. You could also handle it in your model's data() method... the role you want is QtCore.Qt.BackgroundRole:

     if role == QtCore.Qt.BackgroundRole and index.column() == 0:
         return QtGui.QColor(255, 0, 0)
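If it helps, here's a dependency-free sketch of that data() logic; BACKGROUND_ROLE and the RGB tuple are stand-ins for the real Qt objects so it runs without PySide installed:

```python
# Dependency-free sketch of the data() logic above. BACKGROUND_ROLE stands in
# for QtCore.Qt.BackgroundRole (enum value 8 in Qt 5), and the RGB tuple
# stands in for QtGui.QColor.
BACKGROUND_ROLE = 8

def cell_background(column, role):
    """Mirror the snippet above: red background for every cell in column 0."""
    if role == BACKGROUND_ROLE and column == 0:
        return (255, 0, 0)
    return None  # other roles/columns fall through to the default behavior
```

In a real model subclass this logic would live inside `data(self, index, role)`, with `index.column()` supplying the column.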
  5. Updated Method for Aiming Normals

    Yeah, a wrangle is usually a good choice. The method is almost exactly the same. Here's a screengrab of the network and the code you'd use:

     vector p2 = point(1, "P", @ptnum);
     @N = normalize(p2 - @P);
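The same math, sketched in plain Python for clarity (the function name is illustrative, not Houdini API): the normal is just the normalized vector from each point to the target.

```python
import math

def aim_normal(p, target):
    """Sketch of the wrangle above: N = normalize(target - p),
    aiming a point's normal at a target position."""
    d = [t - c for t, c in zip(target, p)]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]
```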
  6. He did mention he didn't want to blast all the other primitives, though that would be my preferred workflow in this case.
  7. You can create a @density primitive attribute on your box that will guide the scatter density, if you set the Scatter SOP to use density as your Density Attribute.
  8. PBR and raytracing are more or less the same thing... if you crack open any of the default shaders, you're going to see the Compute Lighting VOP in there, which is running pbrlighting.vfl internally. The main difference you'd notice as a user is that if you're trying to run PBR-specific shading nodes with the raytrace engine, it'll likely crap out, and if you're trying to use some more old-school tricks in PBR, you might get unexpected results. Most of the time I use the raytrace engine and make sure my materials are all using Compute Lighting at some point, and this seems to work well for most scenes.

     Micropolygon is an older approach that's really great in a few specific situations, but it has limitations. It's best used with volumes, especially with displaced volumes, since it can crank through them very very fast. IIRC you have to use deep shadow maps on your lights, however, and if your shader requires raymarching (i.e. multiplying density against volumetric textures) you're not really going to see any speed gains. It doesn't seem to work properly in IPR mode, so if you want to see the actual speed gains you'd expect on volumes, render to MPlay. If you're doing volume-heavy work and you don't need fancy lighting, it's worth a shot to see if you can make it work, but it's not as easy or predictable as raytrace/PBR. 90% of the time you'll be using raytrace/PBR.
  9. If you haven't already watched Entagma's video on parallel transport, it's worth watching. The technique allows you to generate a stable reference frame along a curve, without the flipping you'd often get from a static up vector. http://www.entagma.com/td-fundamentals-parallel-transport/
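The core idea from that video can be sketched in a few lines of Python: carry a reference vector along the curve by rotating it with the minimal rotation between successive tangents, so it never snaps the way a fixed up vector can. This is an illustrative stand-in for Entagma's wrangle-based version, not their exact code:

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    l = math.sqrt(dot(v, v))
    return [c / l for c in v]

def rotate(v, axis, angle):
    """Rodrigues' rotation of v around a normalized axis."""
    c, s = math.cos(angle), math.sin(angle)
    kv = cross(axis, v)
    kdv = dot(axis, v)
    return [v[i] * c + kv[i] * s + axis[i] * kdv * (1 - c) for i in range(3)]

def parallel_transport(points, up):
    """Transport an initial up vector along a polyline: at each step, rotate
    the previous frame by the rotation between the previous and current
    tangents. Returns one frame vector per point."""
    tangents = [normalize([b - a for a, b in zip(points[i], points[i + 1])])
                for i in range(len(points) - 1)]
    tangents.append(tangents[-1])  # reuse the last tangent for the end point
    frames = [up]
    for i in range(1, len(points)):
        axis = cross(tangents[i - 1], tangents[i])
        l = math.sqrt(dot(axis, axis))
        if l < 1e-9:
            frames.append(frames[-1])  # tangents parallel: no rotation needed
        else:
            angle = math.acos(max(-1.0, min(1.0, dot(tangents[i - 1], tangents[i]))))
            frames.append(rotate(frames[-1], [a / l for a in axis], angle))
    return frames
```

On a straight curve the frame never changes; around a bend it rotates exactly with the tangent, which is what kills the flipping.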
  10. Smoke falling down

    Particle advection will help for the look, if you're trying to get super wispy smoke. I'd try introducing turbulence, but use the collision field as a mask for applying it. That should get you that nice laminar flow at the top, and noisy turbulent flow underneath.
  11. Interior Fluid Shader

    You're using the wrong node... you want the Transform VOP, converting from space: current to space: world
  12. [SOLVED]Modify Rotation Of Aligned Copies?

    I inserted two nodes here. The first is to create a stable p@orient attribute on your plane, before it starts animating. The scattered points inherit this orientation. It's just based on the maketransform() function, using the existing @N and an arbitrary up vector (+X in this case). The second is to use the quaternion() function to create a second quaternion from an axis and angle. The axis you want to rotate around is @N, and the angle is any random angle between -PI and PI radians, which corresponds to -180 and 180 degrees. The qmultiply() function then adds this rotation to the existing orientation. The great thing about quaternions here is that you can take any axis and angle, make a quaternion out of it, then qmultiply that against your existing rotation, and you'll get a combined rotation back. ap_arrows_align_along_curved_surface_fixed.hiplc
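To make the quaternion step concrete, here's a pure-Python sketch of quaternion() and qmultiply() using the Hamilton product with (x, y, z, w) ordering, which is how VEX stores quaternions. The function names mirror the VEX ones, but this is an illustration, not Houdini's implementation:

```python
import math

def quaternion(axis, angle):
    """Build a unit quaternion (x, y, z, w) from a normalized axis
    and an angle in radians."""
    s = math.sin(angle / 2.0)
    return (axis[0] * s, axis[1] * s, axis[2] * s, math.cos(angle / 2.0))

def qmultiply(q1, q2):
    """Combine two rotations via the Hamilton product; the result is
    the rotation q2 followed by q1."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    )
```

Two 90-degree turns about the same axis compose into a single 180-degree turn, which is the "add this rotation to the existing orientation" behavior described above.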
  13. Interior Fluid Shader

    You can grab attributes from points in a shader, no problem. This is what point cloud functions (pcopen, pcfilter) were intended for. If you cache out your points to bgeo, you can take your shader global position P, convert it to world space using a Transform VOP, then use that as the position for a point cloud open VOP, with your points as the input geometry. This way you can grab and filter any point attribute you like. With volumes you can use the Volume Sample and Volume Sample Vector VOPs to grab voxel data from any cached volume primitive.
  14. If you want to create groups from an i@class attribute, maybe just use the Partition SOP?
  15. Export pyro to polygon alembic with vertex color

    Vertex colors will work in Maya from Houdini if:
     - the attribute is vector3 RGB (it's possible this could work with vector4, haven't tried it)
     - it's a vertex attribute
     - it has the attribute type "color" (use the VEX command setattribtypeinfo() to set the type to "color" if it isn't already)
     Here's a sample file; it displays with vertex colors in Maya as expected: meshed_pyro_vtx_color.hip
  16. Use a Geometry Wrangle, or a POP Wrangle if you're dealing with particles, and use an expression like this:

     if (length(v@v) < ch("min_speed")) {
         removepoint(0, @ptnum);
     }

     Or use a SOP Solver in your DOP network to delete the points using any of the SOP-based methods listed above.
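As a plain-Python illustration of what that wrangle does (the point dicts and function name are stand-ins, not Houdini API): every point whose velocity magnitude falls below the threshold gets dropped.

```python
import math

def remove_slow_points(points, min_speed):
    """Sketch of the wrangle logic: keep only points whose velocity
    magnitude is at least min_speed. Each point is a dict with a 'v'
    velocity tuple, standing in for the point's attributes."""
    return [p for p in points
            if math.sqrt(sum(c * c for c in p["v"])) >= min_speed]
```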
  17. voronoi fracture, how to avoid straight segments

    Sorry, misread you, I thought you were saying concave made no difference in finals! Sounds like we're saying the same thing then.
  18. voronoi fracture, how to avoid straight segments

    I'm assuming you meant concave shapes? Either way, from Houdini's Bullet Solver docs: It's possible to get satisfactory results with concave collision objects, but in practice you're better off with convex decomposition.
  19. [RFE]Carve In COPs?

    Animate another closed RotoShape as a mask? It's roto after all... I suppose if you really needed Carve you could try using a Trace SOP to grab your RotoShape and process that into a curve for carving, but getting the points filtered, merged and ordered correctly for that sounds like a real pain. You're probably better off just doing it the old-fashioned way by animating a mask.
  20. voronoi fracture, how to avoid straight segments

    I don't know about faster, but it's definitely much more stable. Concave shouldn't even be an option IMO, it basically never works. Bullet really hates anything that isn't a convex hull.
  21. voronoi fracture, how to avoid straight segments

    Keep in mind that if you plan on simulating these pieces via Bullet / Packed RBD, you will probably still have to voronoi shatter each individual booleaned piece and then glue the pieces together in order to create reasonably accurate convex hulls.
  22. Vex Volume Procedural Grid Rez

    This is poorly documented (or possibly not documented at all), but the division size for a CVEX Volume Procedural is 1/100 of your maximum bounding box dimension. Depending on your scene, when you're dealing with rendering these volumes you often have to increase Volume Quality above 1 to get detailed results.
  23. Render viewport polygons only

    You can create a scalar volume based on a camera frustum using a Volume SOP, then use a Group SOP to group points using that volume. Then either blast those points, or promote the point group to a primitive group and blast those. camera_frustrum_volume.hip
  24. missing reflections from fire

    There isn't supposed to be density in a candle simulation, unless you want visible smoke. The way the sim is set up by the shelf tool, it's not meant to emit any smoke. I think the reflection issue is just one of angles and normals on a perfectly flat plane. If you use a Bend SOP to curve your grid into more of a curved cyc shape, the reflections work fine. debug_refl_v002.hipnc
  25. I added some notes in the VEX code to explain what's going on, but I can elaborate a little further. The first two examples are doing exactly the same thing, one in VEX and one in VOPs: generating a @density attribute based on the Y position of each point on the grid, relative to the bounding box. The `relbbox` function returns a vector with components ranging between 0 and 1, based on where the point is relative to the bounding box of the object on each axis. The y-component of that vector is used as the lookup value for a ramp, using the `chramp` function. The ramp parameter is auto-generated by the point wrangle if you click the little "+" button next to the VEX code. You can then use this ramp to control the value of the @density attribute, which exists on the points of the grid. If you want to see the results visually, you can use a Visualizer SOP to view density as color. The Scatter SOP by default will look for a @density attribute to determine where to prioritize scattering points, but you could use other attributes if you like; density is just a convention.

     The third example is using the "attractor points" you were asking about. A number of points are scattered on the grid; these are the attractors. In the point wrangle, which runs over all points of the grid simultaneously, the nearest attractor point to the current iteration's point position is found using `nearpt`, and then the `point` function is used to get the position of that attractor. The distance between the current grid point and the nearest attractor point is computed and fit to a 0-1 range based on the "max_dist" channel, which is user-defined. The resulting 0-1 value is used with a ramp attribute lookup, and the result is bound to @density, same as the other examples. You could change the falloff ramp and the max distance to adjust exactly how the density is mapped to the grid points.
If you wanted to solve this without VEX, you could try scattering a few points onto a grid, giving them all a @density value of 1 or some other positive value, then using an Attribute Transfer to transfer this attribute back to the grid points. Then use a Scatter SOP as before.
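For anyone who'd rather see the attractor logic outside of VEX, here's a pure-Python sketch of the same idea: a VEX-style fit() plus a nearest-attractor distance lookup. The ramp lookup is omitted and the function names are my own, not Houdini's:

```python
import math

def fit(x, omin, omax, nmin, nmax):
    """VEX-style fit(): remap x from [omin, omax] to [nmin, nmax], clamped."""
    t = (x - omin) / (omax - omin)
    t = max(0.0, min(1.0, t))
    return nmin + t * (nmax - nmin)

def density_from_attractors(grid_points, attractors, max_dist):
    """For each grid point, find the distance to the nearest attractor and
    map it to a density: 1.0 at the attractor, falling off to 0.0 at
    max_dist. A falloff ramp would be applied to this value in Houdini."""
    densities = []
    for p in grid_points:
        d = min(math.dist(p, a) for a in attractors)
        densities.append(fit(d, 0.0, max_dist, 1.0, 0.0))
    return densities
```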