Leaderboard


Popular Content

Showing most liked content on 03/01/2015 in all areas

  1. 2 points
    Hi folks, I'm proud to announce the newest update of my digital asset, mbPlumage. Let me know what you think about it. For more info, please visit http://tools.haybyte.com/ - the asset is available on Orbolt. Cheers, Michael
  2. 1 point
    One more architectural visualisation work, entirely done with Houdini. The trees and the picker are purchased models; everything else is done with Houdini. Some of the grass is Houdini, some was painted later. There is a lot of post-work using the Affinity Photo beta. This is one of the WIP images.
  3. 1 point
    For those who missed it
  4. 1 point
    Ok! First - the most important part of the method. Check this diagram and the attached file - they are the core of the algorithm I came up with.

    1. Let's say we have a simple 2D point cloud. What we want is to add some points between the existing ones.
    2. We can just scatter some random points (yellow). The tricky part is to isolate only the ones that lie between the original points and remove the rest.
    3. Now we focus on just one of the scattered points and check whether it is valid to stay. We open the point cloud with a certain radius (green border) and isolate only a small part of the original points.
    4. We then find the center of the isolated point cloud (blue dot) and create a vector from our point to that center (purple vector).
    5. Next we go through all points of the isolated cloud, create a vector from the yellow point to each of them (dark red), take the dot product between the [normalized] center vector (purple) and each of these vectors, and keep only the smallest result. Why the smallest - that's the trick. To determine whether our point is inside or outside the point cloud we only need the minimum value: if our point is outside the point cloud, all the neighbour vectors tend to point in the same direction as the center vector, so even the minimum dot product stays above zero. On the border it will be close to 0, and inside it drops below 0. In effect, we are isolating the dot product corresponding to the brightest red vector.
    6. In this case the minimum dot product is above 0, so we should delete our point. Then we move on to the next scattered point and do the same check.

    That's basically all you need. I know - probably not the most accurate solution, but still a good approximation. Check the attachment for a simpler example.

    In the original example this is done using the pointCloudDot function. First, to speed things up, I delete most of the original points and try to isolate only the boundary ones (as I assume they are closer to the gaps), and I try not to use points that are very close together (as we don't need more points in dense areas). Then I scatter some random points around them using a simple spherical distribution. Then I try to flatten them and keep them closer to the original sheets - this step is not essential, but it may produce more valid points instead of just relying on the original distribution. I'm using two different methods. The first one (projectToPcPlane) searches for the closest 3 points, creates a plane from them, and projects the scattered points onto these closest planes; in some cases this may produce very thin sheets (when colliding with the ground, for example). There is a parameter that controls the projection. The second one is just an approximation to the closest points of the original point cloud. Unfortunately this may produce more overlapping points, so I add a Fuse SOP after this step when I use it. The balance between these two projections can produce very different distributions, but I like the first one more, so when I did the tests the second one was almost always 0.

    Then there is THE MAIN CHECK! The same thing I did with the original points I do here again, in two passes with a smaller and a bigger radius - to ensure that there won't be any points left outside, or scattered alone deep inside some hole. I'm also checking some other criteria that I found can give better control.
    There may be some leftover checks that I'm not actually using - I think I forgot a point count check, but instead of removing it I just added +1 to make sure it doesn't do anything - I was just trying to see what works and what doesn't. Oh, and there are also some unused VEX functions - I made them for fun but didn't end up using them. So there it is. If you need to know anything else, just ask. Cheers
    EDIT: just fixed some mistakes...
    EDIT2: file attached pointCloudDotCheck.hiplc
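    For reference, here is a minimal VEX sketch of the inside/outside test from steps 3-6 above - not the pointCloudDot function from the attached file. It assumes a point wrangle running over the scattered (yellow) points, with the original point cloud wired into the second input; the channel names maxdist and maxpts are just illustrative.

        // Runs over the scattered candidate points (first input);
        // the original point cloud is expected on the second input.
        float maxdist = chf("maxdist");   // search radius (the green border)
        int   maxpts  = chi("maxpts");    // max neighbours to consider

        int pts[] = nearpoints(1, @P, maxdist, maxpts);
        if (len(pts) == 0)
        {
            removepoint(0, @ptnum);       // nothing nearby at all
        }
        else
        {
            // Center of the isolated neighbourhood (the blue dot).
            vector center = {0, 0, 0};
            foreach (int pt; pts)
                center += vector(point(1, "P", pt));
            center /= len(pts);

            // Vector from our point to that center (the purple vector).
            vector tocenter = normalize(center - @P);

            // Smallest dot product against the vectors to each neighbour (dark red).
            float mindot = 1.0;
            foreach (int pt; pts)
            {
                vector toneighbour = normalize(vector(point(1, "P", pt)) - @P);
                mindot = min(mindot, dot(tocenter, toneighbour));
            }

            // Outside the cloud the minimum stays above zero, so drop the point;
            // on the border it is near zero, and inside it goes negative.
            if (mindot > 0)
                removepoint(0, @ptnum);
        }

    Running this twice, first with a smaller and then with a bigger radius, mirrors the two-pass MAIN CHECK described above.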
  5. 1 point
    OpenGL natively supports two attribute classes - detail and point attributes. Primitive and vertex attributes are not natively supported and require extra shader support. Vertex attributes in particular require a geometry shader stage to assign the vertex attribute to each vertex of the triangle primitive. If you do File > New Operator Type, then SHOP: GLSL, you'll see a sample GLSL shader in the Code tab which shows how this is accomplished. The relevant code in the geometry shader is:

        in parms
        {
            vec4  pos;
            vec3  normal;
            vec4  color;
            vec2  texcoord0;
            float selected;
        } gsIn[];

        out wparms
        {
            vec4  pos;
            vec3  normal;
            vec4  color;
            vec2  texcoord0;
            noperspective out vec3 edgedist;
            flat out int edgeflags;
            float selected;
        } gsOut;

        uniform int           attrmodeuv;
        uniform samplerBuffer attruv;

        int HOUprimitiveInfo(out ivec3 vertex); // will be linked in

        void main()
        {
            ....
            ivec3 vertex;
            int prim = HOUprimitiveInfo(vertex);

            if (attrmodeuv == 0)   // point
                gsOut.texcoord0 = gsIn[0].texcoord0;
            else                   // vertex
                gsOut.texcoord0 = texelFetch(attruv, vertex.r).rg;
            ....
        }

    The 'attrmodeuv' uniform selects between point and vertex UVs. The 'attruv' texture buffer object contains the per-vertex uv data. The assignment to gsOut needs to be done once per output vertex: with gsIn[0] and vertex.r for the first vertex, gsIn[1] and vertex.g for the second, and gsIn[2] and vertex.b for the third.
  6. 1 point
    The memory limitations on GPUs have definitely persisted longer than we expected. And unfortunately, even if you can get a 12GB NVIDIA card, their OpenCL driver is still 32-bit at the moment, so you're still limited to 4GB per process. The silver lining is that there are some production-level sims that can fit in 4GB, and we still get a very nice speedup for Pyro / Smoke using OpenCL on the CPU without the memory limitations (particularly with some of the more accurate advection schemes introduced in H14). The newer uses of OpenCL in H14 for the grain solver and FLIP solver only accelerate smaller-but-expensive iterative parts of the sim and are less memory hungry; for example, I think production-scale sims are absolutely possible on the GPU with the grain solver. If you're in a big studio where almost all sims are done on the farm, the lack of GPUs on most render farms is obviously an issue. The OpenCL CPU driver can help there, but there's a bit of a chicken-and-egg issue in getting more GPUs onto the farm. But these days (especially with Indie) a lot of production/commercial-quality work is being done by small studios or individuals; for them, running a big grain sim overnight on a GTX 980 is a really nice option.
  7. 1 point
    You can add a "hittime" attribute in the Collision POP, then use that information to offset the animation in a Timeshift SOP. To be able to use the $HITTIME variable in the Copy SOP, you need to add an Attribute Create SOP after the POP Network (point attribute "hittime", variable "HITTIME") and uncheck Write Values. Then create a stamp variable with something like $T-$HITTIME, or if($HITTIME==-1, 0, $T-$HITTIME), and stamp this into the Timeshift SOP's Time parameter (Method: By Time).
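    A rough sketch of where those values could go - the node name copy1 and the stamp variable name hit are placeholders, not from the post:

        Attribute Create SOP (after the POP Network):
            Name: hittime        Local Variable: HITTIME        Write Values: off

        Copy SOP (e.g. copy1), Stamp tab:
            Stamp Inputs: on
            Variable 1:  hit
            Value 1:     if($HITTIME == -1, 0, $T - $HITTIME)

        Timeshift SOP (inside the chain being copied), Method: By Time:
            Time:  stamp("../copy1", "hit", 0)

    With this, each copy's geometry is evaluated at the time elapsed since its point's hit (and stays at time 0 until a hit has happened), so the copied animation starts at the moment of impact.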
  8. 1 point
    If you are using the defaults for volume rendering, then yes, you can certainly get a handle on the render times by tuning the volume rendering, with little to no sacrifice in quality.

    Volume Step Size
    The main way to tune your volume rendering is the Volume Step Size on the mantra output ROP (Properties > Sampling folder). Step size is in Houdini units in camera space. This also has a great impact on quality; it is a trade-off, and that is what tuning is for. What you are looking for here is this: if your volume object is very large (bigger than, say, 1 Houdini unit), you can safely increase the Volume Step Size and see little to no quality difference. I increase the step size (coarser, faster) while I am tweaking colors and shaping, then bring it back down when tuning the actual fine detail of the smoke.

    Pixel Samples
    This is the second major way to get speed back into your volume rendering. It defaults to 3x3, which is decent quality. Decreasing this to 1x1 will certainly cut your render times, but will introduce visible noise into the final image. If you have a great deal of motion blur and/or DOF, you can get away with turning the samples down a bit. At 2x2, the difference in quality from the 3x3 render is noticeable, but not as bad as 1x1, where sampling noise is clearly evident.

    Shading Quality
    Since we are dicing the volume primitive into micropolygons (non-PBR) before we even shade, shading quality comes into play. You can control the shading quality on a per-object level, and that is exactly where you find this parameter: Object > Render > Dicing > Shading Quality. Increasing it creates more micropolygons and therefore more shader calls, and because we are marching through the volume, the number of shader calls can get out of hand. Decreasing this value will reduce the memory consumed as well as render times, though not nearly as drastically as increasing the step size. If you are unfamiliar with what shading quality does in a micropolygon renderer, you have some homework to do: seek out the odforce wiki and the RenderMan documentation, plus the Advanced RenderMan book. FYI, it controls the size of the micropolygons, which in turn controls the number of times the shaders are called.

    Motion Factor, DOF and Motion Blur
    If you are using Depth of Field (DOF) and/or Motion Blur, you should try out Motion Factor. It controls the Shading Quality dynamically: depending on the amount of DOF or motion blur, it can decrease the shading quality, which means less memory and faster renders.

    Control the Number of Lights
    Since we are marching into a volume, each shader call will loop through all the lights illuminating the current micropolygon in the volume. By limiting the number of lights we need to look up, we get a proportional decrease in render times. Following this, the fewer lights generating and using deep shadow maps, the quicker the render times. Remember it is the deep shadows that give you depth; you may only want one or two lights with this option on, and have the rest of the lights not use shadows and just add bits of illumination. So, if you are using ambient occlusion (the Environment Light from the shelf) with your volume objects, don't. It really slows down your volume rendering, because each shaded micropolygon will generate an awful lot of rays for the raytraced ambient occlusion. Use standard CG lighting techniques to get the look you are after.
    At the default sampling setting of 16 rays, you could ostensibly add up to 16 carefully placed lights and get a similar impact, yet have an awful lot more control. If you really do need ambient occlusion, limit the number of rays.

    Opacity Limit
    When you are rendering geometry with a certain amount of transparency, you can use the opacity limit to stop the current ray once the opacity threshold is met. If used too aggressively, you will see flickering and strange banding in your volume over a sequence of renders as the camera moves or the smoke evolves; this comes from the varying opacity thresholds in the volume, and is most evident if your volumes are wispy and transparent. Here you are chasing a 1-5% decrease in render times, not the large decreases above. The default value is already pretty aggressive, so be careful to avoid adding nasty artefacts.

    Use Standard Displacement and Surface Shader Optimizations
    It's really nice to have volume rendering where we can use standard surface and displacement shader optimization tricks. There should be quite a few tips and ideas on the two main forums. At this point your optimizations are in the 1-5% decrease in render times.

    Note: Both the depth maps (deep shadows with opacity) and the final rendered image benefit from the tuning above.
    Note: If you make a change such as decreasing the step size, you need to regenerate your depth maps.
    Note: Make sure to turn on mantra profiling to see the actual render times and the real impact on rendering speed.