

About EdArt

Personal Information

  • Name
    Ed Arthur
  1. Hi all, I'm running a simple topological VEX function over a mesh, designed for topological symmetry. The function below is run in either a compiled for-loop or a SOP Solver; I got the same performance from each:

     ```
     function int[] crawlmesh5(int geo; int basehedge; int reversedir;
                               int baseindex; int foundpts[]; int foundprims[])
     {
         int newhedges[];

         // early out if prim has been processed
         int primfound = foundprims[hedge_prim(0, basehedge)];
         if (primfound) {
             return newhedges;
         }
         //return newvtxes;

         int localiter = 0;
         int currenthedge = basehedge;
         int vtxpt, lookuptwin;
         do {
             if (localiter > 10) {
                 printf("failsafe-break_");
                 break;
             }
             vtxpt = hedge_dstpoint(0, currenthedge);
             lookuptwin = foundpts[vtxpt];
             if (lookuptwin == -1) {
                 setpointattrib(0, "twin", vtxpt, baseindex);
                 //foundpts[vtxpt] = baseindex;
                 baseindex++;
             }
             append(newhedges, hedge_nextequiv(0, currenthedge));
             if (reversedir) {
                 currenthedge = hedge_prev(0, currenthedge);
             } else {
                 currenthedge = hedge_next(0, currenthedge);
             }
             localiter++;
         } while (currenthedge != basehedge);

         setprimattrib(0, "found", hedge_prim(0, basehedge), 1);
         return newhedges;
     }
     ```

     Since wrangles would produce race conditions in the numbering of found points, I'm running this in a detail wrangle, in a manual loop over an array of half-edges. foundprims and foundpts are passed by reference and used in place of setting component attributes, and both are saved to and loaded from detail attributes between iterations.

     When running this function on a grid of 500 points, I see a slowdown of roughly 8,000% between iteration 7 and iteration 8. The sizes of the arrays are not changing across iterations; as far as I can tell, nothing is changing other than their content. If anyone has any idea what I'm doing wrong, I will be extremely grateful. Thanks
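     For context, the detail-wrangle driver loop looks roughly like this. This is a sketch reconstructed from the description above — the queue attribute name (`hedgequeue`) and exact bookkeeping are illustrative, not the exact production code:

     ```
     // Detail wrangle, cooked once per solver iteration.
     // Load the persistent state stored on detail attributes.
     int foundpts[]   = detail(0, "foundpts");    // point -> twin index, -1 if unseen
     int foundprims[] = detail(0, "foundprims");  // prim  -> 1 once crawled
     int queue[]      = detail(0, "hedgequeue");  // half-edges to expand this pass
     int baseindex    = detail(0, "baseindex");

     int nextqueue[];
     foreach (int h; queue) {
         // VEX passes user-function arguments by reference, so crawlmesh5
         // updates foundpts, foundprims and baseindex in place.
         int newhedges[] = crawlmesh5(0, h, 0, baseindex, foundpts, foundprims);
         foreach (int nh; newhedges) {
             append(nextqueue, nh);
         }
     }

     // Save state for the next iteration.
     setdetailattrib(0, "foundpts", foundpts, "set");
     setdetailattrib(0, "foundprims", foundprims, "set");
     setdetailattrib(0, "hedgequeue", nextqueue, "set");
     setdetailattrib(0, "baseindex", baseindex, "set");
     ```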
  2. Hi, I'm looking for a way to fine-tune exactly which visible OBJ nodes appear through a given camera. I'm working on a Wrap-like tool similar to what Pixar show here: In terms of having the two camera views side by side, they have two separate OBJ networks for the source and destination meshes; I would like to contain the process in one network. If I have a simple OBJ network like so: And two cameras pointing at it, is there a way for Camera_A to show only sphere1 in the viewport and hide box1, despite the fact that both are marked visible? I don't believe it's an option to collect the specific OBJ nodes together in a Geometry node via Object Merges or similar, and just point the camera at that, since the OBJ transforms need to remain directly manipulable through the camera. Thanks for your help
  3. UVs and connectivity

    I figured so, thanks. I guess Houdini also has primitive groups to solve the ambiguities that could arise out of it, but it's interesting that depending on how your uvs move or unfold, random uv points could end up sticking together because they happen to be on top of each other.
  4. UVs and connectivity

    Is there any "real" attribute in a mesh that defines how vertex UVs are connected together? Or is it only implicit, so if two vertex UVs have the same position, they are considered merged? For contrast, in Maya a UV set is considered its own pseudo mesh, with the same UV point connecting to multiple face vertices, its own distinct topology buffers etc.
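     A quick way to see the implicit welding in practice is a vertex wrangle that flags UV seam vertices by comparing the uv values of vertices sharing a point. This is a sketch — the attribute name `uvseam` and the tolerance are arbitrary:

     ```
     // Vertex wrangle: flag vertices whose uv differs from another
     // vertex on the same point -- i.e. the implicitly "unwelded" case.
     int pt = vertexpoint(0, @vtxnum);
     vector myuv = vertex(0, "uv", @vtxnum);
     i@uvseam = 0;
     foreach (int v; pointvertices(0, pt)) {
         if (v == @vtxnum) continue;
         vector otheruv = vertex(0, "uv", v);
         if (distance(myuv, otheruv) > 1e-6) {
             i@uvseam = 1;  // shares a point but not a uv position
             break;
         }
     }
     ```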
  5. A group of friends and I are starting a small project, and while we're doing all rigging/animation in Maya, some characters rely on effects that would be much more easily achieved in Houdini. These are generally quite basic, and we're looking to minimise the amount of shot-specific work to be done — I'll ideally write each effect once and use it over the whole show.

     Once I have an abc or fbx export, what is the best way of reading it in Houdini at the maximum level of detail? For example, one character has panels of LEDs that form screens, which in turn display depth maps calculated from each LED's normal. There are about four hundred separate LEDs, so ideally I would have a locator for each one, with X facing out along the normal, bake out the character anim as fbx, import it into Houdini, and just instance a setup across each locator — but I don't know how to do that.

     Other characters require things like sparks between contacts, thrusters, etc. How is it best to achieve this? Via metadata? A naming convention? Custom attributes on locators?

     The other approach is to use HDAs in Maya, which I know nothing about — I'd ideally like to keep the two applications separate, if for nothing other than to make look dev easier. Let me know your ideas. Thanks!
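     For the LED case, if each locator's facing axis comes in as a normal on points, a point wrangle can build an orient quaternion that the copy/instance SOPs will pick up. A sketch, assuming v@N carries each LED's local X axis after import (that attribute mapping is an assumption, not something the FBX import guarantees):

     ```
     // Point wrangle over imported locator points. Assumes v@N holds
     // each LED's facing direction (the locator's local X axis).
     vector x = normalize(v@N);
     vector up = {0, 1, 0};
     if (abs(dot(x, up)) > 0.999) up = {0, 0, 1};  // avoid a degenerate cross product
     vector z = normalize(cross(x, up));
     vector y = cross(z, x);
     p@orient = quaternion(set(x, y, z));  // copy/instance SOPs read p@orient
     ```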
  6. Simple project to learn Houdini

    I'm new to this place, so apologies if I'm being dumb about anything. The project is to fire a beam at a piece of geometry and have it fracture at the point of impact, like a simple sci-fi gunshot. I have a rough idea of how to do the destruction, but which nodes should I use to cast the ray and then find the intersection? Also, how do you even define a direction or a vector to start off with? Currently I'm using a very long single-span curve, but I'd like to find a way to cast the ray infinitely. Any help is much appreciated. Thanks very much.
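    One way to cast the ray in VEX rather than with a long curve is intersect() against a second input, with a large length standing in for "infinite". A sketch — the channel names for origin and direction are placeholders you'd promote to parameters:

    ```
    // Detail wrangle: input 1 holds the geometry to be hit.
    vector orig = chv("ray_origin");                 // e.g. the gun muzzle
    vector dir  = normalize(chv("ray_direction"));   // aim direction
    vector hitpos, hituvw;
    // intersect() takes a ray *vector*, so scale the direction by a
    // big length to approximate an infinite ray.
    int hitprim = intersect(1, orig, dir * 1e6, hitpos, hituvw);
    if (hitprim >= 0)
        addpoint(0, hitpos);  // impact point to drive the fracture
    ```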