
f1480187
Members
Posts: 758
Days Won: 76
f1480187 last won the day on September 18 2022
f1480187 had the most liked content!
Personal Information
Name: F1
adding points on the corner of a polygon with subdivs inside
f1480187 replied to caskal's topic in General Houdini Questions
If you manage to isolate the corner points, you can use Falloff and PolyCut nodes to cut the surrounding geometry and output several types of corner geometry, including the point you need.
points_at_shape_vertices.hipnc
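If it helps, here is a minimal Python sketch of one way to isolate those corner points (my own illustration, not part of the attached file; it assumes a grid-like shape where corner points are used by exactly one primitive):
node = hou.pwd()
geo = node.geometry()
# Interior points of a subdivided grid touch 4 quads, edge points touch 2, corners only 1.
corner_points = [pt for pt in geo.points() if len(pt.prims()) == 1]
print([pt.number() for pt in corner_points])
-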
Hmm, weird that it did work for me earlier. The stderr parameter is required to skip the console popup (on Windows). Try to catch the subprocess.CalledProcessError exception:
try:
    out = subprocess.check_output(path, shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError, e:
    out = e.output
This now works for me in a clean scene.
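As a side note, here is a small wrapper I would use for this (my own sketch, not from the original thread); it returns the command output whether or not the command exits with an error, and uses the "except ... as" form, which also works in newer Python versions:
import subprocess

def run_quiet(cmd):
    # stderr=STDOUT merges stderr into the captured output and avoids the console popup on Windows.
    try:
        return subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        # Non-zero exit status: the output is still available on the exception.
        return e.output

out = run_quiet('gconvert')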
-
I don't know a reliable, future-proof way to get the exact list of geometry files for the File node.
1. To make hou.ui.selectFile() ask for geometry, pass the file_type=hou.fileType.Geometry parameter.
2. You can parse data from "$HH/GEOfiles" and "$HH/GEOio.json", which appear to provide the most common geometry formats.
3. To get the most extensive list of supported geometry, I'd parse the help message of the gconvert utility:

import re
import subprocess

formats = {'Read': set(), 'Write': set()}

# Capture output of gconvert utility.
path = 'gconvert'
try:
    out = subprocess.check_output(path, shell=True, stderr=subprocess.STDOUT)
except:
    pass

# Split output into major pieces.
_, builtin, _, external_translators = re.split(r'-- Built-In --|-- Translators --|-- External Translators --', out)

# Parse "Built-In" piece.
builtin = set(builtin.partition('Recognized Extensions:')[2].replace(',', '').split())
formats['Read'] |= builtin
formats['Write'] |= builtin

# Parse "External Translators" piece.
external_translators = re.findall(r'\((Read)-(?:Only|(Write))\)\n.*: ([\w., ]+)', external_translators)
for read, write, extensions in external_translators:
    extensions = set(extensions.split(', '))
    if read:
        formats['Read'] |= extensions
    if write:
        formats['Write'] |= extensions

# Manually add 2 formats from "Translators" piece and also add FBX read.
formats['Read'] |= {'.hgt', '.fbx', '.vdb'}
formats['Write'] |= {'.vdb'}

It will output this:

{'Read': {'.abc', '.ai', '.bgeo', '.bgeo.bz2', '.bgeo.gz', '.bgeo.lzma', '.bgeo.sc', '.bgeogz', '.bgeosc', '.bhclassic', '.bhclassic.bz2', '.bhclassic.gz', '.bhclassic.lzma', '.bhclassic.sc', '.bhclassicgz', '.bhclassicsc', '.bjson', '.bjson.gz', '.bjson.sc', '.bjsongz', '.bjsonsc', '.bstl', '.dxf', '.eps', '.fbx', '.geo', '.geo.bz2', '.geo.gz', '.geo.lzma', '.geo.sc', '.geogz', '.geosc', '.hbclassic', '.hbclassic.gz', '.hbclassic.sc', '.hbclassicgz', '.hbclassicsc', '.hclassic', '.hclassic.bz2', '.hclassic.gz', '.hclassic.lzma', '.hclassic.sc', '.hclassicgz', '.hclassicsc', '.hgt', '.iges', '.igs', '.json', '.json.gz', '.json.sc', '.jsongz', '.jsonsc', '.lw', '.lwo', '.obj', '.off', '.pc', '.pdb', '.ply', '.pmap', '.stl', '.vdb'},
 'Write': {'.abc', '.bgeo', '.bgeo.bz2', '.bgeo.gz', '.bgeo.lzma', '.bgeo.sc', '.bgeogz', '.bgeosc', '.bhclassic', '.bhclassic.bz2', '.bhclassic.gz', '.bhclassic.lzma', '.bhclassic.sc', '.bhclassicgz', '.bhclassicsc', '.bjson', '.bjson.gz', '.bjson.sc', '.bjsongz', '.bjsonsc', '.bstl', '.dxf', '.geo', '.geo.bz2', '.geo.gz', '.geo.lzma', '.geo.sc', '.geogz', '.geosc', '.hbclassic', '.hbclassic.gz', '.hbclassic.sc', '.hbclassicgz', '.hbclassicsc', '.hclassic', '.hclassic.bz2', '.hclassic.gz', '.hclassic.lzma', '.hclassic.sc', '.hclassicgz', '.hclassicsc', '.iges', '.igs', '.json', '.json.gz', '.json.sc', '.jsongz', '.jsonsc', '.lw', '.lwo', '.obj', '.off', '.pc', '.pdb', '.ply', '.pmap', '.stl', '.vdb'}}

The difference may not be worth it, but it exists:

>>> formats['Read'] - formats['Write']
{'.ai', '.eps', '.fbx', '.hgt'}
>>> formats['Write'] - formats['Read']
set()
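For what it's worth, a possible use of that table (my own sketch; the directory path is just an example) would be filtering a folder listing down to files the File SOP should be able to read:
import os

def readable_geo(directory, read_exts):
    # Keep files whose name ends with any supported extension,
    # including multi-part ones like ".bgeo.sc".
    for name in sorted(os.listdir(directory)):
        if any(name.lower().endswith(ext) for ext in read_exts):
            yield os.path.join(directory, name)

for path in readable_geo('/tmp/geo', formats['Read']):
    print(path)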
-
@henderthing it's a preprocessor macro, similar to the ones in the C language. The Houdini 17.5 docs are not very informative on them. Fortunately, you can use a C tutorial for this: http://www.zentut.com/c-tutorial/c-macros/ We have both object and function macros in VEX. Be careful about:

...precedence:
#define FOURv1 2 + 2
#define FOURv2 (2 + 2)

printf("8 == %d using FOURv1\n", FOURv1 * 2);  // 2 + 2 * 2
printf("8 == %d using FOURv2\n", FOURv2 * 2);  // (2 + 2) * 2

...more precedence:
#define TWICEv1(arg) arg * 2
#define TWICEv2(arg) (arg * 2)
#define TWICEv3(arg) ((arg) * 2)

printf("8 == %d using TWICEv1\n", TWICEv1(1 + 1) * 2);  // Expanded as: 1 + 1 * 2 * 2
printf("8 == %d using TWICEv2\n", TWICEv2(1 + 1) * 2);  // Expanded as: (1 + 1 * 2) * 2
printf("8 == %d using TWICEv3\n", TWICEv3(1 + 1) * 2);  // Expanded as: ((1 + 1) * 2) * 2

...scoping and line continuation:
#define REPEATv1(arg) \
    arg; \
    arg;

#define REPEATv2(arg) \
{ \
    arg; \
    arg; \
}

if (0)
    REPEATv1(printf("This should never print (REPEATv1)."))
/* Above expanded as:
if (0)
    printf("This should never print (REPEATv1).");
printf("This should never print (REPEATv1)."); */

if (0)
    REPEATv2(printf("This should never print (REPEATv2)."))
/* Above expanded as:
if (0)
{
    printf("This should never print (REPEATv2).");
    printf("This should never print (REPEATv2).");
} */

Note: you may want to omit the scope braces when the macro should create variables in the scope where it is used. And never put a trailing space after the backslash, as it will break the code with a cryptic error. For useful examples, search the VEX libraries in $HFS/houdini/vex/. For example, the ADVECT_POINTS macro in $HFS/houdini/vex/include/groom.h at line 265.
-
Hi. How about computing a local space per primitive instead, and then getting the noise position from the point position in that local space? Some sort of edge-based UV unwrap.

// Primitive wrangle.
int pts[] = primpoints(0, @primnum);

// Compute averaged primitive normal from point normals computed from their neighbours.
vector normals[];
foreach (int pt; pts)
{
    vector normalized_edges[];
    vector pt_pos = point(0, "P", pt);
    foreach (int nb; neighbours(0, pt))
    {
        vector nb_pos = point(0, "P", nb);
        append(normalized_edges, normalize(pt_pos - nb_pos));
    }
    append(normals, normalize(avg(normalized_edges)));
}
vector normal = normalize(avg(normals));

// Compute edge tangent.
vector pt0 = point(0, "P", pts[0]);
vector pt1 = point(0, "P", pts[1]);
vector edge = normalize(pt0 - pt1);

// Compute bitangent and orthonormalize matrix.
vector perp = normalize(cross(normal, edge));
normal = normalize(cross(edge, perp));

3@tangent_space = set(perp, normal, edge);

Final deformation code:

// Point wrangle.
int prim;
vector uv;
xyzdist(1, @P, prim, uv);
matrix3 tangent_space = prim(1, "tangent_space", prim);
vector pos = @P * invert(tangent_space);
float deform = noise(pos * {10, 1, 100}) * 0.05;
v@P += v@N * deform;

Some image sampling could work too:
tangent_space_noise.hipnc
-
Hi. Here is the crappy function I could come up with. It returns a list of lists with the questions answered by each kid. The first question is answered by one kid, the last question is answered by everyone. Kids get some kind of brain penalty for answering, so they won't know the answer next time. I found that if I force this strictly, I get visible switching between two "teams" in the middle, so randomized sampling takes precedence after a few questions. The function returns after everyone has answered about X times, 5 in your case.

from random import seed, shuffle, randint
from numpy import median

seed(hou.ch('seed'))

def quiz(num_kids, median_answers, randomness=0.8):
    # Empty lists accumulating answered questions per kid.
    kids = [[] for _ in range(num_kids)]

    # Ask until median amount of answers per kid is met.
    i = 0
    while True:
        # Randomize pending kids to avoid halves switching pattern.
        slice = int(len(kids) * randomness)
        a = kids[:slice]
        b = kids[slice:]
        shuffle(a)
        kids = a + b  # Comment this to see problem.

        # Score approximately i answers.
        pick = randint(max(1, i), min(i + 1, num_kids))
        clever = kids[:pick]
        dumb = kids[pick:]
        clever = [scores + [i] for scores in clever]

        # Extra shuffle for cases where randomness is near 0.
        shuffle(clever)
        kids = dumb + clever

        # Finish if kids answered enough questions.
        if median([len(s) for s in kids]) >= median_answers:
            # Complete last row.
            kids = [s + [i] if i not in s else s for s in kids]
            return kids, i
        i += 1

Close neighbors can still answer simultaneously, though.
random_kids.hipnc
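A quick usage sketch (my own example values): 20 kids, everyone answers roughly 5 questions.
kids, last_question = quiz(20, 5)
for kid, answered in enumerate(kids):
    print('kid %02d answered questions %s' % (kid, answered))
print('%d questions asked in total' % (last_question + 1))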
-
@bonassus that will take an arbitrary perpendicular to y. Using a random constant is usually enough, except it may break with vectors collinear to the constant. rot is an orthonormal matrix with y pointing at dir and an arbitrary rotation around the y axis.
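In Python terms, the idea looks roughly like this (my own sketch with made-up numbers, not the original VEX): pick an arbitrary constant up vector, then orthonormalize with two cross products so y points at dir.
import hou

direction = hou.Vector3(0.3, 0.8, 0.5).normalized()
up = hou.Vector3(0.12, 0.56, 0.89)           # random constant; breaks if collinear with direction
x = direction.cross(up).normalized()         # arbitrary perpendicular
z = x.cross(direction).normalized()          # completes the orthonormal frame
rot = hou.Matrix3((x[0], x[1], x[2],
                   direction[0], direction[1], direction[2],
                   z[0], z[1], z[2]))        # rows: x, y (= direction), z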
-
It's better to deal with it upstream than to post-process the collapsed geometry. How about grouping unshared points and counting them with the npointsgroup() expression function? The grid will have plenty of points in that group, while a piece missing a quad will have only 4.
Condition_problem_fix.hipnc
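The same check can be sketched in Python if an expression isn't convenient (my own sketch; the node path and group name are assumptions, with the group created upstream by a Group SOP set to unshared edges):
node = hou.node('/obj/geo1/group_unshared')
geo = node.geometry()
grp = geo.findPointGroup('unshared')
unshared_count = len(grp.points()) if grp is not None else 0
# The full grid has plenty of unshared points; a piece missing a quad has only 4.
is_grid = unshared_count > 4
print(is_grid)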
-
# Initialize new ramp.
node = hou.pwd()
ramp_parm = node.parm('ramp')
bases = [hou.rampBasis.Linear] * 5
keys = [0.0, 0.125, 0.25, 0.375, 0.5]
values = [0.0, 0.25, 0.5, 0.75, 1.0]
ramp = hou.Ramp(bases, keys, values)
ramp_parm.set(ramp)

# Add multiple points to existing ramp.
node = hou.pwd()
ramp_parm = node.parm('ramp')
ramp = ramp_parm.eval()
bases = list(ramp.basis()) + [hou.rampBasis.Constant] * 4
keys = list(ramp.keys()) + [0.5, 0.625, 0.75, 0.875]
values = list(ramp.values()) + [0.25, 0.5, 0.75, 1.0]
ramp = hou.Ramp(bases, keys, values)
ramp_parm.set(ramp)

# Change first point value from 0.0 to 1.0.
node = hou.pwd()
point1_val = node.parm('ramp1value')  # Point indexing starts from 1.
point1_val.set(1.0)

python_ramp.hipnc
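If you also need to read values back, a possible follow-up (my addition, not from the original answer) is to evaluate the parameter as a ramp and sample it:
node = hou.pwd()
ramp = node.parm('ramp').evalAsRamp()
# Interpolated value at position 0.3, using the bases set above.
print(ramp.lookup(0.3))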
-
Using Python:

'Time: %.1f' % hou.time()

Using VEX sprintf():

// Detail wrangle.
s@num = sprintf("%.1f", @Time);

in the Font node:

Time: `details("../attribwrangle1", "num")`

format_font.hipnc
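Alternatively, the Python version can be attached directly to the Font node's text parameter as an expression, so it updates with the playbar (my own sketch; the node path is an assumption):
font = hou.node('/obj/geo1/font1')
font.parm('text').setExpression("'Time: %.1f' % hou.time()", hou.exprLanguage.Python)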
-
This is the proper way, I think. It is commonly used in $HH/python2.7libs to filter nodes by type: https://pastebin.com/EnmhzJKj
You can use a list comprehension to make it a one-liner:
vops = [c for c in node.children() if c.type().name() == 'vopsurface']
-
In the Operator Type Properties → Scripts tab, select the "On Created" event handler and change "Edit as" to Python. Here is the basic code:

node = kwargs['node']
node.setName('some_special_name')

For default operators you need to make a shelf tool, create the node, and change its name manually in the tool script. Custom assets also use a tool script, found in the Operator Type Properties → Tools tab and looking like this:

import soptoolutils
soptoolutils.genericTool(kwargs, '$HDA_NAME')

The function call on the last line returns the fully created node instance, after the On Created event has been executed. You can use it too:

import soptoolutils
node = soptoolutils.genericTool(kwargs, '$HDA_NAME')
node.setName('some_special_name')

It's more logical to use On Created, but sometimes it doesn't work. For example, if I remember correctly, you can't set the node position in it; it will be overridden.
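One small detail worth adding (my note, not from the original post): setName() raises an error if the name is already taken, so in either script you may want to let Houdini uniquify it:
node = kwargs['node']
# unique_name=True appends a numeric suffix instead of failing on a name clash.
node.setName('some_special_name', unique_name=True)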
-
asset_node = hou.node('/obj/geo1/my_asset')
file_nodes = [c for c in asset_node.allSubChildren() if c.type().name() == 'file']
hou.copyNodesTo(file_nodes, asset_node.parent())
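A variant of the last line (my addition): hou.copyNodesTo() returns the new copies, so if you capture the return value you can lay them out next to the asset right away.
copies = hou.copyNodesTo(file_nodes, asset_node.parent())
asset_node.parent().layoutChildren(copies)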
-
Does it cause problems? It is only for display. There are no relative or absolute nodes in HOM, as far as I know.
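If the question is about paths: in HOM a node is just a node, and relative versus absolute only shows up when you turn it into a string (a quick sketch, with made-up node paths):
a = hou.node('/obj/geo1/file1')
b = hou.node('/obj/geo1/xform1')
print(a.path())             # absolute: '/obj/geo1/file1'
print(b.relativePathTo(a))  # relative string: '../file1'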