Hi All,
I was just writing a bit of Python code that goes through an object and creates a dictionary of attributes and vertex indices.
It works fine for small meshes (under 100k points), but as the mesh gets bigger it inevitably gets slower. I've tracked the majority of the slowness to a couple of lines that run many times (once per vertex):
attribStack[name].append(v.point().attribValue(attr))
indexStack[name].append(v.point().number())
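For anyone who wants to reproduce the timing, something like `cProfile` shows where the time goes. The workload function below is just a placeholder standing in for the real mesh loop, not my actual code:

```python
import cProfile
import io
import pstats

def build_stacks():
    # Placeholder workload standing in for the per-vertex HOM loop.
    total = 0
    for i in range(100000):
        total += i
    return total

pr = cProfile.Profile()
pr.enable()
result = build_stacks()
pr.disable()

# Print the five most expensive calls by cumulative time.
s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
print(s.getvalue())
```

With the real loop in place of the placeholder, the per-vertex HOM calls show up at the top of the cumulative-time listing.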
I think this is because of how HOM is accessing the info (or rather, how I am accessing the info through HOM).
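One thing I noticed: both lines call `v.point()`, so every vertex triggers two point lookups (plus two dict lookups for the `append`). Caching the point, and hoisting the bound `append` methods out of the loop, trims the per-vertex Python overhead. Here is a minimal sketch with counting stand-in classes (these are mock objects, not real HOM types) showing the lookup count drop:

```python
class Point:
    """Stand-in for a HOM point; real lookups cross into C++ each call."""
    lookups = 0

    def __init__(self, number, value):
        self._number = number
        self._value = value

    def number(self):
        return self._number

    def attribValue(self, attr):
        return self._value

class Vertex:
    """Stand-in for a HOM vertex; point() calls are counted."""
    def __init__(self, point):
        self._point = point

    def point(self):
        Point.lookups += 1
        return self._point

verts = [Vertex(Point(i, i * 0.5)) for i in range(4)]

# Original pattern: v.point() is evaluated twice per vertex.
Point.lookups = 0
values, indices = [], []
for v in verts:
    values.append(v.point().attribValue(None))
    indices.append(v.point().number())
assert Point.lookups == 8   # two lookups per vertex

# Cached pattern: one point() call per vertex, bound appends hoisted.
Point.lookups = 0
values, indices = [], []
add_value, add_index = values.append, indices.append
for v in verts:
    pt = v.point()          # fetch the point once
    add_value(pt.attribValue(None))
    add_index(pt.number())
assert Point.lookups == 4   # one lookup per vertex
```

The results are identical either way; only the number of HOM round-trips changes.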
Here is the code block:
[CODE]
attribStack = {}
indexStack = {}
for attr in myGeo.pointAttribs():
    if attr is not None:
        name = attr.name()
        attribStack[name] = []
        indexStack[name] = []
        attribStack[name].append(str(attr.dataType()) + str(attr.size()))
        indexStack[name].append(str(attr.dataType()) + str(attr.size()))
        for prim in myGeo.iterPrims():
            for vid in xrange(prim.numVertices()):
                # get vertex
                v = prim.vertex(vid)
                attribStack[name].append(v.point().attribValue(attr))
                indexStack[name].append(v.point().number())
[/CODE]

The basic things that I need to happen are:
- Create a list of values for each attrib
- Create a point index on the vertices (for shared data)

Any thoughts on other ways to optimize this bit of code? Perhaps I'm missing the obvious answer.
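One structural change worth trying: in the block as posted, the prim/vertex walk sits inside the attribute loop, so the whole geometry is re-traversed once per attribute, and `v.point()` is re-resolved every time. Inverting the nesting walks the geometry once and reads all attributes from each point after a single lookup. Below is a sketch of that inversion against tiny stand-in classes (`FakeAttr`, `FakePrim`, etc. are mocks of my own, not HOM types); with real geometry you would pass `myGeo` instead. It may also be worth checking whether your build exposes bulk accessors such as `hou.Geometry.pointFloatAttribValues(name)`, which return an entire attribute in one call and avoid the per-vertex Python loop altogether:

```python
class FakeAttr:
    """Stand-in for hou.Attrib (name/dataType/size only)."""
    def __init__(self, name, data_type, size):
        self._name, self._type, self._size = name, data_type, size
    def name(self): return self._name
    def dataType(self): return self._type
    def size(self): return self._size

class FakePoint:
    """Stand-in for hou.Point with per-attribute values."""
    def __init__(self, number, values):
        self._number, self._values = number, values
    def number(self): return self._number
    def attribValue(self, attr): return self._values[attr.name()]

class FakeVertex:
    def __init__(self, point): self._point = point
    def point(self): return self._point

class FakePrim:
    def __init__(self, verts): self._verts = verts
    def numVertices(self): return len(self._verts)
    def vertex(self, i): return self._verts[i]

class FakeGeo:
    def __init__(self, attrs, prims): self._attrs, self._prims = attrs, prims
    def pointAttribs(self): return self._attrs
    def iterPrims(self): return iter(self._prims)

def build_stacks(geo):
    """Single pass over the geometry; all attribs read per point lookup."""
    attrs = [a for a in geo.pointAttribs() if a is not None]
    attribStack, indexStack = {}, {}
    for attr in attrs:
        header = str(attr.dataType()) + str(attr.size())
        attribStack[attr.name()] = [header]
        indexStack[attr.name()] = [header]
    for prim in geo.iterPrims():
        for vid in range(prim.numVertices()):
            pt = prim.vertex(vid).point()   # one point lookup per vertex
            num = pt.number()
            for attr in attrs:
                attribStack[attr.name()].append(pt.attribValue(attr))
                indexStack[attr.name()].append(num)
    return attribStack, indexStack

# Two points shared by one two-vertex prim.
p0 = FakePoint(0, {"P": (0, 0, 0)})
p1 = FakePoint(1, {"P": (1, 0, 0)})
prim = FakePrim([FakeVertex(p0), FakeVertex(p1)])
geo = FakeGeo([FakeAttr("P", "Float", 3)], [prim])
attribStack, indexStack = build_stacks(geo)
# attribStack["P"] -> ['Float3', (0, 0, 0), (1, 0, 0)]
```

The output format matches the original (header string first, then per-vertex values), but the geometry is only walked once no matter how many attributes there are.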