Everything posted by pezetko

  1. Cascadeur

    Cascadeur is not bad, but the general UX was not good in the previous version. Rigging is also lacking a lot, and some animators hate the AI that adjusts poses for them. I hope they improved the UX in the rewrite. The license policy for the open beta is also very nice. For another potential Maya rival, there is also https://rumba-animation.com/ It all depends on support, speed, stability, and pricing policy. Maya has a huge advantage in its API and prevalence.
  2. [Solved] Custom LiDAR importer with python

    Just realized that there is a difference between inFile.x (the x position with scale and offset applied) and inFile.X (the raw X position without any scale or offset). So the Lidar Import SOP produces scaled and offset positions from the las file. And for *.laz there is the laszip executable from https://rapidlasso.com/laszip/ that you have to have somewhere on your PATH. There is also the C++ based laz-perf: https://pypi.org/project/lazperf/
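
    A minimal sketch of that difference (using the laspy 1.x API from these posts; simple.las is the example file from the laspy repository):

    from laspy.file import File

    with File("simple.las", mode="r") as inFile:
        raw = inFile.X                     # raw integer X values as stored in the file
        scaled = inFile.x                  # the same values with scale and offset applied
        scale = inFile.header.scale[0]
        offset = inFile.header.offset[0]
        # inFile.x is equivalent to inFile.X * scale + offset
        assert abs(scaled[0] - (raw[0] * scale + offset)) < 1e-6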
  3. [Solved] Custom LiDAR importer with python

    Requests for Enhancement go here: https://www.sidefx.com/bugs/submit/ I was able to download the file from Google Drive, but my old K4000 can't display it in Houdini.
  4. [Solved] Custom LiDAR importer with python

    Just a few more optimizations to native Python types, and testing on a bigger las file. The Python laspy SOP is 1.2-1.6x slower than the Lidar Import SOP (1.6x if I apply scale and offset and add the classification attribute, which the Lidar Import SOP does not do), not bad. This is the code:

    from laspy.file import File
    import numpy as np

    node = hou.pwd()
    geo = node.geometry()
    file_path = geo.attribValue("file_path")

    def load_color(inFile):
        # return packed colors only if the point format actually stores red/green/blue
        missing_color = ["red", "green", "blue"]
        for spec in inFile.point_format:
            if spec.name in missing_color:
                missing_color.remove(spec.name)
        if missing_color:
            return None
        color = np.vstack((inFile.red, inFile.green, inFile.blue)).transpose()
        return (color / 255.0).reshape(-1)  # transform from 0-255 to 0.0-1.0 range

    with File(file_path, mode='r') as inFile:
        # --- load point position (Y and Z swapped for Houdini's Y-up axis)
        coords = np.vstack((inFile.X, inFile.Z, inFile.Y)).transpose()  # 632 ms
        scale = inFile.header.scale  # should already be an np.array
        offset = inFile.header.offset  # there is no offset in the simple.las example from the laspy library
        # --- compute scaled and offset positions and flatten into a 1d array
        pos = (coords * scale + offset).reshape(-1)  # 300 ms
        geo.setPointFloatAttribValues("P", pos.tolist())  # 2203 ms
        # --- add classification attribute
        geo.addAttrib(hou.attribType.Point, "classification", 0, False, False)
        geo.setPointFloatAttribValues("classification", inFile.Classification.tolist())  # 450 ms
        # --- load color
        colors = load_color(inFile)
        if colors is not None:
            geo.addAttrib(hou.attribType.Point, "Cd", (1.0, 1.0, 1.0), False, False)  # add color attribute
            geo.setPointFloatAttribValues("Cd", colors.tolist())
        # --- load intensity
        geo.addAttrib(hou.attribType.Point, "intensity", 0.0, False, False)  # add intensity attribute
        geo.setPointFloatAttribValues("intensity", (inFile.intensity / 512.0).tolist())  # normalize to roughly 0.0-1.0 range
  5. [Solved] Custom LiDAR importer with python

    Nice, I think a fast SSD is much more important nowadays. I didn't try *.laz with laspy, it may be worth a try. I know I submitted a few RFEs for the Lidar Import SOP, one for *.laz support as well as one for supporting a newer version of the *.las format. If you find some of those features important, submit yours; more "votes" (RFEs) don't hurt. Btw: if you replace np.concatenate(array) with np.ravel(array), the second one is 10x faster. And array.reshape(-1) is even faster than that (but only by a few ms).
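
    A quick way to compare the three flattening approaches, as a sketch (absolute timings will of course vary by machine and array size):

    import numpy as np
    import timeit

    arr = np.random.rand(1000000, 3)

    print(timeit.timeit(lambda: np.concatenate(arr), number=3))   # slowest: iterates over the rows as separate arrays
    print(timeit.timeit(lambda: np.ravel(arr), number=3))         # much faster: returns a flattened view when possible
    print(timeit.timeit(lambda: arr.reshape(-1), number=3))       # fastest by a few ms: no copy for contiguous arrays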
  6. [Solved] Custom LiDAR importer with python

    Julien means the same thing I already mentioned: loading and building attributes in Python is much slower than doing the same thing in C++ most of the time. The Lidar Import SOP is a fast C++ node compared to pylas. Loading your *.las example takes 20 ms with the Lidar Import SOP on my machine, but the same file takes almost 6 seconds with a Python SOP using pylas (3 seconds just for transforming the numpy ndarray to a serialized form and another 2.5 seconds for the setPointFloatAttribValues method). If you have enough memory and you end up reading most of the file anyway, it's faster to use the Lidar Import SOP to load all the points quickly, then use a Python SOP to add only the additional data that the Lidar Import SOP cannot read (like classification), and blast what you don't need. Like the sketch below:
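
    A minimal sketch of that hybrid setup (the original post attached a scene; here the Python SOP sits after a Lidar Import SOP, the las path is hypothetical, and point order is assumed to match the file):

    # Python SOP wired after a Lidar Import SOP: the points already exist,
    # so we only attach the classification values from the same *.las file.
    from laspy.file import File

    node = hou.pwd()
    geo = node.geometry()

    with File("/path/to/file.las", mode="r") as inFile:  # hypothetical path
        geo.addAttrib(hou.attribType.Point, "classification", 0, False, False)
        geo.setPointFloatAttribValues("classification", inFile.Classification.tolist())

    A Blast SOP downstream can then delete points by classification with a group expression.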
  7. [Solved] Custom LiDAR importer with python

    Hi, no, this is just the default comment from the Python SOP. In the latest version, https://forums.odforce.net/applications/core/interface/file/attachment.php?id=56259 classification is just an integer attribute on the points.
  8. [Solved] Custom LiDAR importer with python

    If you have any memory issues, just check the latest updated version that uses a try/except block: https://forums.odforce.net/applications/core/interface/file/attachment.php?id=56259 I wasn't sure if laspy implements the context-management protocol (so a "with File() as fp:" statement might not close the file at exit). Just verified that, and the with File() as fp: statement works as it should.
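
    One quick way to verify that, as a sketch: a class is usable in a with-statement only if it implements __enter__ and __exit__.

    from laspy.file import File

    # True only if File implements the context-management protocol
    print(hasattr(File, "__enter__") and hasattr(File, "__exit__"))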
  9. [Solved] Custom LiDAR importer with python

    You are welcome. Submitting RFEs doesn't hurt. The C++ implementation is still much faster than laspy, but it's possible to do an easy, quick, specific modification for las import until the LIDAR Import SOP gets some improvements, without going the C++ route. I attached an example that adds classification as an attribute (and closes the las file to avoid memory issues). I added an updated file, pz_load_las_with_python_classification_attribute_memory_fix.hipnc, as it looks like the context-management protocol is not implemented in laspy (a with File(): block will not close the file at exit), so I switched to try/except/finally instead. It will not error on the node, so watch the Python console for exception logging.
  10. [Solved] Custom LiDAR importer with python

    The classification works fine. But with inFile = inFile.points[I] you are overwriting the inFile object with a multi-dimensional array of all attributes, so you can no longer get them via .x/.y/.z or the other named properties. I uploaded a modified scene where you can set a classification and it returns only the subset of points that match it. inFile.points[I] returns the subset of points where I is True; inFile.Classification == 2 returns an array of True/False values indicating which points are classified with id 2. Another approach would be adding Classification as an attribute to all points and then using VEX expressions, groups, or other partitioning mechanisms to separate the points. pz_load_las_with_python_classification.hipnc
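
    A sketch of that masking idea without clobbering the File object (simple.las is the example file from the laspy repository):

    from laspy.file import File

    inFile = File("simple.las", mode="r")
    mask = inFile.Classification == 2    # array of True/False per point
    ground = inFile.points[mask]         # subset of raw point records with class id 2
    # inFile itself is untouched, so inFile.x / inFile.y / inFile.z still work
    inFile.close()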
  11. [Solved] Custom LiDAR importer with python

    Hi, pretty neat library! Thank you for the tip. There is no need for csv, you can do a lot with laspy and numpy themselves. Attached is an example scene that loads data from a las file. It seems that the Lidar Import SOP ignores scale and offset. To make it work (18.0.499, Python 2.7 branch) I cloned the https://github.com/laspy/laspy repository, then copied the content of the laspy folder to $HOME/houdini18.0/python2.7libs/laspy, so I have $HOME/houdini18.0/python2.7libs/laspy/__init__.py (and the rest of the library) and it's possible to load it into Houdini with import laspy in the Python shell. (Numpy is already included with Houdini.) I used the example file from the repository: https://github.com/laspy/laspy/blob/master/laspytest/data/simple.las

    import logging
    from laspy.file import File
    import numpy as np

    node = hou.pwd()
    geo = node.geometry()
    file_path = geo.attribValue("file_path")

    inFile = File(file_path, mode='r')
    try:
        # --- load point position
        coords = np.vstack((inFile.X, inFile.Y, inFile.Z)).transpose()
        scale = np.array(inFile.header.scale)
        offset = np.array(inFile.header.offset)  # there is no offset in the simple.las example from the laspy library
        # offset = np.array([1000, 20000, 100000])  # just for testing that offset works
        # geo.setPointFloatAttribValues("P", np.concatenate(coords))  # same as Lidar Import SOP - seems that it ignores scale (and offset?)
        geo.setPointFloatAttribValues("P", np.concatenate(coords * scale + offset))
        # --- load color
        color = np.vstack((inFile.red, inFile.green, inFile.blue)).transpose()
        geo.addAttrib(hou.attribType.Point, "Cd", (1.0, 1.0, 1.0), False, False)  # add color attribute
        geo.setPointFloatAttribValues("Cd", np.concatenate(color / 255.0))  # transform from 0-255 to 0.0-1.0 range
    except Exception:
        logging.exception("Processing lidar file failed")
    finally:
        inFile.close()

    pz_load_las_with_python.hipnc
  12. The little things about houdini

    Hi,

    The script is an option for how to enable it; not everything in Houdini has its own GUI (yet). Houdini is a younger application (not counting PRISM) compared to Cinema 4D. There is an RFE for it. In the meantime you can add these callbacks to your $HOME\houdini18.0\python2.7libs\pythonrc.py:

    import hou

    def checkForUnsavedChangesEvent():
        if hou.hipFile.hasUnsavedChanges():
            win = hou.qt.mainWindow()
            title = win.windowTitle()
            win.setWindowTitle(title + " *")
            hou.ui.removeEventLoopCallback(checkForUnsavedChangesEvent)  # remove multiple evaluations

    def restoreCheckForUnsavedChangesEvent(event_type):
        eventType = hou.hipFileEventType
        if event_type in (eventType.AfterClear, eventType.AfterLoad, eventType.AfterSave):
            hou.ui.addEventLoopCallback(checkForUnsavedChangesEvent)  # restore the callback

    hou.hipFile.addEventCallback(restoreCheckForUnsavedChangesEvent)
    hou.ui.addEventLoopCallback(checkForUnsavedChangesEvent)

    Most companies use some sort of tracking software like Shotgun or Ftrack that provides a screenshot-saving feature.

    The drag of what, and drop to where? This topic is so broad that it's hard; each user has different preferences. A customizable option is best for everyone. Maybe SideFX could include some common (user-requested) examples shipped with Houdini?

    Last time I checked, Houdini starts up within 4 to 10 seconds.

    The multi-tab interface sounds interesting, but I'm not sure about the benefits: What kind of isolation should there be between tabs/processes? Is it only a UX thing, or is there a shared core? How are environment variables shared? What about loaded packages (plugins), etc.? You can use subnets for each project.

    Submit RFEs with examples! Not everybody knows Cinema 4D's features and how it behaves, what is good about it, and what could be done a better way.

    There is a flat skin in the qLib tools.

    Cheers, Petr
  13. Default autosave

    You can try to set it directly in your 456.cmd file ($HOME/houdiniX.Y/scripts/456.cmd, where X = major version, Y = minor version, e.g. 18.0):

    autosave on

    https://www.sidefx.com/forum/topic/25350/

    Or you can ask the user what they want: https://www.sidefx.com/forum/topic/12971/#post-61684

    I like the idea of having it in the File menu under File Save As too: https://www.sidefx.com/forum/topic/12971/#post-256607

    An environment variable to force it on/off wouldn't be bad, but it is a little bit harder to discover: https://www.sidefx.com/forum/topic/45068/#post-201467 and there are already plenty of other options.

    Always-on in the preferences (GUI) was probably already requested, but you can add an RFE (a voice) for it too.
  14. Color Management Rules ?

    What do you mean by studio rules? You can generate your own OCIO config file with all the rules set exactly as you want. See the spi-vfx or spi-anim configs for examples of more straightforward setups.
  15. If you don't mind floating windows, you can right-click on the node and select "Preview Window..." for that node. Works for individual SOP nodes too.
  16. Load ASCII PTX data into Houdini

    Hi, there is also a nice Load Data Table asset on Orbolt from SideFX, written in VEX: https://www.orbolt.com/asset/ndickson::load_data_table and for e57 just use the Lidar Import SOP.
  17. Use the string version of the detail expression function, with an s at the end: details()
  18. File Cache SOP / checking for "dirtiness"

    Hi Christopher, everything is in the hip file attached under the video in my previous post. The Python code is in the extra parameters on the nodes. The Attrib Create node has the code that computes the checksum; it's stored as a detail attribute in the info block. The File Cache node has the code that reads it from the info block and compares it against the current value. Everything is evaluated when the nodes are cooked. This is also the issue: the File Cache node is colored only when it's cooked, and it requires its input to be cooked too to get the correct current value. It is just a prototype. For production I would probably put all the code into a separate Python module, or directly into an HDA, and load it from there instead. An event callback could be more resilient to the propagation of changes in the node tree too, but I didn't try that. A sketch of the idea follows.
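
    A rough sketch of that checksum idea (the helper names and the "input_checksum" attribute name are assumptions; the actual code lives in the attached hip file): hash the parameters of every upstream node, store the digest on the cached geometry as a detail attribute, and compare it on cook.

    import hashlib

    def input_graph_checksum(node):
        # serialize every upstream parameter into one string and hash it
        items = []
        for n in node.inputAncestors() + (node,):
            for parm in n.parms():
                items.append("%s=%s" % (parm.path(), parm.evalAsString()))
        return hashlib.md5("\n".join(items).encode("utf-8")).hexdigest()

    def is_dirty(file_cache_node):
        # assumes the digest was written to a detail attribute named "input_checksum"
        stored = file_cache_node.geometry().attribValue("input_checksum")
        return stored != input_graph_checksum(file_cache_node)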
  19. File Cache SOP / checking for "dirtiness"

    I'm not sure what workflow you are after. Cache files are there to avoid computation on heavy graphs. As hip scene files are usually tiny compared to the size of the cached data, it's not a problem to save a unique (locked) source scene file for each cache version in case you need to return to a previous version. I tried to create a quick prototype that computes a checksum from the input graph, saves it in the data block, and then compares it against the current graph. It's done on Windows 10, but it has an issue with the way Houdini evaluates nodes. pz_input_hash.hip
  20. managing .sim checkpoint files

    For *.sim files I'm using the built-in rolling cache; usually keeping the latest 3-5 frames is enough to continue in case of a crash. I clean old files with a Windows 10 built-in feature and clean temp folders on each restart with a simple batch script.
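
    For illustration, a rough Python equivalent of such a cleanup script (the directory and the number of kept checkpoints are assumptions):

    import glob
    import os

    SIM_DIR = "C:/sim_cache"  # hypothetical folder holding the rolling *.sim checkpoints
    KEEP = 5                  # keep only the newest checkpoints

    files = sorted(glob.glob(os.path.join(SIM_DIR, "*.sim")), key=os.path.getmtime)
    for old_file in files[:-KEEP]:
        os.remove(old_file)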
  21. AFAIK it never worked with VDBs. It still works with Houdini's native volumes.
  22. Houdini 17 Wishlist

    Sounds like those new enhancements in 16.5 with that search icon. Check this video starting at 53:48 https://vimeo.com/241036613#t=53m48s
  23. Houdini 17 Wishlist

    Parameters with non-default values, and tabs containing such parameters, already have their labels in bold. Maybe this could be enhanced with a custom colour in the colour theme.
  24. Maya Lattice Information to Houdini SOP

    I had to match an animated Maya lattice in Houdini once in the past. I don't have that file anymore, so this is from memory: I did it as Marty suggested. I exported the box lattice from Maya naively as a couple of poly boxes that matched the lattice division and inner structure, with the same lattice applied to them, as Alembic. That way I got exactly the same point deformation on multiple cubes, which in wireframe looked exactly like the Maya lattice gizmo. Then I had to recreate Maya's lattice functionality in Houdini, as the algorithm in Houdini is a bit different (e.g. no extrapolation, different smoothing functions, etc.). I did it with VEX in the end. Today it's probably easier to do in a deform wrangle. I think it is a really old algorithm, from 1986 or so; you should be able to find the paper online. A sketch of the core evaluation follows.
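
    The 1986 paper is Sederberg and Parry's free-form deformation. A minimal sketch of the core evaluation, assuming a lattice of (l+1) x (m+1) x (n+1) control points P[i][j][k] and a point already converted to normalized lattice coordinates (s, t, u) in [0, 1]:

    from math import factorial

    def bernstein(n, i, x):
        # Bernstein basis polynomial B(n, i) evaluated at x
        binom = factorial(n) // (factorial(i) * factorial(n - i))
        return binom * (x ** i) * ((1.0 - x) ** (n - i))

    def ffd(P, s, t, u):
        # weighted sum of all lattice control points, Sederberg & Parry style
        l, m, n = len(P) - 1, len(P[0]) - 1, len(P[0][0]) - 1
        out = [0.0, 0.0, 0.0]
        for i in range(l + 1):
            for j in range(m + 1):
                for k in range(n + 1):
                    w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                    for axis in range(3):
                        out[axis] += w * P[i][j][k][axis]
        return out

    Maya's lattice is not exactly this (it has its own smoothing and no extrapolation, as noted above), so treat it as a starting point rather than a drop-in match.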