Everything posted by davpe

  1. Solaris camera

    You can still store materials with assets and then import them with the Scene Import LOP. I think the general material workflow in LOPs is very convenient.
  2. Solaris camera

    I had the same question on the SideFX forum. The answer is that this LOP camera contains a sort of standard set of USD parameters, and it's up to the developers to build their own interfaces around it if they want to. Which doesn't really answer the question, but it indicates the user experience may be improved in the future, or may not. Personally, I find the LOP camera completely useless in its current state. The best workflow right now seems to be having your standard camera setup in SOPs and then importing it into LOPs, but then you can't move it around in the LOP context because it seems to break once you touch it. Work in progress, I guess. Having said that, I guess we'll stick to the old rendering workflows for some time longer. LOPs don't feel ready for production yet. It will be great once polished, but right now there are too many quirks/bugs and unfinished/unsupported features. As for the other thing, saving render snapshots: that's currently not possible as far as I know, but according to the developers we should get something similar to the Render View, with similar functionality, in the future. The topic link: https://www.sidefx.com/forum/topic/70700/?page=1#post-300360
  3. Don't take this for granted, but I think USD is only supposed to live in the world of LOPs, which is a framework for it. The SOP context doesn't seem to know anything about USD, in the same way that it doesn't know anything about the shader networks in your scene, for example. You might eventually be able to export from LOPs as Alembic and import that into SOPs, but that doesn't seem to be possible right now as far as I know. Btw. there is something called lopimport_SOP, but it doesn't seem to do anything right now, and there are no docs for it.
  4. uv and closed spline

    I haven't looked into your scene, but this video might give you some ideas:
  5. Scale texture / remove repetition

    When I open your scene and hit render, I don't see what you show here. Maybe post a simpler setup clearly demonstrating the issue, so it's easier to debug and see what's going on. Anyway, UV Transform should do what you want. What it looks like (just from the gif you posted) is that you're not using proper UV coordinates but parametric per-primitive UVs. That's why the UV transform is happening in "squares" that are the size of your grid polygons, I presume.
  6. Based on what do you want to transfer that data? If you wanna raytrace, you can use the Gather VOP - it can send rays from an object's surface or from a point in space and gather surface data from the surrounding objects it hits. If that's what you're after...
  7. With Mantra, this will inevitably render long, I'm afraid. Not sure what your settings are or how much render time we're talking about. There may be ways to optimize, though without inspecting the scene it's really hard to say anything relevant. But in general, for heavy refractions, Mantra isn't a great choice.
  8. Houdini "proxy"

    Well, again, you can display your instances as bounding boxes, or as a point cloud, and all should be fine...
  9. embed external file in hip file

    Yeah, it depends. To me it feels lengthy (and clumsy), especially when there is a one-click corresponding function for SOP nodes (the lock flag). But anyway, personally I don't remember ever running into a situation where I needed to embed a picture... P.S. Of course I realize I'm just a spoiled, ungrateful freak and that it's great you can do it with an HDA.
  10. Procedural OBJ splitting

    Hey, yeah, that's certainly possible. Steps you need to do:
    1 - turn the group names into a string name attribute (a Python sketch of this step is below)
    2 - loop over the names and pack each name into a packed prim (useful if you want to deal with chunks of geometry as if they were single points)
    3 - on your template points, generate an attribute that specifies which instance number is supposed to be copied onto each point
    4 - before copying the geometry onto the points, refer to that instance number attribute and delete all packed prims that don't match that number
    5 - loop over the template points and for each of them copy the matching packed prims
    Check the hip file for an example. obj_instance_groups.hiplc
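    As an illustration of step 1, here is a minimal Python SOP sketch that stamps each primitive with the name of its group (the "name" attribute is just my choice; a wrangle would do the same):
        node = hou.pwd()
        geo = node.geometry()
        # create a string primitive attribute and fill it with each prim's group name
        name_attrib = geo.addAttrib(hou.attribType.Prim, "name", "")
        for group in geo.primGroups():
            for prim in group.prims():
                prim.setAttribValue(name_attrib, group.name())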
  11. Houdini "proxy"

    I think packed prims are super fast to display. You can specify what kind of representation you want to see (i.e. bounding boxes, full geo, point cloud), plus there are a few viewport optimization options to help with heavy scenes (Display Options > Optimize tab). Otherwise, you can load lowres and highres versions of the same asset into a single geo container. You will only display the lowres one (blue display flag), while the highres one will be picked up for rendering without ever needing to display it (purple render flag) - see the snippet below. Plus there are other, more technical ways to do that...
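    If you prefer setting those flags with Python, a rough sketch (the node paths are made up - replace them with your own):
        # the lowres branch gets the display flag, the highres branch gets the render flag
        lowres = hou.node("/obj/asset/OUT_lowres")
        highres = hou.node("/obj/asset/OUT_highres")
        lowres.setDisplayFlag(True)
        highres.setRenderFlag(True)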
  12. Procedural OBJ splitting

    dgani > Connectivity can't actually split the geo; it will just identify the pieces, and then you have to split them by hand or by some other method. My solution would be to use a Python script that does a Blast SOP (or a new geo container) per group. I've posted it in the thread below. I can imagine you could do that with a TOP network as well...
  13. embed external file in hip file

    Generally, the way to embed stuff in your hip file is locking any node with the red flag (see the snippet below). Unfortunately, it only seems to work with geometry and not with any image inputs (in COPs or shaders). Not sure about the method dgani mentioned - maybe possible, but it seems to be quite a lengthy way of doing things on a regular basis :/ Anyway, a lock flag in COPs could be worth an RFE...
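    For the record, hard-locking a SOP from Python looks something like this (the File SOP path is hypothetical):
        # hard-lock a File SOP so its geometry gets cached inside the hip file
        file_sop = hou.node("/obj/geo1/file1")
        file_sop.setHardLocked(True)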
  14. Thin Laptop for Houdini

    I've had a very good experience with the HP ZBook Studio G4. The only thing I ever had an issue with was CPU heat while rendering 20 hrs straight in a hot summer.
  15. Making a switch?

    For archviz and product viz, while you can do all of that in Houdini, 3ds Max is probably a pretty good option that is more geared towards that kind of use out of the box, so I presume you won't gain much by switching. BUT: one thing to consider is that if you're freelancing and don't need to render hi-res animations (more than HD), you can have a full-featured, high-end 3D package for $400/2 yrs (Houdini Indie), compared to $2000/1 yr or so with Autodesk? Hmmm... that alone may be one good reason for switching. Another thing is that Houdini seems to be today's fastest-developing software with the most open architecture. Comparing the new features of any Houdini release with the new features of any other 3D package that year is just silly. Maybe only Blender is making any real progress and introducing interesting and innovative features (which is also kind of silly given its zero price). Anything from Autodesk pretty much stagnates, and Modo doesn't make big waves either... so in terms of future development and customizability, Houdini seems to be a pretty good bet. It all comes down to what your motivation behind wanting to switch is.
  16. UV out of range

    I suppose you're only missing edge padding. In that case, the easiest thing is to use a UV Transform SOP and simply scale your UVs down a bit. You can also use a wrangle SOP and remap the UV range with the fit function (a minimal Python sketch of the same remap is below). If you don't need to keep the layout, then UV Layout will do a good job arranging messy UVs into a UDIM tile.
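    In case it helps, a minimal Python SOP sketch of that fit remap (assuming vertex uvs and a guessed 0-3 source range - measure your actual uv range instead):
        node = hou.pwd()
        geo = node.geometry()
        old_min, old_max = 0.0, 3.0  # hypothetical current uv range
        for prim in geo.prims():
            for vert in prim.vertices():
                u, v, w = vert.attribValue("uv")
                vert.setAttribValue("uv", (hou.hmath.fit(u, old_min, old_max, 0.0, 1.0),
                                           hou.hmath.fit(v, old_min, old_max, 0.0, 1.0),
                                           w))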
  17. Mantra transparency

    No. You can do it per object or per shader, as mentioned.
  18. Micropolygon can offer good performance with volumes, particles, hair, DOF, and MB. It's not friends with anything PBR or raytraced, though. That means any modern PBR shaders, area lights, and raytraced environment lights are bad for it and will render quite slowly. If you wanna go with micropoly, stick to shadow maps and old-school shading models (Lambert, Phong...). That makes it largely obsolete these days unless you're dealing with heavy volumes or particle sims, and perhaps some other special cases that are difficult to raytrace. Heavy geometry and instances don't benefit from micropoly in any way, I would say (maybe rather the opposite, because unlike raytracing, micropoly needs to fit the entire scene in memory at once, if I understand it correctly).
  19. directional area light

    Hi, yeah, a texture is also a valid way to approach it. Performance-wise it will be much less efficient than using geometry blocking (sampling noise), though. I think geo blocking doesn't actually add anything to the render time (why would it?). Currently there are no other out-of-the-box options for driving the "directionality" of area light sources, other than those already mentioned. Of course, if you are resourceful enough you can build your own light shader. That's one of the awesome things about Mantra - it's open enough to allow you to do that. In most cases I can get the job done with the spot light sliders or geo blocking (or maybe using multiple parented lights), with a light texture being the last resort in some special cases. It very much depends on the scale of your scene - for small scenes using textured lights can be perfectly fine, with larger scenes less so. Cheers.
  20. I guess the geometry can be separated by group names or by some attribute. If you want separate objects, like /geo/obj1 .. /geo/obj1000 - which I wouldn't recommend, as it will be a nightmare to work with (I understand that for a Maya person this kind of idea makes complete sense) - anyway, if you want to do that, I recommend using a Python script that will extract the pieces for you into individual containers, or you could try a TOP network, I presume, and batch process that stuff. Or you can use a loop to export each attribute value as a separate file on disk. And there are probably more options for how to approach it. What I would do is also a Python script, but I would split the geometry into multiple streams inside of the original object, each containing one poly group. I do that often with poly groups. The script is as follows, and you can easily save it as a shelf tool. You select the node inside of your geo container and run the script, which should populate your network with a number of outputs, one for each group:
        # split the selected SOP into one Blast per primitive group
        if len(hou.selectedNodes()) == 0:
            hou.ui.displayMessage("Make your selection first", title="Error")
        else:
            selNode = hou.selectedNodes()[0]
            selNode.setPosition([0, 0])
            listGroups = selNode.geometry().primGroups()
            groupsNum = len(listGroups)
            hou.clearAllSelected()
            for prim in listGroups:
                # Blast with negate keeps only this group in its output
                createDel = selNode.createOutputNode("blast", "OBJ_" + prim.name())
                createDel.parm("group").set(prim.name())
                createDel.parm("negate").set(1)
                createDel.parm("removegrp").set(1)
                createDel.setSelected(True, clear_all_selected=False)
            selNode.setPosition([groupsNum, 0])
            hou.ui.displayMessage("Created " + str(groupsNum) + " objects")
    Cheers. D.
    Edit: to put the individual pieces at the origin with a centered pivot, use a for loop per group (per attribute value) with a Transform SOP in there (before you run the script). The pivot can be easily centered by the hscript expression "center('../'+opinput('.',0), D_X)" - without the quotes - and you'd have to do it for each axis (this one obviously works for the world X axis; written like this it centers on the geometry of the connected node). Centering the object itself is then done by putting the negated value into the Transform's translate XYZ parameters (see the illustration below).
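    A minimal illustration of those Transform SOP expressions, assuming the default pivot/translate channel names (px and tx here; repeat for Y and Z with D_Y/D_Z):
        Pivot X:      center('../' + opinput('.', 0), D_X)
        Translate X:  -ch("px")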
  21. directional area light

    I'm afraid that if the spot light parameters are not giving you what you want, it would be best to just use geometry to block the light, if you know what I mean. Just make a wall with a hole in it and put the light behind it.
  22. Mantra transparency

    I came across exactly the same thing. The outer layer is simply casting an undesired shadow, which can be easily removed in the shader with the isShadow VOP:
  23. That's interesting. I've seen some look differences with progressive vs. bucket rendering, but this is a really extreme case. I wonder if it gives the same result even with the rendering engine set to Raytrace?
  24. render volume in mantra in short time

    - If you want faster render times, you should reduce the volume step rate and shadow step rate, not increase them. I'd start with a value of 0.3 and see where it gets you. It basically reduces the resolution of your volume object at render time, but in many cases it's perfectly OK and it doesn't degrade the visual appearance in any significant way. Depends on your particular scene, though. If it starts to look like crap, you've gone too low with it.
    - Check the Limits tab on the ROP - this can impact rendering volumes significantly too, especially if you compute an indirect bounce on volumes (volume limit > 0).
    - Stochastic samples (used for transparency sampling) can make a big difference in how clean the volume renders, especially with low-density volumes like fog, etc. Sometimes you have to go quite high with it - like 64 or so. If increasing the value doesn't seem to do anything, check whether there is a material override (some volume materials have that).
    - As somebody pointed out before, if you don't need to raytrace, render with Micropolygon - that's really fast for volumes.
    - If you need to raytrace, you may want to switch your render engine to Raytrace instead of PBR. In some cases I've seen better performance with almost the same visual output (especially if you render with motion blur).
  25. Material Builder to Layer

    Hi, it's not very clear what you are trying to achieve, and the pictures are not helping much, so it's hard to give any advice. Anyway, there is a masterclass on material mixing in case you haven't seen it: https://www.sidefx.com/tutorials/houdini-16-masterclass-custom-shading/