About LaidlawFX

  • Rank
    Houdini Master
  • Birthday 02/23/1985

Personal Information

  • Location
    Bordeaux, France
  • Interests
    Getting away from the computer
  1. Houdini crashing constantly

    Yeah, ignore what I said, Romain.
  2. Houdini crashing constantly

    I had a similar issue when I had a bad Intel-based graphics card. It sounds like you should have good components, but it's possible one piece of your hardware is bad. The other possibility is bad Houdini preferences, so just make sure you cull those. Possibly use Revo Uninstaller or some other third-party uninstaller. The third, radical option is that you have some corruption in your OS or in third-party software another program may share. Generally speaking, for a crash like this I do not think Houdini is your culprit, outside of a possibly bad preferences folder. You might want to go a bit radical and test your components and reinstall your OS. If an OS reinstall doesn't work, then it's almost certainly a hardware component that is at fault.
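Culling the preferences is usually just renaming the folder so Houdini rebuilds clean defaults on the next launch. A minimal Python sketch of that, assuming the default Linux/macOS location `~/houdini17.5` (on Windows it lives under `Documents`; the `HOUDINI_USER_PREF_DIR` environment variable, if set, overrides both):

```python
import os
from pathlib import Path

def back_up_prefs(prefs):
    """Rename the prefs folder to <name>.bak so Houdini rebuilds defaults."""
    prefs = Path(prefs)
    if not prefs.is_dir():
        return None  # nothing to cull
    backup = prefs.with_name(prefs.name + ".bak")
    prefs.rename(backup)  # back up rather than delete, in case you want it back
    return backup

# The default path below is an assumption (Houdini 17.5, Linux/macOS).
default_prefs = os.environ.get("HOUDINI_USER_PREF_DIR",
                               str(Path.home() / "houdini17.5"))
```

Renaming instead of deleting means you can always move the folder back if the crash turns out to be something else.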
  3. Renderman Shading Language guru ?

    So, another not-answer answer that should help you. Just getting back into life again; I went from France back to the US. Oh, the fun.

    Attached is RenderExport.hip. It's a simple setup that shows the variables flowing from SOPs to materials to mantra. It has a grid with an array of attributes: P, N, uv, rest, foo. Foo is just a relative attribute, as it could be anything; in this case it's just a copy of the uv attribute. A material shader builder imports them with an array of nodes that you can cross-wire to see how they work. It purposely does not use any of the components of the Principled Shader core, so you can see how the info flows. If you want, you can assign the Principled Shader core and keep unlocking the HDAs and deleting the components that don't matter until you get down to these basic components. This is roughly how shaders were written in H9-H14 or so, and at that time the transition from Renderman to Mantra was easiest, as the 1-to-1 mapping was very apparent. The shader core makes that switch a lot harder to see. Which is why most of what I am saying is complete nonsense, besides the fact that I am not actually answering your question directly, lol.

    There are four mantra ROPs, each set to a different render engine, so you can flip between them and see how the shading works. There is also a series of different BSDF connections you can make to see those basic components. The easiest way to answer is to keep cross-wiring that setup until you understand it; then the direct RSL conversion will make sense. This way you can see how the data flows across the different components of Houdini. At any point you can RMB and see the VEX of those compiled connections. You can switch between the render engines no matter what, but you will notice the shading is different due to the inherent differences between PBR shading, raytrace shading, and micropolygon shading.
    Look at the fooExported bind export VOP and the mantra ROP's extra image planes, and you can see how the variables get passed to the render engine. Then, when you look in the Compute Lighting VOP, you can see how all those connections are made, and how the BSDF is broken into its sub-components to be read by mantra. Apologies again for presenting a small Lego set rather than an answer, but this should help you. It was the biggest issue going from Mental Ray and Renderman to Mantra; it's the key difference-maker that makes it make sense. RenderExport.hip
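The flow in that file can be caricatured in a few lines of plain Python (a toy model, not the Houdini API): a SOP attribute is bind-imported into the shader, bind-exported under a new name, and the ROP's extra image plane picks up the exported name.

```python
# Toy model of the SOPs -> material -> mantra flow described above.
# "foo" plays the same role as the arbitrary attribute in RenderExport.hip.
geometry = {"P": (0.0, 0.0, 0.0),
            "uv": (0.25, 0.5, 0.0),
            "foo": (0.25, 0.5, 0.0)}   # foo is just a copy of uv

def shade(attrs):
    foo = attrs["foo"]                  # bind import VOP: read the SOP attribute
    exports = {"fooExported": foo}      # bind export VOP: hand it back out
    cf = foo                            # shading result (just pass it through)
    return cf, exports

cf, exports = shade(geometry)
# The ROP's "extra image planes" are just the exported names, gathered
# alongside the main color plane.
image_planes = {"C": cf, **exports}
```

Cross-wiring the VOPs in the .hip file is doing exactly this plumbing, only compiled to VEX.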
  4. I have to make the right choice....

    Hello, either will work well. If you look through the forums you will find plenty of posts on these hardware questions. There is no specific right or wrong setup; it really depends on the day-to-day tech you will be using, if you want to finesse it. Will you be doing lots of renders or sims, or using the GPU? What other programs besides Unreal/Unity and Houdini will you be using? Each algorithm and part of the software is made differently and could benefit from different combos, so there is no one right answer unless you are building a render farm with a very constant loadout. In the end, a good Intel/AMD CPU and a good NVIDIA GPU will take you far. As you are just starting out, I would not worry too deeply about it, though. You won't be leveraging the hardware as much as a render farm would, so any increase in performance can easily be outweighed by the amount of idle time your hardware will actually have. If your sim or render takes too long, optimize it; there is no excuse in this day and age to be blaming hardware for slow processing time. IMO, the Threadripper has some great reviews, and I have seen it tear up some good Mantra renders. But in the end you are splitting hairs. I would say cost should be your most important advisor at this point in the game. -Ben
  5. Output for HDA

    You can do a lot of pythonic hacks to make this possible with an HDA: allow editing of contents, place down outputs, re-save the HDA, etc. However, I would highly recommend not doing this, and instead using an attribute or a group as your data split. Houdini is not really designed to have dynamic outputs. Think of the HDA more like a tool, or like a Python module itself: most tools have standard ins and outs. You don't change how you use a tool; you change the tool with its options.
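The attribute-split idea can be sketched in plain Python (toy data, not the HDA API): the HDA keeps one fixed output, tags each primitive with a split attribute, and each downstream consumer filters on it, the way a Split or Blast SOP would filter on a hypothetical `@part` attribute.

```python
from collections import defaultdict

# One output, one split attribute -- downstream nodes pick their part.
# "part" and the part names here are made up for illustration.
prims = [{"id": 0, "part": "debris"},
         {"id": 1, "part": "smoke"},
         {"id": 2, "part": "debris"}]

by_part = defaultdict(list)
for prim in prims:
    by_part[prim["part"]].append(prim["id"])  # the "split" downstream would do
```

Adding a new part is then just a new attribute value, with no change to the HDA's interface, which is the whole point of avoiding dynamic outputs.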
  6. Renderman Shading Language guru ?

    Country hopping ain't easy, and is very time consuming, lol.

    Compute Lighting does a lot of the old black magic internally so you don't have to think about it. It's just an HDA, so you can see how everything is split up inside. The key part is that you don't need to plug Ce into anything. The bind parameter's option to use it as an export is all you need; that is what passes the value to mantra, or better said, what mantra is looking for from the compiled shader. So if you dive into Compute Lighting, you'll see many of those "dead ends" for parameter VOPs. Their names correspond to the VEX variables in the mantra Extra Image Planes. For the standard ones they are just toggle boxes nowadays. But to understand it, you can make a custom variable called foo in your shader and add an extra image plane with the VEX variable called foo.

    Not what you are after, I know, and I have not directly answered your question; I'm trying to point out the flow from the basics. Once you get over those humps, the rest is quite easy. Cd is only the term as defined in SOPs. Once you get into material land, Cd is only used as a bind import and then multiplied against some other variable; after that, the term Cd stops being used anywhere else in that context. Cd used to mean Color Diffuse, but that logic is long gone in the age of albedo and PBR. Legacy, yeah! If you really want, you could call your SOPs attribute albedo to make it easier, and instead of importing Cd you import albedo, for instance. You won't see the color in the viewport the way you do with Cd, but it's more of an understanding of the oddity of Mantra versus Renderman than anything practical. Hopefully my non-answer answer makes more sense now, lol.
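The Cd point is really just arithmetic: once inside the material, Cd is bind-imported once, multiplied into the shading term, and the name never appears again. A toy sketch in plain Python (made-up numbers, not VEX):

```python
import math

Cd = (1.0, 0.5, 0.25)   # SOP color attribute, bind-imported once
lighting = 0.8          # stand-in for the BSDF/lighting contribution

# After this line the shader only talks about "diffuse" (or albedo);
# "Cd" as a name is gone, exactly as described above.
diffuse = tuple(c * lighting for c in Cd)
```

Renaming the SOP attribute to albedo changes nothing in this math; it only changes which bind import you use and what the viewport displays.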
  7. Learning Curve of Popular 3D Software

  8. I can't take credit for it, but it needed to be shared. This made me cry with laughter.
  9. Houdini 18 Wishlist

    Welcome to the forum. And surprises await you very soon.
  10. Renderman Shading Language guru ?

    I'm in transit for the next few days, so no Houdini in front of me. Edit... unless realtors are being a pain in the arse...

    So, the way I transitioned between the two languages: I started with basic shaders. The new physical models are far more complex nowadays than when I switched. To find a simple example, go to the material library and drop down a basic diffuse, then go check it out in VOPs.

    One of the first big bumps is rendering a constant shader in mantra. There are four combinations of render engine, unlike Renderman, last I worked with it. The default option on Mantra is not PBR but raytrace. The main difference as far as shading goes is that these render engines are a combination of two geometry processes and, for practical purposes, two shading methods: raytrace and micropolygon are the geometry processes, and non-PBR and PBR are the shading methods. So it's important to place a mantra ROP and change the render engine so you can shade correctly while developing, as you are.

    For PBR, to make a constant shader, wire a parameter or constant into a bind export called Ce (i.e. emission). In the basic diffuse example this is contained in the Compute Lighting VOP, which is really just a custom export VOP for mantra's myriad render layers. For PBR, to make a diffuse with color, take a Lambert VOP that exports F (bsdf), the yellow wire, multiply it with the color from a constant or parameter VOP, and wire that into the output parameters and variables. For a non-PBR constant shader, wire a constant or parameter into the color (Cf) on the output parameters and variables VOP. For a non-PBR diffuse shader, multiply the color from the Lambert VOP with the color from a constant or parameter, then wire it into the color (Cf) on the output parameters and variables VOP.

    Sorry for the basics; it's actually one of the biggest gotchas when switching between the two. The harder stuff is actually easier.
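Those four recipes can be caricatured as one dispatch function in plain Python (a toy sketch, not VEX; the variable names Ce, Cf, and F are the mantra exports named above, everything else is made up): in PBR mode the shader hands mantra a BSDF or an emission export, while in the non-PBR engines you write the final color into Cf yourself.

```python
def shade(engine, style, color=(1.0, 1.0, 1.0), lambert=0.5):
    """Toy model of the four wiring recipes: which export carries the result."""
    if engine == "pbr":
        if style == "constant":
            return {"Ce": color}  # bind export Ce = emission, no lighting needed
        # diffuse: lambert F (bsdf) multiplied by the color parameter
        return {"F": tuple(lambert * c for c in color)}
    # non-PBR engines (raytrace / micropolygon shading): write Cf directly
    if style == "constant":
        return {"Cf": color}
    return {"Cf": tuple(lambert * c for c in color)}
```

The point of the sketch is just the branch structure: the same artistic intent lands on a different output variable depending on the render engine, which is the gotcha the post describes.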
  11. Renderman Shading Language guru ?

    Your best bet is to deconstruct a mantra shader to see what it is doing. VEX and RSL are similar, but mantra and renderman both have their quirks that make them different. With PBR, your color output with no lighting is called emission. Look at some of the simpler shaders, like diffuse, to see an example. Otherwise just multiply color by the BSDF.
  12. Renderman Shading Language guru ?

    RMB on the VOPs and look at the VEX code. It should line up extremely similarly.
  13. .hip file versioning (git)

    Yeah, I would not recommend doing this either. What @berglte said.
  14. VFX to Realtime Transition

    If you are specifically wanting to go into FX for VR, the field is quite small at the moment. A few notable companies are pushing projects where they might have one or two full-time traditional FX positions specific to AR/VR/MR, for instance Facebook. However, it's a very small market outside of indie. Going from games into the AR/VR/MR market would certainly help, especially if you are coming from Unity as opposed to Unreal; HoloLens publicly works with Unity, for instance. It may help you to get a good understanding of the different types of tech and their capabilities before you make the dive. The hardware capabilities drive a lot of what you can do, and unless you wear a headset connected to a PC, versus one capable of mobile computing, your ability to do a lot of graphics is limited. It's a fun challenge to be sure, but if you are not a fan of trying to do your magic as efficiently as possible, it may not be worthwhile. I've actually only heard of a very few cases where someone goes through the traditional path of school, then intern, to junior, up the ranks, so IMO it's always a sideways happenstance. If you have the desire, then learn up and go. The market is still pretty young for VR/AR/MR, where people need to rediscover old-school tricks and leverage modern tech to get a great experience. Just browsing through all the games available on Steam is a great opening course in what is possible. You could also find that AAA games are more up your alley too.

    Games and VR/AR/MR are close enough to be lumped together, but games are a very vast medium compared to, say, film and commercials. Games, films, and commercials all live under the same umbrella of 3D content creation, but games have a wider technology base that handles the processing of FX. FX in games are more dependent on the hardware, since user input creates multiple possible outcomes, than in film and commercials, where the device only needs to play back a 2D image.
    The content of the image may be the same pretty explosion, but depending on the company and the pipeline, in games you will be doing different tricks. In film you may whittle your renders from 30 hours to 30 minutes; in games you whittle your renders from 10 milliseconds to 1 microsecond. Your compositing goes from Nuke to HLSL shaders. Same basic math and functions, but a different set of overhead, algorithms, and hardware to process it.

    IMO, if you are not interested in the aforementioned ideas, of building a test level and the peripherals and getting a general feel of what it is like working in a game engine, then it's not really a good choice for you at this time. I'm not saying you need to be a level designer, programmer, or any of the other specialties of games that don't really exist in film and commercials, but these are some basic components of FX in games and VR/AR/MR that you need to work with on a regular basis. For instance, in film you make an FX as a one-off, and you may make a tool if you do that FX often for a sequence or film. In games and VR you'll set up a test level where you have all your FX playing on loop, or set to run via commands for guns, so you can check their quality constantly. This process isn't that much different from an image morgue for a lighting artist's reference, but if you don't at least find the concept tolerable or enjoyable, then that's a good flag to know about yourself. On the real fun side, in this test environment you are the one who has made all these repeatable methods of destruction and chaos; I've seen FX artists just love to shoot and blow stuff up with their own concoctions. Personally, I like most media, but I still prefer to read a book, and my fun with work is solving the problems associated with it. Maybe it's the team, studio, or project you like more than the medium. So I guess the soul-searching question is: what part of FX do you like, and why?
  15. VFX to Realtime Transition

    Hello Mountaingoat, I can respond a bit more later, but figured I would jump in now. I have made the transition. The biggest issue you will deal with is not your skill transfer, but the people who will doubt you because you worked in another medium, so don't get discouraged by that crap. The budgets are different for processing, but it's more a factor of scale than suddenly trying to learn Latin. The technology in this field changes so fast that it doesn't really matter what you did a few years ago; you always need to adapt and move forward.

    As for the easiest way to transfer, it kind of depends on your background and whether you have any specific interests; those motivating factors will help you push through. The process of building the content from, say, Houdini is the same in many cases. Instead of kicking it out to, say, mantra, you kick it out to the game engine's render process, which is remarkably similar if you dig deep enough. The difference is that with a realtime FX engine, some of the components from Houdini will be replaced by the game engine. So a bunch of the particle work will be done in the game engine's particle editor, but you will still author the texture arrays in Houdini. Coming from whatever you did before and mapping that vernacular to what the new package uses will help you move forward quicker, and then you can add to it. I would suggest Unreal if you are going for AAA games, and Unity for VR and mobile. That difference is drastically changing every six months, but you will find the similarities between the two ecosystems when you go into a production environment.

    The first thing I would say to do is create a nice test level for yourself, so you have a stock player that can run around. Then slowly add FX elements that enhance the environment, depending on where you want to learn first. Maybe do some of the low-hanging fruit with the Houdini GameDev integrations at first; you can even make a procedural level area too. Add in a bit of each type of FX.
    Get used to how the shader system works with the Houdini GameDev stuff, and then you can start to do more engine-specific things like modifying the weapons fire. Just like film/commercial VFX, there are a lot of specialties in game VFX. Each studio will be looking for something different, so don't be discouraged if a studio doesn't like the particular flavor of FX you're focused on, especially because of your background; it's not like you are going to make the transition overnight. I've met many leads that can't even do half of the work of their team, so no probs.