Welcome to od|forum

jon3de

Members
  • Content count 111
  • Joined
  • Last visited
  • Days Won 2

Community Reputation

30 Excellent

1 Follower

About jon3de

  • Rank
    Initiate
  • Birthday 01/18/1988

Contact Methods

  • Website URL https://vimeo.com/jon3de

Personal Information

  • Name Jonathan
  • Location Schwäbisch Hall
  • Interests 3ds Max, Nuke, Vray, Python, Houdini, Sailing

Recent Profile Visitors

1,998 profile views
  1. Hi, this is a shot I did with my colleague. It imitates a shot from the Prometheus movie, so it's not a creative achievement, but it was great for getting deeper into Houdini and learning some new techniques. We created this shot completely from scratch, for ourselves, to gain more experience and speed with cinematic sequences like this, which are a nice change from our daily work. I was responsible for the ground setup, lighting, rendering, compositing and FX. My colleague did all the animation, the modeling of the rover and the mountains, and the matte painting. Except for the mountain modeling, the camera animation and the rover model, everything is Houdini. After a little RnD on the IFD workflow I was pretty surprised how well Mantra performed with millions of stones and the ultra-wide scale of the scene. Average render time for the scene was around 10 minutes. Of course it's no high-end quality, but it's the quality we were able to achieve in the amount of time we had. We'll leave it like that for now and concentrate on the next shot. I hope you like it. Kind regards, Jon
  2. I checked it for 3 minutes this morning before work and didn't save anything. Do the following: navigate to the POP network and hit the Ground Plane button on the shelf; your particles should collide with the ground immediately. In your AutoDop network (the one with the smoke sim), just deactivate the "auto resize fluid..." node and resize your fluid grid a bit. That's all. Jon
  3. Particles: add a static solver to your particle DOP network with the collision object of your choice; so far you have only added a Collision Detect, which just stores some hit attributes. Smoke: at first glance I think there is something wrong with your dynamic grid. I turned it off, set the bounding box to a proper size, and got some feedback. Maybe start looking there. Jon
  4. I stumbled over these new cinematics for Far Cry 5. They really look awesome! Does anyone know which studio made them, and which software was used? I know you can achieve good renders and animations with a lot of programs and render engines, but it's still interesting additional info.
  5. Eh... it rotates in the wrong direction.
  6. That's a pity. May I ask why you didn't use Mantra in place of one of the three render engines above?
  7. That's at least how I would have done it; I don't know if it's the best or fastest method. Create an attribute from the relative bounding box and use its values in an extra image plane (a rough Python version of the idea is sketched after this post list). pRest.hiplc
  8. I will respond in more detail later because I am on the move. I had a quick glance at your link. I think you can store @P in a custom attribute, e.g. "@Pref", at the beginning of your animation and use this in your render element; the values will then stick to the object. Or export the @Prest attribute from VOPs with a Relative to Bounding Box VOP, which gives you more fitting values straight out of Mantra. Should work. I can do an example file later if there shouldn't already be one. Kind regards
  9. Hm... either it's your English or mine (quite possibly mine), but I don't get your problem at all. The alpha has nothing to do with this so far. You can use the position pass to get the world position of each pixel. Inspect each channel (r, g, b) of this layer in Nuke on its own; that should give you some clarity. (Be aware that some pixels of the image will just be black because they have negative values.) Jon
  10. Thank you for the gifs! I'm working with 15.5.717. Hm, I don't know if that makes any difference, but I used this method to submit my code: http://www.tokeru.com/cgwiki/index.php?title=HoudiniPython#Make_a_general_python_input_window I will try it with the Python shelf and with Houdini 16. No, I checked that already. Kind regards EDIT: I checked it via the Python shelf and there it works! Thank you Jeff. It would be interesting to know what the problem is with this self-built Python text editor...
  11. Currently I want to change some ROPs' image path parameters with Python. The problem is that Python converts strings with e.g. "$F4" or "$JOB" expressions in them into absolute paths. Can I avoid that? e.g.: c.parm("vm_picture").set("//testDirectory/teststring_$F4.v001.exr") turns into: //testDirectory/teststring_0079.v001.exr How can I set this up while keeping the $JOB or $F expressions in it? Is this possible? (See the Python sketch after this post list for how parm.set() normally behaves here.)
  12. It changes colors because it's going from negative to positive values... it should work in Nuke. Btw, there is no need to change the shader in your case. Just tick "Shading Position" in the Extra Image Planes tab on your ROP.
  13. Hm, maybe this is not what you're looking for because it's not proportional to the segment length, but you can use ptdist to delete points depending on how far apart they are, and control the resample size with the segment count (a rough Python version of the idea is sketched after this post list). I attached the file. resample_by_length_Polylines.hip
  14. If you look again at the link that fencer gave you, that's exactly what you need to group by color with VEX (a Python alternative is sketched after this post list).
  15. Ah, thank you Tomas. I ran into this situation while working with Alembic caches, so pre-computing in SOPs is not an option. I saved my test cube out and used that file for the relbbox node, and that worked. So I think I can also point it at an Alembic cache instead, which solves my problem. May I ask if this is the normal way of doing such things, like a gradient along one axis, etc.? I'm asking myself why the shader is not able to look up the dimensions of a cube in the scene, but can from a file on disk.
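
A minimal Python SOP sketch of the idea from post 7, assuming the goal is a rest/relative-bounding-box attribute that can be exported as an extra image plane. The attribute name and details are illustrative and not taken from the attached pRest.hiplc:

    # Python SOP (hou is available in this context): store each point's
    # position relative to the object's bounding box in a "Pref" attribute.
    node = hou.pwd()
    geo = node.geometry()

    bbox = geo.boundingBox()
    mn, mx = bbox.minvec(), bbox.maxvec()

    pref = geo.addAttrib(hou.attribType.Point, "Pref", (0.0, 0.0, 0.0))
    for pt in geo.points():
        pos = pt.position()
        rel = []
        for i in range(3):
            size = mx[i] - mn[i]
            rel.append((pos[i] - mn[i]) / size if size > 0 else 0.0)
        pt.setAttribValue(pref, rel)

The resulting attribute can then be referenced in an extra image plane on the ROP, much like the Relative to Bounding Box VOP route mentioned in post 8.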
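
For post 11, a sketch of how this normally behaves in Houdini's own Python shell (the node path is hypothetical); the EDIT in post 10 suggests the expansion came from the custom input window rather than from parm.set() itself:

    import hou

    rop = hou.node("/out/mantra1")  # hypothetical ROP path

    # set() on a string parameter stores the raw string; $JOB and $F4 are
    # only expanded when the parameter is evaluated.
    rop.parm("vm_picture").set("$JOB/render/teststring_$F4.v001.exr")

    print(rop.parm("vm_picture").unexpandedString())  # keeps $JOB and $F4
    print(rop.parm("vm_picture").eval())              # expanded for the current frame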
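
One reading of the ptdist idea in post 13, as a brute-force Python SOP sketch; the threshold value and logic are illustrative and not taken from the attached resample_by_length_Polylines.hip:

    # Python SOP: delete points that lie within `threshold` units of a
    # point that has already been kept (brute force; fine for small counts).
    node = hou.pwd()
    geo = node.geometry()

    threshold = 0.1  # illustrative value
    kept_positions = []
    to_delete = []

    for pt in geo.points():
        pos = pt.position()
        if any(pos.distanceTo(k) < threshold for k in kept_positions):
            to_delete.append(pt)
        else:
            kept_positions.append(pos)

    geo.deletePoints(to_delete)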
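
The link in post 14 does the grouping in VEX; as an alternative sketch, the same thing in a Python SOP could look like this (group name, target colour and tolerance are placeholders):

    # Python SOP: put points whose Cd is close to a target colour into a group.
    node = hou.pwd()
    geo = node.geometry()

    target = hou.Vector3(1.0, 0.0, 0.0)  # placeholder target colour
    tolerance = 0.01
    group = geo.createPointGroup("red_points")

    for pt in geo.points():
        cd = hou.Vector3(pt.attribValue("Cd"))
        if cd.distanceTo(target) < tolerance:
            group.add(pt)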