Welcome to od|forum


catchyid

Members
  • Content count

    200
  • Days Won

    1

Community Reputation

11 Good

About catchyid

  • Rank
    Initiate

Personal Information

  • Name Khaled Abdelhay
  • Location Montreal, QC

  1. Hi, I am an experienced Software Engineer with a good eye for Visual Effects. I have more than 10 years of experience developing advanced software systems for simulation and animation companies. In addition, I've spent the last 5 years exploring and educating myself about Visual Effects. Below is my demo reel. Please check https://www.linkedin.com/in/khaledabdelhay for more details on my technical/artistic skill set.
  2. Okay David, looks like I need to read more. Well, I admit it, I skimmed through the documentation quickly. As for the 6 days, I am talking about a sequence of 390 frames at Full HD, so I want each frame to take at most 20 minutes on my PC (the only machine I have; I'm basically working on my demo reel). The scene has around 15 pyro objects (fire, smoke, debris, ...) and one mesh (a building). One last note: I am okay with having some noise in my final render, as I will composite it onto old footage that is already noisy, so everything will fit together nicely. Alright, I will read the docs, re-read your reply, and take it from there... Thanks David
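The frame budget above can be sanity-checked with a quick calculation (numbers taken from the post: 390 frames capped at 20 minutes each):

```python
# Sanity check of the render budget mentioned in the post:
# 390 Full HD frames at a cap of 20 minutes per frame.
frames = 390
minutes_per_frame = 20

total_minutes = frames * minutes_per_frame   # 7800 minutes
total_days = total_minutes / 60 / 24         # ~5.4 days of continuous rendering

print(round(total_days, 2))
```

So a 20-minute-per-frame cap already implies roughly five and a half days of non-stop rendering on a single machine, which is consistent with the "6 days" figure being discussed.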
  3. Okay, I found two VEX variables in the pyro shader: smoke_mask and fire_mask. When I exported them as extra image planes in Mantra, I got the correct result. Is this what you suggested, David, by "render actual density as a matte"? Thanks...
  4. Hi, I think this is basic, but I cannot figure it out myself! I have a couple of pyro smoke objects (pyro shader) and one mesh (mantra surface shader). When I render this scene using physically based rendering, the smoke objects look good; however, some parts of the solid object (i.e. a building) become noisy. What is the best way to remove this noise from that object? I "think" increasing the pixel sampling rate would solve it (currently it's 2x2, so 3x3 would fix it, but that means I'd have to wait 6 days to render the scene, and I don't want to do that). I looked at the mantra shader SHOP and the object's render pane, but I could not find a way to increase the sampling rate only for this particular shader/object. It's an outdoor scene with only direct sun (a directional light) and a sky model, so there is not much diffuse lighting; still, I thought increasing light sampling to 8 would help, but I am not sure?! Finally, in the mantra node, I lowered the noise level to 0.01. In summary: is there a way to improve the rendering of only a specific object that has a mantra shader, without raising the quality settings for the entire render in Mantra? Thank you
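A rough cost model helps explain why bumping pixel samples globally is so expensive. Assuming render time scales roughly linearly with the number of primary pixel samples (a simplification; secondary rays complicate this):

```python
# Rough cost model (assumption: render time scales roughly linearly
# with the number of primary pixel samples per pixel).
def relative_cost(old, new):
    """Ratio of pixel-sample counts between two (x, y) sampling settings."""
    return (new[0] * new[1]) / (old[0] * old[1])

# Going from 2x2 to 3x3 pixel samples:
print(relative_cost((2, 2), (3, 3)))  # 2.25 -> roughly 2.25x the render time
```

That is, 3x3 sampling is about 2.25x the cost of 2x2 everywhere in the frame, which is why a per-object control (or a higher per-object ray-sampling quality) is the cheaper route when only one object is noisy.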
  5. I will produce an example image and post it, David. Just one last question: what is the "right" workflow for isolating volumes (e.g. pyro smoke)? The above method works nicely with surfaces (i.e. regular meshes); however, when I used the same idea with a smoke/fire volume, the extra plane produced was incorrect in two ways: (1) it covered more screen space than the original pyro smoke object, and (2) the channel values produced were always higher than 1 (i.e. I defined my_id = 1 in SOPs, so it should max out at one in the extra image plane)? I've tried "Full Opacity" in the Mantra extra image plane description, and also changed it to "Fog" in the shader, but neither worked?! Sorry to keep asking questions
  6. Thanks Alexander, "minmax idcover" created what I wanted. However, when I tested it I realized I still need an alpha channel to "smoothly" blend/isolate objects: with the sharp edges I initially wanted, compositing did not work well, because I lost the alpha channel created by the default filter (pls see attached). So, I ended up creating a channel for each set of objects to act as both a mask and an alpha channel. Once more, thanks for your help
  7. Alright, I followed your idea (an even simpler version: f@my_id=1, exported from the shader), and I've noticed that my_id is, as expected, equal to 1.0 over the majority of the object's screen space; however, at the borders it falls off from 1.0 to 0.0. I think this has to do with antialiasing: multiple samples are taken per pixel and the value of my_id is averaged. So, due to antialiasing, any shader variable will range from 0 to its maximum value (i.e. 1.0 in my case), and this makes isolating objects using one shared variable practically impossible (because at the edges they will all share a small range near 0). The solution I went with is to create a unique ID channel for each object that I want to isolate, sort of its own matte plate; so instead of a single id, I've created id_1, id_2, .... It's not the most flexible solution, but I guess on a small-scale shot it should work... Thanks David
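The edge behaviour described above is easy to model. The toy numbers below are illustrative, not from a real render: at a border pixel, some sub-pixel samples hit the object (my_id = 1.0) and some hit the background (my_id = 0.0), and the pixel filter averages them:

```python
# Why a single shared ID channel breaks at antialiased edges.
# Hypothetical 2x2 sub-pixel samples at a border pixel: two samples
# hit the object (my_id = 1.0), two hit the background (my_id = 0.0).
samples = [1.0, 1.0, 0.0, 0.0]
pixel_value = sum(samples) / len(samples)
print(pixel_value)  # 0.5 -- neither 0 nor 1

# If several objects shared one channel with IDs 1, 2, 3, ..., an
# averaged edge value like 0.5 or 1.7 is ambiguous. With one 0/1
# channel per object, the averaged value is still meaningful: it is
# simply that object's pixel coverage, i.e. a usable alpha/matte.
```

This is exactly why the per-object id_1, id_2, ... channels work: each one averages into a valid coverage mask on its own, instead of mixing IDs from different objects.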
  8. Yes, this would work... I'll test it tomorrow and update you. Thanks for the help, David
  9. Just an update... when I enabled Node ID (op_id) as an extra image plane, I got a unique RGB value for each object. However, the RGB values are not clamped to [0,1], so I am getting large values like (1234, 1234, 1234), and I could not find an easy way in Nuke to select specific objects (or create masks) based on the op_id (for those who are interested: I tried keylight and other keyers, but they did not work because the RGB values were so high and outside the [0,1] range). I will look for a way to automatically produce something like an alpha channel (to act as a mask) for each object. The other solution is to render each object alone, but that would be way too much rendering... Another idea is to manually export some parameters from the shaders to an extra image plane, i.e. define a variable X in the shaders used by object A, set its value to 1, and export it as an extra layer (such that only the shaders used by object A set X to 1, hence producing its matte)... however, this is too much work and it will get messy, since shaders are shared between objects...
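Conceptually, turning an unclamped integer ID pass into per-object mattes is just an equality test, not a keyer. The sketch below is plain Python over a nested list standing in for an image plane (the names and data are illustrative, not a Houdini or Nuke API), and it only works on unfiltered/aliased ID values, for the antialiasing reasons discussed elsewhere in this thread:

```python
# Turning an unclamped integer ID pass (e.g. op_id values like 1234)
# into a per-object 0/1 matte via exact equality, not keying.
# `id_plane` is a toy stand-in for one channel of the ID image plane.
def matte_for(id_plane, target_id):
    """Return a binary mask selecting pixels whose ID equals target_id."""
    return [[1.0 if px == target_id else 0.0 for px in row]
            for row in id_plane]

id_plane = [[1234, 1234, 0],
            [0,    5678, 5678]]
print(matte_for(id_plane, 1234))  # [[1.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
```

This explains why keylight and colour keyers fail here: they expect colour distances in [0,1], whereas an ID pass needs an exact integer match, and any filtering/averaging of the IDs destroys that match at the edges.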
  10. I see, Alexander. Thank you, I really appreciate it
  11. Thanks Alexander... I am looking at your file, and I am not sure exactly how vel2d and vel3d get calculated! There is no shader assigned to your mesh, yet in the render node you export vel2d and vel3d and it works!!! The only thing I see is a trail node, which computes velocity. Could you please elaborate on how this "magic" is done?
  12. Hi, https://www.sidefx.com/forum/topic/34339/?page=1#post-158971 In addition to my original question: a user in the above post says it's better to use Mantra's built-in motion blur than velocity vectors + post? Is this correct? I am no expert, but I "think" exporting velocity vectors and using them in post for motion blur will be much faster than computing extra samples at render time to create motion blur? Anyway, I still cannot export the velocity vectors
  13. Hi, I have a moving geometry that stores a velocity vector on each point (keyframed animation + trail node). I want to do motion blur in post, so I need to store the velocity vector as an EXR layer; however, I don't see any VEX variable for that (i.e. mantra -> images -> extra image planes has no parameter for velocity; I only find point, depth, color, but no reference to velocity)??? Also, if I want Houdini to store this velocity vector for all objects in the scene, what's the right way to do that? Should I go into each object and append a trail node? And what about objects that are not moving... Thanks,
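For context on what the trail-node setup above is producing: computing velocity from keyframed animation is, conceptually, just a backward finite difference of point positions scaled by the frame rate. The sketch below is a hedged, pure-Python model of that idea (the function name and values are illustrative; this is not Houdini's actual implementation):

```python
# A minimal sketch of trail-style velocity computation: backward
# finite difference of a point's position between two consecutive
# frames, scaled by the frame rate to get units per second.
def point_velocity(p_now, p_prev, fps=24.0):
    """Velocity vector (units/second) from positions on two consecutive frames."""
    return tuple((a - b) * fps for a, b in zip(p_now, p_prev))

# A point that moved 0.5 units in X over one frame at 24 fps:
v = point_velocity((1.0, 2.0, 0.0), (0.5, 2.0, 0.0), fps=24.0)
print(v)  # (12.0, 0.0, 0.0)
```

A static object simply yields a zero vector under this scheme, which is why non-moving geometry is harmless to include: its velocity pass is just black.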
  14. Hi, I am new to both Houdini's Mantra and Nuke, so bear with me. In summary, I have a scene with 2 objects: Box_A and Box_B. I want to configure Mantra to render them such that I can somehow distinguish them as EXR layers, i.e. produce one layer (RGBA) for Box_A and another layer (RGBA) for Box_B, so that selecting Box_A's layer gives me the RGBA for Box_A, and the same for Box_B. My goal is to color grade them independently in Nuke. One last note: I've tried the Node ID and Primitive ID layers in Mantra, but when I read them in Nuke, I got identical images!? i.e. both layers "seem" to have the same data; maybe I am reading them incorrectly in Nuke, but I am also new to Nuke... Thanks,
  15. I would go with a minimum of 32 GB RAM, because simulation takes lots of memory. Linux, in my experience, is usually better than Windows: if you set it up correctly, strip out all the GUI junk, and keep it minimal, you get the best performance out of your machine, plus you save memory for Houdini to use. BUT you have to know how to install it correctly and deal with all sorts of driver issues; it's not easy, but if you have spare time you can always find a solution. My advice: if you don't have experience with Linux, use Windows or Mac. As for the CPU, I don't know the current brands, but I would go with the best processor you can afford (is it the Intel i7 now?); the more cores you have, the faster simulation will be, because many functions in Houdini are multi-threaded and can use all your cores. Also, go with a larger processor cache. I am not sure about the graphics card; in theory Houdini can benefit from OpenCL acceleration, but I've tested it a couple of times in pyro sims and have not noticed much difference, if any (maybe my driver is not installed correctly?). A good graphics card is still needed because Houdini uses OpenGL to display your scene; I think 4 GB is a good minimum (just a side note: if your computer is only needed for computation, then I "think" you just need an okay card...). One more note: try to save your money; get what is good for now, so basically focus on a good processor and the bare-minimum RAM (i.e. 32 GB), because you might need the extra money to buy other software/plugins for your work