
Houdini Matrix to Renderman Transform?


Atom


Hi All,

 

I am playing around with a RIB exporter to try out the Renderman system. I have some Python code that exports the matrix for objects, lights and the camera, but it does not always work. I am wondering if there is some difference between the two systems that is causing the camera to point the wrong way and thus sometimes render black images.

 

Does anyone have any example python code that would show how to export the Houdini Matrix to a Renderman Transform?


So far my code looks like this.

 

Each Houdini node exposes four matrix options. I am not sure which one to use, or what needs to be swapped for what.

def export_transform(rib_writer, ob_name):
    n = hou.node(ob_name)
    if n is not None:
        #mtx = n.parmTransform().inverted()
        #mtx = n.localTransform().inverted()
        #mtx = n.preTransform()  #.inverted()
        mtx = n.worldTransform().inverted()
        rib_writer.emit_text('Rotate 180 0 1 0')
        rib_writer.emit_text('Scale -1 1 1')
        rib_writer.emit_text('ConcatTransform %s' % rib(mtx))
Which is kind of close. But compare the two images: one is the Houdini viewport and the pink image is the Renderman result. It also looks like the sphere should be 1-y, but I'm not sure how to do that with matrix math.

[attached images: Houdini viewport vs. Renderman render]
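The snippets in this thread call a `rib()` formatter that is never shown. A minimal sketch of what it might do (the name and behaviour are assumptions; it just flattens a 4x4 matrix into RIB's bracketed 16-float form):

```python
def rib(mtx):
    # Flatten a 4x4 matrix into RIB's bracketed row-major form:
    # [m00 m01 ... m33]. Accepts a hou.Matrix4 (via asTuple()) or any
    # flat sequence of 16 numbers. This helper is a guess at what the
    # thread's rib() does; the original is never posted.
    try:
        values = mtx.asTuple()  # hou.Matrix4 path
    except AttributeError:
        values = tuple(mtx)     # plain-sequence path
    return '[' + ' '.join('%f' % v for v in values) + ']'
```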


I made a little bit of progress in trying to export coordinates to Renderman. I discovered that the order of operations is very important when dealing with explicit LOC/ROT/SCALE. While my render output looks much closer to the viewport than in the previous post, I still find that certain camera angles simply don't work; I get a black render (probably because the camera is looking at nothing).

 

I wish someone could explain what is wrong with my code or post a correction to this def. I have incorporated detection of the transform order into this revision. I also set aside a special case for the camera matrix in case I need to process it differently from the rest of the objects in the scene.

 

def export_transform(file, ob_name):
    n = hou.node(ob_name)
    if n is not None:
        xOrd = n.parm("xOrd").eval()  # Transform order.
        rOrd = n.parm("rOrd").eval()  # Rotation order.

        if n.type().description() == 'Camera':
            # Placeholder: the camera may need special handling later.
            world_mtx = n.worldTransform()
        else:
            world_mtx = n.worldTransform()

        mMirror_XY = hou.Matrix4( (-1,0,0,0, 0,-1,0,0, 0,0,1,0, 0,0,0,1) )
        mtx = mMirror_XY * world_mtx * mMirror_XY

        pos = mtx.extractTranslates()
        rot = mtx.extractRotates()
        scl = mtx.extractScales()

        if xOrd == 0:
            # Transform order 0: Scale, Rotate, Translate (srt).
            file.write('Scale %f %f %f\n' % (scl[0], scl[1], scl[2]))
            if rOrd == 0:
                # Rotation order: X Y Z.
                file.write('Rotate %f %f %f %f\n' % (-rot[0], 1, 0, 0))
                file.write('Rotate %f %f %f %f\n' % (rot[1], 0, 1, 0))
                file.write('Rotate %f %f %f %f\n' % (rot[2], 0, 0, 1))
            else:
                print "WARNING: Unsupported rotation order [%s]." % rOrd
            file.write('Translate %f %f %f\n' % (-pos[0], pos[1], pos[2]))
        else:
            print "WARNING: Unsupported transform order [%s]." % xOrd
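The mMirror_XY sandwich in the code above is a change of basis: since a reflection is its own inverse, C * M * C conjugates M into the mirrored coordinate system. A standalone check with plain nested lists (no hou required) shows what this does to a translation:

```python
def matmul4(a, b):
    # Row-major 4x4 matrix multiply on plain nested lists.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Mirror X and Y; a reflection, so it is its own inverse.
MIRROR_XY = [[-1, 0, 0, 0],
             [ 0,-1, 0, 0],
             [ 0, 0, 1, 0],
             [ 0, 0, 0, 1]]

def conjugate_mirror(m):
    # C * M * C: express M in the mirrored basis.
    return matmul4(matmul4(MIRROR_XY, m), MIRROR_XY)

# A Houdini-style row-major translation by (1, 2, 3): the translation
# sits in the bottom row.
translate = [[1, 0, 0, 0],
             [0, 1, 0, 0],
             [0, 0, 1, 0],
             [1, 2, 3, 1]]
```

Running `conjugate_mirror(translate)` leaves the upper 3x3 as identity and turns the bottom row into [-1, -2, 3, 1], i.e. the x and y translations are negated, which is exactly the kind of axis flip being hunted for in this thread.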

Have you had a look in the SOHO source, to see if you can crib some notes from the SideFX RIB exporter? Last I checked, the install is identical between Apprentice/Indie/Escape/Master; just the licence server enables and disables features, so there should be something for you to study.


I am still working on this.

 

Does anyone understand matrix math and can explain it in working python code?

 

I can get coordinates out to the RIB file, but they are just wrong at render time. I keep flipping axes thinking I'll just land on the right combination, but I think I have been through all the possible combinations and there is always a flaw.

 

I find mixed conclusions on the net as to whether Houdini is a left- or right-handed axis system. One of the employees on the Renderman forum claims that Renderman is a left-handed system, but I really don't know what that means at the math level. I do know that in all the other exporters I have written, the matrix calculations are only a few lines of code. That is why I am surprised that there is not already some example code out there.
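For what it's worth, the Renderman Interface does define camera space as left-handed, with +Z pointing into the screen, while Houdini is right-handed and its cameras look down -Z. A common recipe in RIB exporters is a `Scale 1 1 -1` before concatenating the world-to-camera matrix. A sketch that just builds the RIB text (the recipe is an assumption based on typical exporters, not SideFX's or Pixar's own code):

```python
def camera_transform_rib(world_to_cam):
    # world_to_cam: the camera's inverted world transform as a flat
    # 16-tuple, e.g. cam.worldTransform().inverted().asTuple().
    # 'Scale 1 1 -1' flips Z to convert Houdini's right-handed camera
    # space (looking down -Z) into Renderman's left-handed one (+Z in).
    mtx = '[' + ' '.join('%g' % v for v in world_to_cam) + ']'
    return 'Scale 1 1 -1\nConcatTransform %s\n' % mtx
```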

 

I guess this thread will become the definitive guide that Google search engines will stumble upon when other users attempt to convert Houdini coordinates to Pixar Renderman coordinate system. This python code will calculate the matrix transformation from the Houdini camera to the Pixar Renderman camera.


After reading some more on the subject, I have come to the conclusion that the camera needs an additional matrix in order to function as a camera in the scene. This is still an assumption, and I wish someone who actually knew this stuff would chime in on a way to proceed. So here goes.

 

A second matrix is created based upon the screen windows settings, projection type, fov, clipping planes and aperture of the camera.

def view_plane(camera_name, winx, winy, xasp, yasp):  
    n = hou.node(camera_name)
    if n != None:
        #/* fields rendering */
        ycor = yasp / xasp
        use_fields = False
        if (use_fields):
          ycor *= 2
        
        # Fetch values from the named camera.
        nc = n.parm("near").eval()  
        fl = n.parm("focal").eval()
        wx = n.parm("winx").eval()
        wy = n.parm("winy").eval()
        ap = n.parm("aperture").eval()
        pt = n.parm("projection").eval()        # Projection type.
        if pt == 1:
            # Orthographic projection.
            #/* scale == 1.0 means exact 1 to 1 mapping */
            pixsize = n.parm("orthowidth").eval()
        elif pt == 0:
            # Perspective projection.
            sensor_size = 32.0    # Blender default.
            pixsize = (sensor_size * nc) / fl
        else:
            # Unsupported projection type at this time.
            pixsize = 1.0
 
        apx = ap
        apy = (winy * ap) / (winx * xasp)
        if apx > apy:
            # Horizontal.
            viewfac = winx
        else:
            # Vertical.
            viewfac = ycor * winy
 
        pixsize /= viewfac
 
        #/* extra zoom factor */
        pixsize *= 1 #params->zoom
 
        #/* compute view plane:
        # * fully centered, zbuffer fills in jittered between -.5 and +.5 */
        xmin = -0.5 * winx
        ymin = -0.5 * ycor * winy
        xmax =  0.5 * winx
        ymax =  0.5 * ycor * winy
 
        #/* lens shift and offset */
        dx = wx * viewfac # + winx * params->offsetx
        dy = wy * viewfac # + winy * params->offsety
 
        xmin += dx
        ymin += dy
        xmax += dx
        ymax += dy
 
        #/* the window matrix is used for clipping, and not changed during OSA steps */
        #/* using an offset of +0.5 here would give clip errors on edges */
        xmin *= pixsize
        xmax *= pixsize
        ymin *= pixsize
        ymax *= pixsize
    else:
        xmin = 0.0
        xmax = 1.0
        ymin = 0.0
        ymax = 1.0
 
    return xmin, xmax, ymin, ymax
 
def projection_matrix(camera_name):
    n = hou.node(camera_name)
    if n != None:
        rx = n.parm("resx").eval()
        ry = n.parm("resy").eval()
        ap = n.parm("aperture").eval()
        nc = n.parm("near").eval()
        fc = n.parm("far").eval() 
        pr = n.parm("aspect").eval()
        
        apx = ap
        apy = (ry * ap) / (rx * pr)
        if apx > apy:
            aspect_ratio = apx/apy
        else:
            aspect_ratio = apy/apx
 
        left, right, bottom, top = view_plane(camera_name, rx, ry, pr, pr)
        farClip, nearClip = fc, nc
        Xdelta = right - left
        Ydelta = top - bottom
        Zdelta = farClip - nearClip
        
        '''
        # Original Blender matrix mapping.
        mat = [[0]*4 for i in range(4)]
        mat[0][0] = (nearClip * 2 / Xdelta)
        mat[1][1] = (nearClip * 2 / Ydelta)
        mat[2][0] = ((right + left) / Xdelta) #/* note: negate Z  */
        mat[2][1] = ((top + bottom) / Ydelta)
        mat[2][2] = (-(farClip + nearClip) / Zdelta)
        mat[2][3] = -1
        mat[3][2] = ((-2 * nearClip * farClip) / Zdelta)
        return sum([c for c in mat], [])
        '''
        # I wonder if I have this mapping correctly? Porting this technique from a Blender script.
        # Note: the Blender version above leaves mat[3][3] at 0, not 1.
        mat = hou.Matrix4((
            nearClip * 2 / Xdelta, 0, 0, 0,
            0, nearClip * 2 / Ydelta, 0, 0,
            (right + left) / Xdelta, (top + bottom) / Ydelta, -(farClip + nearClip) / Zdelta, -1,
            0, 0, (-2 * nearClip * farClip) / Zdelta, 0))
        return mat
    else:
        return None
        
def export_transform(file, ob_name):
    n = hou.node(ob_name)
    if n is not None:
        if n.type().description() == 'Camera':
            # Special matrix output for the camera.
            m = projection_matrix(ob_name)
            mtx = n.localTransform() * m
        else:
            mtx = n.worldTransform()
        file.write('Transform %s\n' % rib(mtx))
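As an alternative to emitting a raw projection matrix, RIB lets the renderer build the projection itself via `Projection "perspective" "fov" [value]`. The fov can be derived from Houdini's focal length and aperture parameters (both in millimetres); the formula below is the standard pinhole relation, though whether it matches Houdini's camera model exactly is an assumption:

```python
import math

def houdini_fov_degrees(focal, aperture):
    # Horizontal field of view from focal length and aperture (mm):
    # fov = 2 * atan(aperture / (2 * focal)).
    return math.degrees(2.0 * math.atan(aperture / (2.0 * focal)))

# Houdini's defaults (focal 50mm, aperture 41.4214mm) give a 45-degree
# fov, which could be emitted as: Projection "perspective" "fov" [45.0]
```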
