
Voice Split Phoneme Lip Synch Setup?


Atom


Hi All,

I am trying to figure out how to leverage the very, very old VoiceSynch and Phoneme CHOP nodes, initially introduced in Houdini 4.0.

The documentation is sparse, and even Googling this topic only turns up reviews from nearly a decade ago.

Does anyone know how to set up these nodes to function as described in the Help?

For instance, I have a few lines of text that I have placed in the Phoneme node. This does get me channels.

Untitled-1.jpg

I have a wave file with only the voice in the file, so there is no background music to filter out. I can see this wave in the Motion View.

Untitled-3.jpg

I connect the wave to a VoiceSplit, which is supposed to listen to the wave and produce channels for words.

Untitled-2.jpg

What's the next step?

I don't feel like the VoiceSplit is working correctly. The result looks more like music than words. I expected each channel to be a solo of a single word; instead, each channel seems to contain a beat pattern. When I edit the parameters on the VoiceSplit node, Houdini often crashes.

My goal is not to drive blendshapes at this time. All I want out of the system is timing marks and phoneme information combined into a single channel.

I want phonemes over time.

Can Houdini be configured to produce this kind of output?
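To show the data shape I'm after, here is a plain-Python sketch (not Houdini API, and all names are made up for illustration): collapse one on/off channel per phoneme into a single integer-coded channel, i.e. "phonemes over time."

```python
# Sketch (plain Python, hypothetical data): collapse one on/off channel
# per phoneme into a single channel whose value at each sample is the
# index of the active phoneme (0 = silence).

PHONEMES = ["sil", "AA", "EH", "M", "S"]  # index 0 = silence

def combine(tracks, num_samples):
    """tracks: dict of phoneme name -> list of 0/1 samples.
    Returns one channel of phoneme indices over time."""
    combined = []
    for i in range(num_samples):
        value = 0
        for name, samples in tracks.items():
            if samples[i]:
                value = PHONEMES.index(name)
        combined.append(value)
    return combined

tracks = {
    "AA": [0, 1, 1, 0, 0, 0],
    "M":  [0, 0, 0, 1, 1, 0],
}
print(combine(tracks, 6))  # -> [0, 1, 1, 3, 3, 0]
```

A single index channel like that would be enough for my script; the phoneme names are recoverable by looking up the index.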

Right now I am using Papagayo which only supports 10 phonemes but Houdini claims to support 41. I would like the extra phoneme resolution for my lip synching script.

 

I have attached the scene which has the audio track locked into the File node.

ap_voice_synch_explore.hiplc

Edited by Atom

I have constructed a series of Font nodes, each with a text representation of a phoneme. I can route the CHOP output from the Phoneme node to a Switch and make the phonemes display over time. But, as the help card for the Phoneme CHOP states: "The phonemes are initially placed at regular intervals along the length of the channel."

Untitled-2.jpg

 

I have also managed to set up the VoiceSynch to produce an on/off-style CHOP wave that matches the input audio fairly well. But I still do not know how to connect the two CHOP channels so the phonemes appear at the correct time.

 

Some kind of fit function perhaps...?

Untitled-1.jpg

ap_voice_synch_explore_1b.hiplc

Edited by Atom

Sorry, no time to look at hip files but have you looked at this old tutorial?

http://www.digitalcinemaarts.com/docs/houdini/chops_lipsynch4.pdf

See also the parent directory which has tonnes of old stuff: http://www.digitalcinemaarts.com/docs/houdini/

Basically, you massage the channels so that they match the channel names on your BlendShape SOP. Then you can directly export the results to them (e.g. using an Export CHOP).
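To make the "massage" concrete: as I understand it, the key step is just renaming/remapping the CHOP channels so they carry the blendshape weight names, which is what lets an Export CHOP drive them directly. A plain-Python sketch of the idea (the mapping names are made up; yours depend on your shape names):

```python
# Sketch: remap phoneme channel names onto the BlendShape SOP's weight
# channel names so an Export CHOP could drive them directly.
# The mapping below is hypothetical, purely for illustration.

MAPPING = {
    "AA": "blendshape1",   # e.g. an open-mouth shape
    "M":  "blendshape2",   # e.g. a closed-lips shape
}

def rename_channels(channels, mapping):
    """channels: dict of old name -> samples. Returns a new dict keyed
    by the blendshape names; unmapped channels are dropped."""
    return {mapping[name]: samples
            for name, samples in channels.items() if name in mapping}

chops = {"AA": [0.0, 1.0, 0.5], "XX": [1.0]}
print(rename_channels(chops, MAPPING))
# -> {'blendshape1': [0.0, 1.0, 0.5]}
```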


I did discover that old PDF from Houdini version 4.0. The support files for the lipsynch PDF don't appear to be in that parent directory.

Is it a Swedish massage or a Turkish massage that I apply to the channels? :)


