Atom Posted August 28, 2016

Hi All,

I am trying to figure out how to leverage the very, very old Voice Sync and Phoneme CHOP nodes, initially introduced in Houdini 4.0. The documentation is sparse, and even Googling the topic only turns up reviews from nearly a decade ago. Does anyone know how to set up these nodes to function as described in the Help?

For instance, I have a few lines of text that I have placed in the Phoneme node, and this does get me channels. I have a wave file containing only the voice, so there is no background music to filter out, and I can see the waveform in the Motion View. I connect the wave to a Voice Split, which is supposed to listen to the audio and produce channels for words. What's the next step? The Voice Split does not seem to be working correctly: the result looks more like music than words. I expected each channel to be a solo of a single word, but instead each channel appears to contain a beat pattern. Houdini also often crashes when I edit the Voice Split parameters.

My goal is not to drive blend shapes at this time. All I want out of the system is timing marks and phoneme information combined into a single channel: phonemes over time. Can Houdini be configured to produce this kind of output? Right now I am using Papagayo, which only supports 10 phonemes, while Houdini claims to support 41. I would like the extra phoneme resolution for my lip-sync script.

I have attached the scene, which has the audio track locked into the File node.

ap_voice_synch_explore.hiplc
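To make the goal concrete, here is a minimal sketch (plain Python, not Houdini-specific, with invented function names) of what "timing marks and phoneme information combined into a single channel" could look like: a list of timed phonemes baked into one per-frame phoneme-index channel.

```python
# Hypothetical sketch: bake a list of timed phonemes into a single
# per-frame "phoneme index" channel, i.e. phonemes over time.
# All names and the frame rate here are made up for illustration.

def bake_phoneme_channel(phonemes, fps=24):
    """phonemes: list of (phoneme_id, start_sec, end_sec), sorted by start.
    Returns one sample per frame holding the active phoneme id (-1 = silence)."""
    if not phonemes:
        return []
    total_frames = int(round(phonemes[-1][2] * fps)) + 1
    channel = [-1] * total_frames  # -1 marks silence between phonemes
    for pid, start, end in phonemes:
        for f in range(int(start * fps), min(int(end * fps) + 1, total_frames)):
            channel[f] = pid
    return channel

# Example: two phonemes with a gap of silence between them.
track = [(3, 0.0, 0.25), (7, 0.5, 0.75)]
chan = bake_phoneme_channel(track, fps=8)  # → [3, 3, 3, -1, 7, 7, 7]
```

A channel of integer phoneme ids like this is trivial to feed into a Switch-style setup or to export to another tool as timing marks.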
Atom Posted August 29, 2016

I have constructed a series of Font nodes, each with a text representation of one phoneme. I can route the CHOP output from the Phoneme node to a switch and cause the phonemes to display over time. But, as the help card for the Phoneme node states, the phonemes are initially placed at regular intervals along the length of the channel.

I have also managed to set up the Voice Sync to produce an on/off style CHOP wave that matches the input audio fairly well. But I still do not know how to connect the two CHOP channels so that the phonemes appear at the correct time. Some kind of fit function, perhaps?

ap_voice_synch_explore_1b.hiplc
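One way to think about that "fit function" idea, sketched in plain Python rather than CHOPs (the function names and the on/off encoding are assumptions for illustration): find the voiced spans in the on/off channel, then spread the regularly spaced phonemes across only those voiced frames instead of the whole channel length.

```python
# Hypothetical retiming sketch (plain Python, not the actual CHOP network):
# squeeze evenly spaced phonemes into the voiced regions of an on/off channel.

def voiced_intervals(onoff):
    """Return (start, end) frame pairs where the on/off channel is on (1)."""
    spans, start = [], None
    for i, v in enumerate(onoff):
        if v and start is None:
            start = i
        elif not v and start is not None:
            spans.append((start, i - 1))
            start = None
    if start is not None:
        spans.append((start, len(onoff) - 1))
    return spans

def retime_phonemes(phonemes, onoff):
    """Spread the phoneme list evenly across the voiced frames only,
    returning (phoneme, frame) pairs in order."""
    frames = [i for a, b in voiced_intervals(onoff) for i in range(a, b + 1)]
    if not frames or not phonemes:
        return []
    step = len(frames) / len(phonemes)
    return [(p, frames[int(k * step)]) for k, p in enumerate(phonemes)]

# Example: two words of speech separated by silence.
onoff = [0, 1, 1, 1, 0, 0, 1, 1, 0]
timed = retime_phonemes(["a", "b", "c", "d", "e"], onoff)
# → [('a', 1), ('b', 2), ('c', 3), ('d', 6), ('e', 7)]
```

This is only a linear redistribution; a real solution would also want per-word grouping so phonemes from one word cannot leak into the next voiced span.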
edward Posted August 30, 2016

Sorry, no time to look at hip files, but have you looked at this old tutorial? http://www.digitalcinemaarts.com/docs/houdini/chops_lipsynch4.pdf

See also the parent directory, which has tonnes of old stuff: http://www.digitalcinemaarts.com/docs/houdini/

Basically, you massage the channels so that they match the channel names on your BlendShape SOP. Then you can just directly export the results to them (e.g. using an Export CHOP).
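The "massaging" edward describes is essentially a renaming/mapping step. A hedged plain-Python illustration (the phoneme names, shape names, and mapping below are all invented; a real rig defines its own):

```python
# Hypothetical illustration of "massaging" channels: rename phoneme channels
# so their names match the inputs on a Blend Shape SOP, ready for export.
# Every name in this mapping is invented for the example.

PHONEME_TO_SHAPE = {
    "AA": "mouth_open",    # invented mapping; use your rig's shape names
    "M":  "lips_closed",
    "OO": "lips_round",
}

def massage_channels(channels):
    """channels: dict of {phoneme_name: samples}. Returns a dict keyed by
    blend-shape channel names so an export can target the SOP directly."""
    out = {}
    for name, samples in channels.items():
        shape = PHONEME_TO_SHAPE.get(name)
        if shape is not None:   # drop phonemes the rig has no shape for
            out[shape] = samples
    return out
```

In Houdini itself this step would typically be done with a Rename CHOP (or similar) rather than a script, but the idea is the same: channel names drive which blend shape each channel animates.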
Atom Posted August 30, 2016

I did discover that old PDF, from Houdini version 4.0. The support files for the lipsynch PDF don't appear to be in that parent directory, though. Is it a Swedish massage or a Turkish massage that I apply to the channels?