Atom

Moon DEM Data To Model Script


Hi all,

 

 

I have released a first draft of my moon scanner script, written in Python. You can download it from this post below.

 

 

 

 

untitled-2.jpg

Edited by Atom


http://igl.ethz.ch/projects/instant-meshes/

 

It's not Houdini, but it might work.

 

 

 

Or you can use the Triangulate 2D SOP node. If you provide UVs on your points, you can use them to generate the surface.

I tried it quickly, but I don't know how to handle the seams of that geometry. I might need to study and play more with the Triangulate 2D node to use it properly.

triangulate2d.hip

Edited by MENOZ


Thanks for the tips, I went ahead and created the faces using python and I think it is working.

post-12295-0-48730200-1450738586_thumb.j

    # Create faces from vertices by scanning LEFT->RIGHT, TOP->BOTTOM.
    len_lines = lat_count - 1
    len_points = long_count - 1
    len_vertices = len(Vertices)
    for line_index in range(len_lines):
        for point_index in range(len_points):
            # Get 1st point and immediate neighbor.
            vertex_index = point_index + (line_index * len_lines)
            v1 = Vertices[vertex_index]
            v2 = Vertices[vertex_index + 1]
            # Get 1st point and immediate neighbor, next line down.
            vertex_index = point_index + ((line_index + 1) * len_lines)
            v3 = Vertices[vertex_index]
            v4 = Vertices[vertex_index + 1]

            pt0 = geo.createPoint()
            pt0.setPosition(hou.Vector3(v1[0], v1[1], v1[2]))
            pt1 = geo.createPoint()
            pt1.setPosition(hou.Vector3(v2[0], v2[1], v2[2]))
            pt2 = geo.createPoint()
            pt2.setPosition(hou.Vector3(v3[0], v3[1], v3[2]))
            pt3 = geo.createPoint()
            pt3.setPosition(hou.Vector3(v4[0], v4[1], v4[2]))
            poly = geo.createPolygon()
            # Note: vertex order is important for consistent winding.
            poly.addVertex(pt0)
            poly.addVertex(pt1)
            poly.addVertex(pt3)
            poly.addVertex(pt2)
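For reference, the index arithmetic for walking a point grid can be sketched on its own, independent of Houdini (a toy helper of my own, not part of the script; note the row stride is the full number of points per row):

```python
def quad_point_indices(rows, cols):
    """For a grid of rows x cols points, return the four point indices
    of each quad, wound 'top-left, top-right, bottom-right, bottom-left'.
    The row stride is `cols`, the total number of points per row."""
    quads = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = c + r * cols          # top-left point of this quad
            quads.append((i, i + 1, i + 1 + cols, i + cols))
    return quads

# A 2x2-point grid contains exactly one quad:
print(quad_point_indices(2, 2))  # [(0, 1, 3, 2)]
```

Building polygons from shared point indices like this (rather than creating four fresh points per quad) also keeps the mesh fused, so neighboring quads share their edge points.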

I'm just not sure if those wide soft bands are an error in my generation code or simply gaps in the altimeter data?

post-12295-0-32899900-1450738594_thumb.j

 

post-12295-0-76341600-1451315937_thumb.j

 

post-12295-0-34724600-1451315946_thumb.j

 

post-12295-0-29918200-1451352230_thumb.j

Edited by Atom


I tried another approach. This time I feed the Python node with a grid. The grid has the same number of points in its rows and columns as the DEM data does, so they match one to one. Then I simply loop through the DEM data and attach the DEM height as an attribute on each associated point. Using an Attribute VOP I can displace, using the stored height data as the amount.

 

This works; however, I end up with a flat representation of the moon rather than a spherical one.

 

Is there a spherify node or a "cast into sphere shape" option somewhere?

 

How do I bend a grid around a sphere?
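For what it's worth, bending a lat/long grid around a sphere is just the spherical-to-Cartesian conversion applied per sample (a minimal sketch of one common Y-up convention, names are mine; the DEM height would be added to the base radius before projecting):

```python
import math

def latlong_to_xyz(lat_deg, lon_deg, radius):
    """Map a latitude/longitude sample (in degrees) and its radius to
    Cartesian coordinates on a Y-up sphere."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.sin(lon)
    return (x, y, z)

# The north pole of the 1737.4 km reference sphere maps to (0, 1737.4, 0).
print(latlong_to_xyz(90.0, 0.0, 1737.4))
```

Applied inside the existing point loop, each grid row becomes a ring of latitude instead of a straight line, which curves the flat sheet into a sphere.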

 

LDEM_4.IMG: lowest-quality data set (1440x720)

post-12295-0-04260800-1450753375_thumb.j

 

LDEM_16.IMG: next-quality data set (5760x2880)

This data set takes 20 GB of RAM in the scene to hold the entire moon's surface at this quality.

post-12295-0-62739400-1450756334_thumb.j

 

LDEM_64.IMG: Data set (23040x11520)

I was able to bring in a small portion of the total surface for rendering.

post-12295-0-89209200-1450758090_thumb.j

 

NOTE: There are much more detailed DEM files available, up to LDEM_1024. I am not sure how to leverage such dense data.

Edited by Atom


You can use a Creep SOP to wrap the grid onto a sphere, or maybe you can run your script on a sphere directly?

 

For the data size, I don't understand if the problem is rendering it or just managing it.

 

Have you tried converting to a polysoup? It might help.

Are you using some sort of delayed-load rendering?

Do you need it to be geometry? Maybe you could try to extract a displacement map.


The problem seems to be the sheer amount of data. Consider the LDEM_64 data set: mapping every point in the set to a sphere would need 265,420,800 points, not to mention any memory for faces. 265,420,800 points * 3 floats for X,Y,Z * 4 bytes each, just for reading the information, leads to big numbers that won't fit in my computer. And that is just the LDEM_64 data set; imagine working with the 128, 256, 512, or 1024 sets.
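That point-count arithmetic can be sanity-checked in a few lines (a rough sketch covering raw float storage only; Houdini's own per-point and per-primitive overhead multiplies this further):

```python
# Rough storage arithmetic for the LDEM_64 point cloud.
points = 23040 * 11520              # samples in the LDEM_64 data set
raw_bytes = points * 3 * 4          # X, Y, Z stored as 4-byte floats

print(points)                       # 265420800
print(raw_bytes / 2**30)            # ~2.97 GiB for raw positions alone
```

Raw positions are only a few GiB, so it is the per-point attribute and connectivity overhead of live geometry, not the floats themselves, that pushes the scene past available RAM.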

 

Working with slices of the data offers small windows into the surface of the moon. Here is a 20-degree longitude sweep over 6 degrees of latitude using the LDEM_64_FLOAT data set.

post-12295-0-86454200-1450816880_thumb.j

 

And another slice.

post-12295-0-48960300-1450819893_thumb.j

 

I don't know much about delayed-load rendering. Because I am creating the model in the SOP context on the fly, I'm not sure how delayed load could be leveraged.

 

I don't really need it to be geometry, but I do like having the ability to see it in the viewport and frame up features interactively.

Edited by Atom


So there are two problems: building the high-res geometry and displaying it in the viewport. Rendering shouldn't be a massive problem, given Mantra's abilities.

 

I don't know how these data sets are structured, so I don't know if this may work.

 

To build the high res geometry itself:

You could partition the data set further and build smaller models from the pieces; for each one, convert to a polysoup, pack it, and save it to disk.

 

To display:

If loading the whole raw geometry is not possible, you have to decide how you want to preview and interact with it in the viewport.

 

You could perhaps create various levels of detail and manually switch between them as needed.

Or extract displacement maps and use them to displace lower-res geometry. You could automatically change the resolution of your mesh depending on where your camera is, and load different resolutions of your displacement map to get more detail.

Or keep a low-res geo displaced by a displacement map in the viewport as a reference, and use Mantra to see the details.

 

For rendering, you can use delayed load or, again, render with displacement.

 

 

I can't think of other ways to approach this off the top of my head.


Thanks for the suggestions. I am thinking along the lines of figuring out the largest partition size my machine can handle and creating sub-models at that size for each section, in the .bgeo format.

 

Using the LDEM_64 data I have upped my generation swath to encompass 24 degrees of latitude.

post-12295-0-77082900-1450825581_thumb.j

 

I just pulled down the LDEM_128 data, which is 4 GB in size.

Here is a 4-degree-of-latitude sample of that data's quality. This shows a zoomed-in area from the previous image.

post-12295-0-52450000-1450828349_thumb.j

 

This image took 27 minutes to render: a 28-degree latitude by 8-degree longitude region using the LDEM_128 data set. Mantra reported 1.34 GB of memory.

post-12295-0-62165400-1450832874_thumb.j

Edited by Atom


Convert to a floating-point texture and then displace at render time, preferably a .rat texture.

 

You can also use that same texture to offset geo at whatever rez you want.

 

I suspect you can find a TIFF version of the data out there somewhere...


"convert to a floating point texture"

There are .JP2 (JPEG 2000) images as companion files to the .IMG data; however, Houdini will not read them, and neither will Photoshop. A TIFF container has a 4 GB data limit; perhaps a RAT file can store more? At some point I feel like I would run out of room and have to switch to using various channels for various quadrants, which leaves me back where I started. The .IMG files are already in that state: partitioned binary data.

 

I am not really looking for a way to cram the data into a smaller container like an image map. I have a complete low-res moon object (a 90 MB .bgeo) for casual rendering, but I really want to look at the data directly to see what the LOLA device actually recorded on the surface of the moon.

 

For example, the .LBL file claims the data in the .IMG is in the range -8.746 to 10.380. However, when I read the actual data from LDEM_4.IMG I get numbers way out of bounds.

post-12295-0-27123200-1450890246_thumb.j

 

So I have added some bounding logic to the generation that clamps the data to the MIN/MAX values from the .LBL file and colorizes points that fall out of range: blue points fall below (and are considered noise), and red points fall above.

post-12295-0-21798800-1450890254_thumb.j

As you can see, a good portion of the moon's surface is represented in error using the LDEM_4.IMG data set.

 

It would be nice to offer a 'fit' function for the data as well, but I don't know how to use the HScript or VEX fit features from Python.
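For reference, the VEX fit() is easy to replicate directly in Python inside the Python SOP (a sketch of my own; the clamping behavior matches VEX's fit, and the function name is mine):

```python
def fit(value, old_min, old_max, new_min, new_max):
    """Python equivalent of the VEX fit(): remap `value` from the range
    [old_min, old_max] into [new_min, new_max], clamping values that
    fall outside the source range (as VEX fit does)."""
    if old_max == old_min:
        return new_min
    t = (value - old_min) / float(old_max - old_min)
    t = max(0.0, min(1.0, t))
    return new_min + t * (new_max - new_min)

# The midpoint of the .LBL range -8.746..10.380 remaps to roughly 0.5:
print(fit(0.817, -8.746, 10.380, 0.0, 1.0))  # ~0.5
```

With this, out-of-bounds spikes can be remapped into any display range instead of only being clamped and colorized.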

 

My current scaling scheme is based upon a note from the .LBL description file.

Map values are relative to a radius of 1737.4 km.

My code, where SCALING_FACTOR is 0.5 as shown in the above images:

Radius = (data_height_sample * SCALING_FACTOR) + MOON_RADIUS

The higher-resolution LDEM_128.IMG data set seems to have none of the upper data spikes the lower-quality set has; all we see is the blue noise floor, as expected.

post-12295-0-86342000-1450893993_thumb.j

After viewing more portions of the LDEM_128 data, I did locate some areas where the data runs out of the upper bounds as well.

post-12295-0-79896300-1450897901_thumb.j

Here is a small fix-up to the bounding code. In this image I skip clamping the lower data but still colorize it, so you can see the noise floor of the lower data in blue.

post-12295-0-68439200-1450898573_thumb.j

if dh < MIN_HEIGHT:
    #dh = MIN_HEIGHT # Uncomment for the flat-floor look.
    if hou.pwd().parm("tgl_colorize").eval():
        color_min = True
Even more LDEM_128_FLOAT data with upper-boundary errors. How can such a wide swath be out of bounds while the neighboring areas look fine?

post-12295-0-41329300-1450899670_thumb.j

Edited by Atom


Even if you have to section up the image files, having RAT-format data allows for a ton of benefits: mipmapping, VEX/shader execution, texture filtering, random access...


So how would I write a .RAT file?

My data set for LDEM_128 is 46080x23040 pixels in size. LDEM_256 is double that, LDEM_512 double that again, etc.

Will a COP network support an image of those dimensions? I know there are output limits for Apprentice and Indie work out of COPs.

I assume I would use COP somehow to write the data.

I guess I would need an inverse of attributeToMap inside of COP.

Can Python in a SOP write to a COP?

Edited by Atom


In COPs, you can sample SOP geo via a VOP network. You'd definitely need to break things down into tiles, ideally power-of-two tiles for mipmapping efficiency.

 

The 128 data would break down into 1024x512 tiles and still divide evenly into 360 degrees. That would make 2,025 images (a 45x45 array, each tile covering an 8x4 degree section).
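That tile arithmetic can be checked in a couple of lines (a quick sketch, assuming the 128 data set is 46080x23040 samples for the whole moon):

```python
# Tile arithmetic for the LDEM_128 data set.
width, height = 46080, 23040
tile_w, tile_h = 1024, 512          # power-of-two tiles for mipmapping

cols = width // tile_w              # tiles across 360 degrees of longitude
rows = height // tile_h             # tiles across 180 degrees of latitude
print(cols, rows, cols * rows)      # 45 45 2025
print(360 / cols, 180 / rows)       # 8.0 4.0 degrees per tile
```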

 

You could use higher-rez data with the same number of tiles to get 4096x2048 tiles (from the 512 data, if that's even available) for the same 8x4 degree coverage...

 

That's a lot of images, but it does the best job of preserving the details (i.e., you have a one-to-one match for the most part).

Edited by fathom


I have re-written the main calculating routine to sample from the data set instead of being tightly bound to it as before. This allows the user to supply the routine with the highest-quality .IMG sample data, then choose how coarse or fine the result should be through resampling.
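The resampling idea can be sketched in plain Python (a toy nearest-neighbor decimation of my own; it is not the script's actual routine, just the principle of keeping every Nth sample in both directions):

```python
def resample_grid(grid, step):
    """Nearest-neighbor decimation of a 2D list of height samples:
    keep every `step`-th row and every `step`-th sample within a row."""
    return [row[::step] for row in grid[::step]]

# A tiny 4x8 'DEM' where each value encodes its row and column:
dem = [[r * 10 + c for c in range(8)] for r in range(4)]
coarse = resample_grid(dem, 2)
print(coarse)  # [[0, 2, 4, 6], [20, 22, 24, 26]]
```

The same high-res source then feeds any output density, which is what the Resample 128 down to Resample 08 series below illustrates.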

Here is a series over a small section of the moon (0-15 degrees latitude by 0-30 degrees longitude) at the 1024 data resolution, taken from the file LDEM_1024_00N_15N_000_030.IMG.

Resample 128:
post-12295-0-40289500-1450984572_thumb.j

Resample 64:
post-12295-0-51854700-1450984564_thumb.j

Resample 32:
post-12295-0-55083700-1450984556_thumb.j

Resample 16:
post-12295-0-44507800-1450984549_thumb.j

Resample 08:
post-12295-0-03942000-1450984541_thumb.j

Edited by Atom


After another re-write of the .IMG file scanner, I think I have worked out the mysterious spiked-data problem. My file read was skewing bytes as it read them, causing heights from rows and columns not to line up as they should. I have also added what I am calling an 'aesthetic scale' for the height data. It is the Python equivalent of a VEX fit function and remaps the height data from its original range into an aesthetically pleasing one.
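For anyone hitting the same skew problem: decoding whole rows with an explicit sample count and byte-order prefix in `struct` avoids silent byte misalignment (a sketch of my own; it assumes 4-byte IEEE floats as in the _FLOAT variants, so check the .LBL's SAMPLE_TYPE and SAMPLE_BITS fields for the exact layout of your file):

```python
import struct

def read_dem_row(raw, sample_count, little_endian=True):
    """Decode one row of 4-byte IEEE floats from an .IMG record.
    The explicit '<' or '>' prefix pins the byte order so a wrong
    platform default can't skew the values."""
    prefix = "<" if little_endian else ">"
    return list(struct.unpack("%s%df" % (prefix, sample_count), raw))

# Round-trip check with synthetic data (values exact in float32):
row_bytes = struct.pack("<3f", 0.0, 1.5, -2.25)
print(read_dem_row(row_bytes, 3))  # [0.0, 1.5, -2.25]
```

Reading an exact multiple of the sample size per row also makes any record-length mismatch fail loudly in struct.unpack instead of quietly shifting every subsequent row.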

 

Here is a top-view shot from the LDEM_128 .IMG file: a small 6.4x3.6 degree rectangle.

post-12295-0-58866800-1451360569_thumb.j

 

The same data set, but with the camera planet-side.

post-12295-0-71595000-1451360603_thumb.j

 

Another top view from another section of the moon: an expanded section of LDEM_128 at 12.8x7.2 degrees.

post-12295-0-70017700-1451365928_thumb.j

 

And the companion planet-side view.

post-12295-0-33903500-1451362763_thumb.j

 

Here is the same location but a wider area covering 25x14 degrees of the moon's surface.

post-12295-0-75632700-1451399578_thumb.j

 

And the planet side view.

post-12295-0-53021400-1451399602_thumb.j

 

Edited by Atom

Amazing, I like it a lot.

How long does your code take to create the geometry?

Are you displacing points in the end, or creating the geo from scratch in Python?

Have you managed to tile the data?


This project turned into more than I wanted. I thought I could just pull down the data and view the surface of the moon up close, but calculation times are long: for the LDEM_128 12.8x7.2 degree tile, it took my single-core Python script 40 minutes to calculate on my 4.4 GHz AMD machine. If I make the tile too big I can easily exceed my 24 GB memory limit, and then it really slows down into hours and hours of calculating the surface. The LDEM_4 data set calculates fairly quickly, however. Render times are not that bad.

 

I am creating the surface from scratch in Python. I tried the displacement approach early on but found it took much longer to use a series of Houdini nodes than to simply contain it all in a single script. Yesterday I tried a hybrid Python/VEX approach where I used Python only to read the DEM data and generate points, storing the height information as an attribute on those points. Then I dropped down an Attribute Wrangle and used the VEX-based wrangle to scan the points and create the faces. While I did see an improvement in CPU usage (up from 14% to 60%), the overall time to create the geometry was about the same, so I dropped back to using Python for the entire generation. I left the VEX code in the HIP file; if you want to play around with this technique, simply set projection_type=2 (bottom of the Python code) and activate the Attribute Wrangle.

 

There are gigs and gigs of data to pull down from NASA if I wanted to assemble the entire surface, and I only have a small SSD drive at this time, so generating a complete tile set is still on the to-do list.

 

I did manage to generate a complete moon surface as a .bgeo model from the LDEM_16 data set. This resulted in a 537 MB model, which would be good for any distant shot. But once you get too close to the surface, the detail is lost, as you can see in the planet-side shots from the LDEM_128 data above.

 

The LDEM_4_FLOAT, LDEM_8_FLOAT, LDEM_16_FLOAT, LDEM_64_FLOAT, and LDEM_128_FLOAT data sets contain the entire surface of the moon in a single file. This is convenient for my current scanner code because I can specify any latitude or longitude within a single file. The higher-resolution data sets are broken into sections of the moon that cover only a portion of the latitude and longitude range, meaning my scanner, in its current form, cannot cross a boundary and fetch data from companion files yet.

 

I do have a basic line scanner which lets you view a small window into the highest-resolution data (256, 512, and 1024), but the area that fits within my computer is quite small (0.6 by 0.3 degrees). Set projection_type=1 in the Python code if you want to use this experimental approach to viewing the large data sets.

 

If anyone wants to play around with the code, I am posting it here along with the LDEM_4 data set, which is the lowest-quality moon data. For best results, start with small sections of latitude and longitude and increase the range as you observe how long any given range takes to calculate. Beware: there are rules for the latitude and longitude To/From parameters, and the code will break if those rules are broken.

 

Additional higher-quality data sets that are compatible with this code can be downloaded here. Remember to download both the .IMG and the .LBL files; the .LBL is the descriptor that tells the Python code how to read the .IMG file.

 

Have fun, and post any moon pictures you make with this!

post-12295-0-84594000-1451487888_thumb.j

 

post-12295-0-53459100-1451488223_thumb.j

atoms_houdini_moon_scanner.zip

Edited by Atom
