3D Height adjusting

This is probably obvious to the more experienced members, but I wanted to pass it on to struggling noobs like myself.
I figured out how to bring in greyscale height-mapped images and, where necessary, frame them. The question I have seen many people asking is: how do you adjust the height of 3D (2.5D) objects?
It’s deceptively simple but easily overlooked.

In the ‘Model Import’ menu:

Angle = the angle of the image relative to the screen
Height = how tall the 3D model will be above the baseline
XY Scale = how big the 3D model will be on the screen

Down low on that same screen is “Base Height”. That’s the level the bottom of the 3D object is stacked on top of, like adding a backer plate.

This Eureka moment has led to a number of successes today.

If I have misstated any of this, please feel free to jump in. I had been frustrated trying to figure this out until this morning, and I hope this helps some other users.


Good information, I was just about to give this a whirl.

I believe most images give you 256 levels of grey to represent height (8 bits per channel, whether that’s 8-bit greyscale or 24-bit colour).
There is an XY scale to determine how big each pixel will be.

If you were doing an image that represents ground elevation (DEM/GIS), and you wanted true scale, how would you figure it out?

I would think the difference between the min and max elevation divided by the number of grey levels would give you a height per grey level.
The XY scale would then be the ground distance covered by the image divided by the image’s width in pixels?
Maybe I am tired just over thinking this …
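Under the assumption of an 8-bit greyscale DEM (grey values 0–255, with 0 the lowest elevation and 255 the highest), the arithmetic above can be sketched like this; the function name and numbers are made up for illustration:

```python
# Rough sketch of "true scale" for an 8-bit greyscale DEM image.
# Assumes grey 0 = lowest elevation, grey 255 = highest elevation.

MAX_GREY = 255  # highest grey value in an 8-bit image

def dem_scale(elev_min, elev_max, ground_width, image_width_px):
    """Return (height per grey level, ground distance per pixel).

    elev_min / elev_max: real-world elevations at grey 0 and grey 255.
    ground_width: real-world distance spanned by the image's width.
    image_width_px: image width in pixels. Units are whatever you feed in.
    """
    z_per_grey = (elev_max - elev_min) / MAX_GREY
    dist_per_pixel = ground_width / image_width_px
    return z_per_grey, dist_per_pixel

# e.g. 100 m of relief over a 2 km wide, 1000 px wide image:
# dem_scale(0, 100, 2000, 1000) -> (100/255 m per grey level, 2 m per pixel)
```

The two numbers are independent: the first tells you what Z height to enter on import, the second what XY scale reproduces true ground scale.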


You also have to consider any “Base Height” already built into the image (or 3D model).
If the image uses all of the colors from 0 - 255, the lowest spot on your contour being 0, and the highest spot being 255, then you will be cutting the entire height that you set on import.
But what if the image uses the colors 128 - 255? Now your surface is only half as deep as the height used on import. Like it already has a “Base Height”.
Likewise, what if the colors don’t go all the way to 255? If it uses the colors 0 - 127, now you will remove half of that height before starting to cut the 3D surface.
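As a quick check of the two cases above, assuming a 10 mm import height (a made-up number for illustration):

```python
import_height = 10.0               # mm, example import height
z_per_grey = import_height / 255   # height represented by one grey level

# Image uses greys 128-255: only the top half of the range carries relief,
# so the carved surface is only about half as deep as the import height.
surface_depth = z_per_grey * (255 - 128)   # ~4.98 mm of actual relief

# Image uses greys 0-127: the top half of the height is removed as flat
# stock before the cutter ever reaches the 3D surface.
flat_removal = z_per_grey * (255 - 127)    # ~5.02 mm cut away first
```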

The same applies with STL data. The height you use on import represents the total height of the model.

So, in both cases you need to know what’s in the data: what colors are used in the image, or how the STL is built. Or you can do the import and try to do some analysis afterward. You can build other 3D components to a known height and temporarily use them as gauges.

It might be nice if CC gave us some info on the incoming data once it’s selected & opened, but before we commit to the component.


Wow. Thanks for the heads up !

The STL import is pretty straightforward if you know your STL extents.
Then the CC import height divided by the model height gives the scale factor for the XY dimensions.
Correct? Unitless?
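If that’s right, the STL case is just a unitless ratio applied to all three axes. A minimal sketch, with hypothetical names (not CC’s internals):

```python
def stl_scale_factor(cc_import_height, model_z_extent):
    """Unitless scale factor: CC import height over the model's Z extent."""
    return cc_import_height / model_z_extent

def scaled_footprint(model_x, model_y, cc_import_height, model_z_extent):
    """Apply the same factor to X and Y to predict the imported footprint."""
    s = stl_scale_factor(cc_import_height, model_z_extent)
    return model_x * s, model_y * s

# e.g. a 100 x 50 x 40 STL imported at height 20 -> factor 0.5 -> 50 x 25
```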

The image scale is less straightforward if you are trying to predict the resulting Z height of a given pixel.

The CC UI height / 255 would determine the effective Z height per grey level.

I assume that the min value (0 = black) is interpreted as the bottom of the model.
You would need to determine the min/max greyscale range in the image (histogram).
That range would determine the effective cutting height.

Then you would have to use the Base Height to offset the data from the minimum effective cutting height.
That offset would be the (CC UI height / 255 ) * min value ???

This all would require CC to scan the data BEFORE offering the CC UI options, or have the user enter the data. I am not sure if using Stock Top or Bottom affects anything.

I am pretty sure I have missed something …
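Putting the steps above together as a sketch, assuming 0 = black = bottom and that you have already read the min/max grey values from a histogram (the function name is made up):

```python
def effective_heights(cc_ui_height, grey_min, grey_max):
    """Predict the baked-in base offset and the effective cutting height
    for an image whose greys span [grey_min, grey_max] out of 0-255."""
    z_per_grey = cc_ui_height / 255
    baked_in_base = z_per_grey * grey_min                  # offset below the lowest grey used
    effective_height = z_per_grey * (grey_max - grey_min)  # actual relief depth
    return baked_in_base, effective_height

# e.g. a 10 mm import of an image whose greys span 128-255:
# effective_heights(10, 128, 255) -> (~5.02 mm baked-in base, ~4.98 mm of relief)
```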


Most of the time you can tell by looking at the image. Does it look like there is any pure black and white?

After that, I import it and set the scale and height I think I need. Then I create some additional components to sanity-check what I thought was coming in. If an adjustment is needed: delete the component, try again, measure again… I’ve never looked at the raw data in the image, although having at least a range of pixel values displayed would be helpful and save some trial-and-error time.

But yes, your last statement is true, if you want to do it that way. :wink:

