Height map for normal input: sharp edges
So I have generated this simple height map to test my add-on:
Here are the node setup and the result I am getting in Blender:
What is happening here (this is a theory; I did not analyze the actual pixel values programmatically): the Blender Bump node interprets every slightest variation in pixel value (say, 0.985 white versus 0.99 white) as a variation in height, and the result is these ugly cutoffs. By the way, here is a similar question with the same problem, unsolved: height map brings weird result
Now, if I do the conversion in CrazyBump, I have the option to ignore small details, and a map converted with CrazyBump is displayed correctly in Blender. By its nature, no height map, even a 16K one, will ever be absolutely smooth, because the set of possible heights is a set of discrete values. The task of an algorithm would be to produce smooth normals from this, and that is what the Bump node evidently does not do. Any suggestions? (Using other software is not an option; I want to achieve this specifically in Blender's node editor, for any black-and-white map.)
And just to prove the point: if I tweak the small details all the way up in CrazyBump, I get essentially what the Bump node in Blender is doing, over-exaggerating insignificant value variations:
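To test this quantization theory directly, a few lines in Blender's Python console can count the distinct pixel values and estimate the step between them; a minimal sketch, assuming the map is loaded as an image named 'HeightMap' (a hypothetical name):
import bpy

img = bpy.data.images['HeightMap']   # hypothetical name
values = sorted(set(img.pixels[:]))  # flat RGBA floats in [0, 1]
print("distinct values:", len(values))
if len(values) > 1:
    # the smallest gap approximates the quantization step:
    # ~1/255 points to an 8-bit source, ~1/65535 to a 16-bit one
    print("smallest step:", min(b - a for a, b in zip(values, values[1:])))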
It might have to do with the bit depth of the height map: 8-bit images are not enough to resolve subtle gradation. Try using images with a higher bit depth. Also, images used for bumps and normals should not be interpreted as color, but as Non-Color Data (linear).
– cegaton
@cegaton Thanks for responding. bpy.data.images['...'].depth tells me it's 32-bit depth (is this the right indicator?). Also, changing to Non-Color does not affect the result. Take a look at the last illustration: CrazyBump can algorithmically ignore or exaggerate those cutoffs by setting its degree of attention to fine details.
– SerhiiPoklonskyi
32 bit is 8 bits per channel plus alpha (8 bit × 4). Try creating the image as a 16-bit-per-channel file. Non-Color Data should always be used for bump and height maps, to avoid the distortion caused by the 2.2 gamma curve used in images encoded as sRGB. You might also get better results using a Displace modifier plus Subsurf than using displacement in the material or shader.
– cegaton
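That distinction is easy to confirm in the Python console; a small sketch (the image name 'HeightMap' is hypothetical, and Image.depth reports the total bits across all channels, which is why an 8-bit RGBA file shows 32):
import bpy

img = bpy.data.images['HeightMap']            # hypothetical name
print("total bits:", img.depth)               # 32 for an 8-bit RGBA image
print("channels:", img.channels)              # e.g. 4 for RGBA
print("bits per channel:", img.depth // img.channels)
print("float buffer:", img.is_float)          # typically True for >8-bit sources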
@cegaton Holy Jesus, it has solved the problem! Marry me, cegaton! I totally agree with you regarding displacement, but my idea was to combine individual alpha elements to produce reliefs and carvings easily; now it will work, thanks to you. If you wish, post an answer for others, ideally with a bit of explanation on depth and how it affects the result.
– SerhiiPoklonskyi
Will do soon. Thank you again!
– SerhiiPoklonskyi
1 Answer
Short answer: as @cegaton pointed out, save your image with 16-bit depth. If you are baking, as I was, this has to be done before the bake: re-saving later does not recover information already lost during baking.
Go to the 'Save As' dialog and choose 16-bit depth in its lower-left corner.
A (not so) technical explanation: if you are not familiar with it, depth is the number of bits used to store a pixel's color. With 8 bits you can store at most 2^8 = 256 distinct values; with 16 bits, 2^16 = 65536. What was happening in my case: for the different height values in the geometry I was baking, only an approximation could be stored. In other words, for heights x − 0.01, x, and x + 0.01, it would simply store x, because there are not enough bits to keep all three apart. Hence the flat areas in your image, where the color gets 'simplified'. For my 8-bit image, I ran this for learning purposes:
len(set(bpy.data.images['My8BitImg.png'].pixels))
This returned 256, while
len(set(bpy.data.images['My16BitImg.png'].pixels))
returned 42891 in my case.
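As an aside, for large images, reading the pixels in bulk into NumPy is much faster than building a Python set from them; a sketch, assuming a Blender build recent enough that Image.pixels supports foreach_get:
import bpy
import numpy as np

img = bpy.data.images['My16BitImg.png']
buf = np.empty(len(img.pixels), dtype=np.float32)
img.pixels.foreach_get(buf)                   # bulk copy instead of per-item access
print("distinct values:", len(np.unique(buf)))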
IMPORTANT EDIT. Maybe this is a well-known fact, but I feel obliged to let others know: Blender does not reload images when you 'Save As'. That means that after you save with the new depth, the image has the new depth on disk but not inside Blender, so you have to reload it; only then does it work. To sum up: create the new image ---> save with the new depth ---> reload ---> now you can use it. Again, maybe this is well known, but IMHO it is very surprising behavior, and I spent the last five hours figuring out what was going on. Cheers.
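The same create ---> save with new depth ---> reload sequence can also be scripted. A sketch, with 'Bake' as a hypothetical image name; it relies on save_render(), which writes the image using the scene's output settings, where the 16-bit choice lives:
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_depth = '16'   # bits per channel

img = bpy.data.images['Bake']                    # hypothetical name
img.save_render('/tmp/bake16.png')               # writes a 16-bit PNG to disk
img.filepath = '/tmp/bake16.png'
img.source = 'FILE'
img.reload()                                     # pick up the 16-bit data in Blender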
If you want the images to be part of the .blend file, you have to "Pack" them; then you don't need to reload anything. Read: When should I pack an image into the file? and blender.stackexchange.com/questions/1336/…
– cegaton
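Packing can likewise be done from the console; a one-line sketch with a hypothetical image name:
import bpy
bpy.data.images['bake16.png'].pack()  # embeds the image in the .blend file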