Hello.
First time posting on this forum. ComfyUI newbie here.
This post is regarding the ComfyUI workflow in NVIDIA's "3D Conditioning" blueprint.
Git repo: https://github.com/NVIDIA-Omniverse-blueprints/3d-conditioning ("Enhance and modify high-quality compositions using real-time rendering and generative AI output without affecting a hero product asset.")
NVIDIA tech blog (for context): How Generative AI Fuels Personalized, Brand-Accurate Visuals With OpenUSD
This concept of 3D conditioning for precise visual image generation is a game changer, particularly for generating marketing content from brand-approved 3D digital twins of hero assets. However, I feel there is still room for improvement before this workflow is practical in an enterprise marketing setting.
So I have two questions about the "3D Conditioning" blueprint (full name: NVIDIA Omniverse Blueprint: 3D Conditioning for Precise Visual Generative AI):
1. How can I impose consistent lighting on the "hero asset", the other objects, and the background?
- In the ComfyUI workflow, the RGB render of the "hero asset" is masked out, left untouched, and later merged with the generated background (along with the other objects in the scene). A rough sketch of how I understand this compositing step is included after this list.
- The resulting image looks unnatural because the generated background and the "hero asset" have different lighting.
- Is there a way to impose consistent lighting on both the “hero asset” and its surroundings?
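For reference, here is a minimal Python/Pillow sketch of how I understand the compositing step. This is my own reconstruction, not the blueprint's actual node graph, and the file names are placeholders. Because the hero pixels are pasted verbatim, they keep whatever lighting they were rendered with, regardless of what the diffusion model puts in the background:

```python
from PIL import Image

hero_rgb = Image.open("hero_render.png").convert("RGB")     # untouched hero render
background = Image.open("generated_bg.png").convert("RGB")  # generated background
hero_mask = Image.open("hero_mask.png").convert("L")        # white = hero pixels

# Hero pixels are taken verbatim where the mask is white, so the hero keeps
# its original render lighting no matter how the background is lit.
composite = Image.composite(hero_rgb, background, hero_mask)
composite.save("composite.png")
```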
2. How can I generate a "clean contour" around the "hero asset"?
- Below is a sample image generated using the blueprint. The "hero asset" itself is unaltered, as intended, but a jagged contour is also generated around the masked area (see the highlighted region in the image below).
- Is there a way or a workaround to prevent this from happening? One idea I have sketched after this list is feathering the mask before the merge, but I don't know if that is the right fix. Any help would be much appreciated!
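The only workaround I could think of so far is to feather (blur) the binary mask before compositing, so the hero's edge blends into the background instead of producing a hard seam. This is purely illustrative and not from the blueprint; the blur radius is a guess that would need tuning:

```python
from PIL import Image, ImageFilter

hero_rgb = Image.open("hero_render.png").convert("RGB")
background = Image.open("generated_bg.png").convert("RGB")
hero_mask = Image.open("hero_mask.png").convert("L")

# Soften the mask edge with a small Gaussian blur; radius=3 is a guess and
# would need tuning for the image resolution.
feathered = hero_mask.filter(ImageFilter.GaussianBlur(radius=3))

# Intermediate gray values in the mask now act as per-pixel alpha, so edge
# pixels become a blend of hero and background instead of a hard cutoff.
composite = Image.composite(hero_rgb, background, feathered)
composite.save("composite_feathered.png")
```

I suspect this only hides the seam rather than removing the jagged pixels the model generates around the mask, so I would still appreciate a proper solution.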