Artists have long wished for deeper levels of control when creating generative imagery, and ControlNet delivers that control in spades. Let's build a video-to-video AI workflow with it to reskin a room.
Rudimentary footage is all you need; with this new software, a creator can weave magic.
If you already have a 3D scan from step 1, why use MiDaS to estimate depth in step 2? The 3D scan essentially gives you the depth already, right?
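The question is a fair one: if the scan already captures geometry, a depth map can in principle be rendered directly from it rather than estimated. As a minimal sketch (assuming the scan yields 3D points in camera coordinates; the function name, pinhole parameters, and z-buffer approach here are illustrative, not from any particular scanning SDK), projecting scan points through a pinhole camera model gives a per-pixel depth image of the kind ControlNet's depth conditioning expects:

```python
import numpy as np

def depth_from_points(points_cam, fx, fy, cx, cy, width, height):
    """Render a depth map by projecting 3D points (in camera coordinates)
    through a pinhole model, keeping the nearest point per pixel (z-buffer).
    Illustrative sketch only; names and parameters are assumptions."""
    depth = np.full((height, width), np.inf)
    for x, y, z in points_cam:
        if z <= 0:          # skip points behind the camera
            continue
        u = int(round(fx * x / z + cx))   # project to pixel column
        v = int(round(fy * y / z + cy))   # project to pixel row
        if 0 <= u < width and 0 <= v < height:
            depth[v, u] = min(depth[v, u], z)  # nearest surface wins
    depth[np.isinf(depth)] = 0.0               # mark no-data pixels
    return depth

# A single point 2 m straight ahead lands at the principal point (cx, cy).
d = depth_from_points([(0.0, 0.0, 2.0)], fx=500, fy=500, cx=32, cy=32,
                      width=64, height=64)
print(d[32, 32])  # → 2.0
```

In practice, an estimator like MiDaS is often used anyway because it works from the video frames alone, stays aligned with each frame without needing camera pose registration, and fills in regions the scan missed.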