Why is it that......?


  • #2070783
    montanablueskies
    Participant
    Rank: Rank-1

    As hard as I try, and with close comparison to photographs, my characters only resemble their real-life counterparts at certain angles and in certain lighting. It even seems like a model that isn't exactly conformed to the RL image may actually render a better lookalike than one that conforms very closely to the original. Is that my brain doing some association trick? You'd think that if the shape and texture were right, it would render consistently looking like the RL image. Sometimes I light the scene and turn the head to a certain angle and bingo, it's almost perfect, but from another angle I might never be able to light it to get that same realism. Or is it that people in real life only look like themselves at certain angles and lighting, and our brains do a trick?

    #2070789
    Frank21
    Participant
    Rank: Rank 4

    Very good questions about making and rendering look-alikes. The human brain is very adept at picking up on barely perceptible (subliminal?) nuances. It's why we can recognise a familiar person from a long distance, or a face from just the eyes or partial features, or mentally reconstruct split words. As you say, the brain is doing its association trick.

    So conversely, when there is a mismatch, like a spelling mistake, we know it doesn't look right before we comprehend what the error is. I guess the same applies to facial features that don't look right. It only takes one feature to be wrong or missing for the brain to reject an association. The most instantly recognisable 3D characters are semi-caricatures like Leon, because they emphasise (over-emphasise) the features the eye/brain is searching for.

    With 3D models there is a limited number of polys to work with in the first place, while AI like deepfakes or generative models are trained on dozens or hundreds of images from different angles and lighting environments, so realistic likenesses of people are much easier to achieve.

    #2070802
    montanablueskies
    Participant
    Rank: Rank-1

    @Frank21:

    Yes, that's what I've found. My favorite versions of my #1 character are semi-caricatures. I know when I focus on the overall shape that it's not quite right, or not detailed enough (smoothed), but when I render them the sense of it being the RL character is much better than if it had been technically superior in form. Still, the perspective and lighting are paramount for the best realism.

    #2070817
    Frank21
    Participant
    Rank: Rank 4

    Lighting is supremely important for realism, but the model is the deal-breaker. Because I got into both realism and look-alikes, after 20+ years of Poser/Daz 3D I realised that 3D was a losing battle. Not saying it doesn't have its particular place. Generative AI has innumerable issues, but for someone who wants images to "resemble their real life counterparts" it's the only realistic option at the moment. Why make a rod for your own back by clinging to an ageing technology that doesn't do what you want it to do?
    Stunning realism can be achieved with 3D, but it's the preserve of people who aren't like us and aren't using pre-fab shit like Daz/Poser. Note that CGSociety shut down in Jan. in recognition that AI is the new future. Maybe sad, but change is life's only constant, and its common denominator.

    #2070839
    Legolas18
    Participant
    Rank: Rank 7

    I think our brain has several integrated mechanisms to check whether something is real or not (mostly to avoid being fooled by mimetic predators, but also to be able to quickly identify threats, especially in low-light conditions).

    In other words, you'll have to check several boxes to make something look as realistic as possible:

    - Morph details: Here you obviously need highly detailed models. It's no surprise that, even with the best artistic skills, the best 3D models are the photoscanned ones (since they have tons of extra imperfections and details that artists typically don't notice). Obviously, DAZ models tend to lag behind in this (a lot), especially in terms of body details.
    - Skin details & colors: Again, human skin has tons of imperfections, and an artist would have to spend a lot of time to replicate something like real skin. In this department Daz seems to have done more work, since they have been using some photoscans on their main characters, although their textures tend to be a bit too generic and don't include certain tone differences across different body parts.
    - Lighting: The reason you can get good results at some angles is probably that shadows are hiding a certain amount of detail, so your brain isn't looking for it. Obviously, the less detailed your models and textures are, the more "clever hiding" you have to do to give your render a sense of depth and realism.

    On top of that, a lot of the accessories/hair/etc. on Daz are really low quality, making your life even harder.

    TL;DR: DAZ stuff is light years behind what 3D can actually do, but the "best" is pricey and requires technical skills. Also, it's very hard to beat AI's "free" and "easy" (although it has its own limitations, things are advancing very fast there).

    Here is a practical example of a DAZ vs. 3D-scanned vs. AI character (images attached):

    - Cheyenne 9
    - 3D photoscanned face (morph + textures)
    - Good Daz clothing
    - Photoscanned clothing
    - AI

    #2070844
    Stahlratte
    Participant
    Rank: Rank-1

    Pretty much everything @Legolas18 said.
    A proper laser scan beats a manual sculpt every time.

    Also, the uncanny valley is a big problem. EVERYTHING has to be JUST RIGHT, or the illusion is gone immediately.
    (That's why I'd rather stay safely on THIS side of the valley, trying to find my own kind of "neither toon nor photorealistic" "Poser aesthetic".)

    But even with an app like Studio you can come pretty close sometimes:

    Look at the renders of jeff_someone in this thread:
    https://www.daz3d.com/forums/discussion/313401/iray-photorealism

    https://www.daz3d.com/forums/uploads/FileUpload/60/4c44d004f210b07b86436aaf7380ed.jpg

    https://www.daz3d.com/forums/uploads/FileUpload/16/732bd1ba1a3acd535c40fce2e8d197.jpg

    https://www.daz3d.com/forums/uploads/FileUpload/21/cf9ce668dfd989f5c2b73be8395d22.jpg

    https://www.daz3d.com/forums/uploads/FileUpload/7f/433f81bef193cbe439b0897dac8404.jpg

    As I said, I'm not into photorealism, but I still wish I could squeeze out at least one Poser render like that someday.

    😉

    #2070867
    Legolas18
    Participant
    Rank: Rank 7

    @stahlratte I would like to point out that these renders use the clever technique of "hiding details to fool the brain" by introducing some white noise, a blur effect, and chromatic distortion.
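
    If anyone wants to play with that idea in post, here's a minimal Python sketch (just an illustration of the general trick, not anything taken from those renders): a touch of Gaussian blur, some grain, and a one-pixel red-channel shift for fake chromatic fringing. Filenames and amounts are placeholders.

        import numpy as np
        from PIL import Image, ImageFilter

        def degrade_for_realism(path_in, path_out, grain=8.0, blur_radius=0.6, fringe_px=1):
            # Values are arbitrary starting points -- tune to taste.
            img = Image.open(path_in).convert("RGB")

            # Slight blur softens the too-perfect CG edges.
            img = img.filter(ImageFilter.GaussianBlur(blur_radius))

            arr = np.asarray(img).astype(np.float32)

            # Gaussian "film grain" noise.
            arr += np.random.normal(0.0, grain, arr.shape)

            # Crude chromatic fringing: shift the red channel sideways by a pixel.
            arr[:, :, 0] = np.roll(arr[:, :, 0], fringe_px, axis=1)

            Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8)).save(path_out)

        degrade_for_realism("render.png", "render_noisy.png")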

    I even saw a computer game doing something similar with excellent results.

    As you say, the other option is to make things obviously non-photorealistic, so as to change the brain's expectations (for example, Alita: Battle Angel).

    In other words, if you can't beat it, cheat? 😁

    #2070879
    montanablueskies
    Participant
    Rank: Rank-1

    What is a good AI engine for creating images from text? Or can you start with an image and have the AI improve on it based on suggestions?

    #2070880
    montanablueskies
    Participant
    Rank: Rank-1

    @Legolas

    I think you're right about the "hiding" technique. Maybe just here and there in an image. In RL no light source produces perfect illumination, and sometimes the fact that it's too good will alert your eye. For example, renders often look better while there is still some grain in them.

    Also, I think I've been fooled by mimetic predators a few times. Or at least that's my defense.

    Curiously, the brain seems to look for "favorite" people. When they are lost to death, distance or the errant word, it seizes on slight similarities in other people to alert you that they have returned. I've caught myself speaking to someone before I realize my brain is deceiving me. Damned brains, why do we need them anyway?

    #2070881
    Grouchy Old Fart
    Participant
    Rank: Rank 3

    This is why I love staying on the toon side of things, with just enough realism thrown in to make them fully human. I do this very much along the lines of what @legolas18 explained, which does work very, very well.

    I do have (not photo- but merely-) realistic figures, but I intentionally go out of my way to not make them look like anyone I see in real life. Why? Because I tried quite a few times over the years to make figures that match individuals, and it never worked. I either ended up disappointed and frustrated, or my brain decided "hey, let's make a better version of that person instead!", and the figure's shape and face wandered off into its own style anyway. Also, I can give a better backstory to a figure that doesn't look like anyone in particular - I can give them their own personalities without expectations.

    As far as where it's all going, I don't know yet. I hope AI can let me bring in the characters and shapes I've come to know and love, but bring more powerful tools to bear on them for more fantastic results. It's in its infancy still, but it looks very, very promising, and I want to see it evolve (and more importantly, reach out to our little corner of the hobbyist world.)

    My greatest worry, though, is that I end up like the crusty old holdouts you see in the Renderosity forums: clinging desperately to outdated software and techniques and, worst of all, cranking out ad infinitum the same bland aesthetics that were once an unavoidable necessity of CG but are now just kitsch - and not even the good kind.

    And yet, at least a bridge to use all of the existing goodies would be nice. I have a ton of stuff that should have been deleted years ago, yet I occasionally bring one of them out, dust it off, modernize it(!), and it's like brand new again.

    Funny, that.

    #2070889
    Ethiopia
    Participant
    Rank: Rank 3

    A point I tried to make earlier is something I read about a while back. Our musculature changes the shape of our face as we talk and our expression changes. The best 3D scan only captures our face's shape at that particular instant. There must be a lot of other subtle clues and features that let us recognize a particular person.

    I'm not 100% behind AI's ability to create models that resemble anyone in particular...yet.
    I go through Civitai every morning and models are all over the place. A few are spot on from most angles, but most break down as soon as the head rotates away from straight ahead. Still, it's a lot better than what we get in Daz.

    #2070899
    montanablueskies
    Participant
    Rank: Rank-1

    I think you're onto something there. Things like using the "look" morphs to change the eye-socket shape, and moving the jaw side to side, can make big changes in realism. We need a better "emotion" system for sure.

    #2071021
    Legolas18
    Participant
    Rank: Rank 7

    @ethiopia You are quite right about expressions. The best way to deal with that would be to have photoscanned morphs for each expression, just like Daz does in some of its products (I mean custom morphs for each expression, though they do them manually).

    Thankfully, these photoscanned emotions do exist:

    As for AI, I agree with you, it's still a work in progress, but the progress is fast (the SD3 API was released today, something like a public beta; in a few weeks it's going to be released on HuggingFace too), which is less than a year after the last version.

    #2071022
    Legolas18
    Participant
    Rank: Rank 7

    @Montanablueskies That depends on your computer. If you have a beefy one, I'd use the program Forge (which is a more efficient version of Automatic1111), and as for a model, that depends on what kind of images you want to make. I'm doing mostly anime nowadays, so for me it's a Pony XL merge called AutismMix Confetti (you need a strong GPU since it's based on SDXL).
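
    If you'd rather script it than use a GUI, here's a rough sketch using the Hugging Face diffusers library that covers both text-to-image and "start from an image and improve it" (img2img). The model checkpoint, prompts and settings below are only examples, not a recommendation of any specific setup.

        # Sketch: text-to-image and img2img with an SDXL-based model via diffusers.
        # Assumes a CUDA GPU with plenty of VRAM; model id and values are examples only.
        import torch
        from PIL import Image
        from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

        model_id = "stabilityai/stable-diffusion-xl-base-1.0"  # example SDXL checkpoint

        # Text-to-image: describe the character in the prompt.
        txt2img = StableDiffusionXLPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
        image = txt2img(prompt="portrait photo of a woman, soft window light",
                        num_inference_steps=30).images[0]
        image.save("txt2img.png")

        # Img2img: start from an existing render and let the model rework it.
        img2img = StableDiffusionXLImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
        init = Image.open("my_daz_render.png").convert("RGB")  # placeholder input render
        reworked = img2img(prompt="realistic skin texture, natural lighting",
                           image=init,
                           strength=0.4).images[0]  # lower strength keeps more of the original
        reworked.save("img2img.png")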

    "Curiously, the brain seems to look for "favorite" people."

    I think that has to do with neural pathways. The more something is repeated in our brain, the better established (and preferred) the pathway becomes. For example, we typically like things we are used to, like the color cyan, since it's the color of the sky. 🙂

    Or music that has been listened to way too often (that's why radio stations keep hammering the same songs, and ads the same products, over and over again).
