Ai Dreams Forum
Chatbots => Avatar Talk => Topic started by: Art on January 11, 2020, 06:23:18 pm
-
Although it is an interesting video, the summary based on the reports from CES makes its debut almost anti-climactic. Yes, hopefully it will be better for CES 2021.
https://www.youtube.com/watch?v=iAY2LxaFtaM (https://www.youtube.com/watch?v=iAY2LxaFtaM)
-
I think that the photorealism is the easier part these days. The lip sync isn't great and notice how they all have short hair.
Interesting though, hopefully they will push the bar up a bit.
-
The lip sync isn't great and notice how they all have short hair.
Watching the video I've seen at least two so far who had long hair: the woman on the right at 1:50 and the woman on the left at 1:53. The images are all based on real people and are deep fakes rather than fictitious humans generated randomly by artificial intelligence like we have seen elsewhere. The difference here is that the actions of the deep fakes are controlled by chatbots rather than human directors, and not very good ones.
It seems more like a polished but cobbled together marketing gimmick than anything especially novel and interesting. I guess if it gets traction it might turn into something eventually. Another example of "fake it till you make it" maybe.
-
I'd love to see them make some photoreal aliens and monsters using this technology.
-
I know, right! This whole recreating-people thing is tricky, not really an achievable pursuit. It's much more rewarding to make something that is not an incomplete imitation, but rather the beginning of something original. Imagine you could be chatting with a dragon, the current best of its kind. That type of project would have an exciting future.
-
https://www.youtube.com/watch?v=TxErDzsIdKI (https://www.youtube.com/watch?v=TxErDzsIdKI)
Deep fake alien generated in real-time.
-
The lip sync isn't great and notice how they all have short hair.
Watching the video I've seen at least two so far who had long hair: the woman on the right at 1:50 and the woman on the left at 1:53.
That's the real people. Listen to the commentary at that point.
The reason I mention it is that flowing human hair would take a lot of processing power to look photo realistic in real-time.
-
That's the real people. Listen to the commentary at that point.
Yes that's the point that I was trying to make, albeit badly apparently. They were all based on real people.
The reason I mention it is that flowing human hair would take a lot of processing power to look photo realistic in real-time.
Agreed.
-
These models would be good for presenting your work, and I noticed they're quite stereotyped: the peace woman, the laid-back black guy, the teacher-geek look, the family dad... or maybe that's just my biases? Oof.
I suppose the clothing changes you a lot.
-
Yup, I also had such an initial reaction, can't help it. That's why I say people are tricky. We are too self-involved not to bring our cultural baggage into play. No one's gonna be able to relax.
-
That's the real people. Listen to the commentary at that point.
Yes that's the point that I was trying to make, albeit badly apparently. They were all based on real people.
Sorry we got our wires crossed. I was referring to the finished avatars, in that they all had short hair :D
-
Well, technically we are made of context and are affected by local context most directly, so yes, it rubs off. But technically some people do have to be teachers, dads, creeps, newscasters, and peacemakers. At least when you do one for long enough, you just seem to be doing that one currently; it's not that you are stuck as 'X'.
-
They have put a lot of work into it, but I don't see them getting many users or much attention with modestly dressed bald chicks with nose rings.
-
Perhaps they were just trying to showcase what's possible, something or someone for everyone kind of approach. Just a thought.
I've recently seen some amazingly human-like 3-D characters, so close one could easily see the pores in their faces; only on backing away was one reminded that they weren't real. The graphics also had much longer hair that must have been produced and rendered in long strands, then "grouped" together to appear as one head of hair. There was also no noticeable drag, lag, or slowdown from a graphics standpoint.
The human-like characters will be totally lifelike and indistinguishable from real people easily within the next 5 years!! (IMHO).
-
Not everybody has your tastes, Yotamarker. I, for instance, prefer my virtual women (and men) modestly dressed.
On the non-human front, I was pretty impressed by this goat I stumbled across the other day. https://twitter.com/spyxx/status/1215797433512464387 Somebody should get him talking, that'd be fun.
-
Yotamarker's got it good, bro. Keep it that way.
-
Unfortunately, the NEON avatars were real people rather than CGI. Nothing to see here. Move along please.
https://venturebeat.com/2020/01/14/ai-or-bs-distinguishing-artificial-intelligence-from-trade-show-hype/
(http://www.square-bear.co.uk/temp/neon_article.png)
-
One of the videos does, however, show a very poor quality lip-sync demo, so as far as I can tell this is the same thing as last year's digital Chinese anchorman: real-life footage with only a generated mouth transposed. That is, I know this is possible, but I can't tell from the surface whether the mouth was generated with AI (which usually has a blurrier effect) or just photos manually mapped to vowels. Any more confirmation on this?
-
OMG, I believed it; now I don't. Thanks.
-
Came across this today: https://www.cnet.com/how-to/samsung-neon-artificial-humans-are-confusing-everyone-we-set-record-straight/
Entitled "Samsung's Neon 'artificial humans' are confusing everyone. We set the record straight".
These next-gen AI chatbots promise to keep your secrets, teach you yoga and help you find a great restaurant. Just don't ask about the weather.
A robot that brings you more toilet paper when you run out. Samsung's softball-shaped robot named Ballie. Of all the digital pals shown off at CES last week, none were quite as uncanny as Neon, the humanlike "life form" funded by a Samsung lab (but don't confuse it with a Samsung product). In a practical sense, consider it a chatbot with a human face and personality.
-
What seems to be missing, and I haven't found any solution for it, is a framework that builds the concept of a virtual head with muscles. All of the anthropomorphic face animations are based on memorized facial expressions that have to be captured from a human being. What would be better is a rig that uses a concept of muscles to pull and stretch polygons, where virtual skin follows algorithmic kinematic principles so it reacts to muscles pulling, tugging, and stretching the way living skin would. This way you could regulate and fine-tune facial expressions programmatically, with no need to capture the expression from an actual person, which is time-consuming and expensive...
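To illustrate the idea, here's a minimal sketch of what such a muscle rig could look like: each muscle pulls nearby skin vertices along a direction, with influence falling off with distance from its attachment point. All names (`Muscle`, `apply_expression`) and the Gaussian falloff are my own assumptions for the example, not taken from any existing engine.

```python
import math

class Muscle:
    """A simplified linear 'muscle' (hypothetical rig, not any real engine):
    it pulls nearby skin vertices along a fixed direction, with influence
    falling off with distance from its attachment point (Gaussian falloff)."""
    def __init__(self, attach, pull, radius):
        self.attach = attach    # (x, y, z) attachment point on the skull
        self.pull = pull        # (dx, dy, dz) pull vector at full activation
        self.radius = radius    # falloff radius controlling area of effect

    def weight(self, vertex):
        """Influence on a vertex: 1 at the attachment point, ~0 far away."""
        d2 = sum((v - a) ** 2 for v, a in zip(vertex, self.attach))
        return math.exp(-d2 / self.radius ** 2)

def apply_expression(rest_vertices, muscles, activations):
    """Deform the rest pose by summing each muscle's weighted pull.
    `activations` are per-muscle values in [0, 1], tunable programmatically."""
    deformed = []
    for v in rest_vertices:
        out = list(v)
        for m, a in zip(muscles, activations):
            w = a * m.weight(v)
            for i in range(3):
                out[i] += w * m.pull[i]
        deformed.append(tuple(out))
    return deformed

# Tiny demo: a 'smile' muscle at the mouth corner pulling up and outward.
rest = [(0.0, 0.0, 0.0),   # mouth corner
        (0.1, 0.0, 0.0),   # nearby skin vertex
        (2.0, 0.0, 0.0)]   # far vertex, essentially unaffected
smile = Muscle(attach=(0.0, 0.0, 0.0), pull=(0.02, 0.05, 0.0), radius=0.5)
deformed = apply_expression(rest, [smile], [1.0])
```

A real version would also need skin elasticity so neighbouring vertices drag each other along, but even this crude per-muscle activation already gives a continuous expression space instead of a fixed set of captured poses.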