Somehow selecting pieces of code that move millions of pixels very fast leads to sentience...? Good luck with that. AI companies love to inflate their stuff.
@calvinwayne3017 · 3 months ago
is it just connecting you to NVIDIA Audio2Face or did you make your own lip-sync?
@metasoulone · 3 months ago
No, we don't use Audio2Face. We calculate the EmoMatrix and control the FACS, then adjust the voice in real time and in nuances so it emotes accordingly. You could also feed the voice that now emotes into Audio2Face to generate a different lip-sync and facial expression. It's your choice.
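For readers unfamiliar with FACS: it decomposes facial expressions into numbered Action Units (AUs), so an emotion and its intensity can be mapped to a set of AU weights. A purely illustrative sketch of that idea (the weight values and the `emotion_to_aus` helper are invented for this example; MetaSoul's actual EmoMatrix is not public):

```python
# Illustrative only: map an emotion intensity (0.0-1.0) to FACS Action
# Unit weights. AU6 (cheek raiser) and AU12 (lip corner puller) together
# form a genuine smile. The weight values here are invented.
BASE_AUS = {
    "happy":   {"AU6": 1.0, "AU12": 1.0},
    "sad":     {"AU1": 1.0, "AU4": 0.6, "AU15": 1.0},
    "disgust": {"AU9": 1.0, "AU15": 0.5},
}

def emotion_to_aus(emotion: str, intensity: float) -> dict:
    """Scale the base Action Unit weights for `emotion` by `intensity`."""
    base = BASE_AUS.get(emotion, {})
    return {au: round(w * intensity, 3) for au, w in base.items()}

# "20% happy" becomes a faint Duchenne smile:
print(emotion_to_aus("happy", 0.2))  # {'AU6': 0.2, 'AU12': 0.2}
```

A real pipeline would then feed these AU weights into a rig's blendshape or control-rig inputs each frame.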
@metasoulone · 3 months ago
Feel free to download the demo
@yahbin77 · 1 month ago
@@metasoulone Very interesting approach. The Maker bless you.
@shoukaiser · 3 months ago
Nice. This video is more helpful. Now can we insert our own speech samples, or connect it to another AI voice-generation API, and have it work with those?
@metasoulone · 3 months ago
For now, only Microsoft voices can emote in nuances in real time (20% happy, then 45% happy 200 ms later, etc.). Using our EmoMatrix you could control Polly's voice emotionally via SSML, for example, but it would not be nuanced: it would be fully happy or fully sad, and so very jumpy. The plugin can output 64 trillion emotion nuances every one-tenth of a second. The voice that carries the emotions could even be used in Audio2Face or Audio2Photoreal, etc., to generate full-body animation.
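For context, the kind of nuanced SSML control described above looks roughly like this with Azure neural voices, which accept a fractional `styledegree` on the `mstts:express-as` element. A minimal sketch (the voice name, style, and degree values are illustrative; the plugin's actual output format is not public):

```python
# Sketch: build SSML that requests an emotion style at a fractional
# intensity, as Azure neural voices support via mstts:express-as.
# Voice name and values are illustrative only.
def build_ssml(text: str, style: str, degree: float) -> str:
    return (
        '<speak version="1.0" '
        'xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">'
        '<voice name="en-US-JennyNeural">'
        f'<mstts:express-as style="{style}" styledegree="{degree}">'
        f'{text}'
        '</mstts:express-as></voice></speak>'
    )

# "45% happy" as a graded cheerful style:
ssml = build_ssml("Hello there!", "cheerful", 0.45)
print(ssml)
```

By contrast, engines whose SSML only toggles a whole emotion on or off can't interpolate between intensities, which is the "jumpy" behavior described above.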
@shoukaiser · 3 months ago
@@metasoulone I appreciate the tag to this video and the responses! So the workaround, if I really don't like the voice output (going by the demo), is to do the voice work in MetaSoul, generate the great expressions and lip sync it does, then do the work to dub over it with audio from another source. I couldn't use the audio in this demo video in a video for a client; it's too synthetic and dated in sound. I don't want to be harsh or anything like that. MetaSoul still looks like it will be great to try very soon for my needs.
@metasoulone · 3 months ago
The tech here is about real time, so you can talk live to the MetaHuman; download the link and try the demo. It drives a MetaHuman powered by OpenAI in real time with a persona. Yes, you can still dub over it and lose the real-time aspect, or wait until we can achieve the same emotion control with ElevenLabs.
@metasoulone · 3 months ago
The voice here expresses emotion in nuances, like 20% happy or 40% happy, in real time, using the real-time emotional states of the MetaHuman that you can see on the left side of the video.
@shoukaiser · 4 months ago
I want to be very interested and excited for this, but the website and Microsoft Store page were just not helpful enough. There's not enough raw, usable "this is what it is and what it looks like to work with." I work in the 3D and AI avatar space, I'm tired, busy, and basically I don't get it... much.
- How does it actually work? As in, if I and animators were going to use this, what would we need to know and do? I.e., what does the pipeline look like?
- Does it handle lip sync, or what would you pair with it?
As someone who can be a bit cerebral and up in the clouds, I feel like a more practical, down-to-earth video (and website) would go a long way in helping this hit home and sell it. Also, the captioning not matching what's spoken, or not matching the timing well at times, is super distracting. That, combined with the ethereal but also sleepy presentation, makes this video feel dramatically longer than some of the 15-30 minute videos I've watched today in the realm of UE content. It seems like something great may be here, but /what is it???/
@metasoulone · 3 months ago
kzread.info/dash/bejne/eZubwaajdZa2ldI.html
@christopherjimenez5537 · 3 months ago
@@metasoulone "MetaSoul Unreal Engine 5.3 Plugin For Metahuman" is self-explanatory: it just adds a layer of scripted emotions to MetaHuman. (MetaHuman is a large and incredible Unreal 5.3 add-on that handles human creation, customization, lip sync, and animation, and now you can add a subtle layer of scripted emotions with MetaSoul 1.0.)
@transhuman · 4 months ago
What is the way to integrate this with MetaHuman?
@metasoulone · 3 months ago
Check our new video; it has links to the demo and to the Unreal plugin.
@androwaydie4081 · 4 months ago
Can't wait for MetaPron.
@lucianodaluz5414 · 4 months ago
And... what does it do?
@Yuki-rh1ie · 4 months ago
hooooooly fuck this is just straight up sorcery! is it possible to use this as an additive to mocap? cause things like delivering a line still need to be captured, right? obviously it can be added to a mocapped body, right? and can you control where the eyes follow, or shutting the eyes? sorry if these are stupid questions.
@metasoulone · 3 months ago
kzread.info/dash/bejne/eZubwaajdZa2ldI.html
@MarvinXOnline · 4 months ago
Love everything about this except for the wholly false assertion at the end about bringing AI one step closer to sentient machines. The only thing this brings one closer to is the mimicking of such. Awareness is not coded. It is experienced.
@metasoulone · 4 months ago
Thank you for your interesting comment. Awareness and sentience are different; sentience is the capacity to feel or perceive, and this is what MetaSoul does. We do not claim consciousness; we leave this to OpenAI and Google. Yes, we bring machines one step closer to sentient machines by allowing the AI or robot to experience 64 trillion possible distinct emotional states every 1/10th of a second.
@MarvinXOnline · 4 months ago
@@metasoulone Lol... I do not believe you realize just how badly you contradicted yourself in the very same sentence. It's okay; I didn't really expect you to follow. I just wish y'all would stop making unsubstantiated claims to sell what otherwise looks to be very promising. Best of luck!
@metasoulone · 4 months ago
Well, we believe that emotions reinforce consciousness, but they are not consciousness.
@kompst_tu · 5 months ago
They still need to fix the blinking animations. Something about them looks so artificial.
@ge2719 · 4 months ago
and breathing. it doesn't appear to be breathing at all.
@schorltourmaline4521 · 5 months ago
Anyone else worried that half its emotions, even when it's being "happy", are "Disgust"?
@metasoulone · 5 months ago
Yes, it's possible to feel happy about something disgusting and even laugh about it.
@schorltourmaline4521 · 5 months ago
@@metasoulone Not the point that was being made, but good luck with your goal to create Skynet.
@stuckon3d · 5 months ago
This is very interesting. Is it possible to direct the actor via Sequencer in UE5 to get a repeatable performance, for example to create an animated short film and then render it out?
@RobertA-hq3vz · 5 months ago
This does not bring you one step closer to sentient machines, as stated. It just renders the facial expressions; there's no thought or emotion behind them.
@bladerunner_77 · 5 months ago
8 billion MetaSouls. Who or what is rendering this world? This is super crazy shit... or the snowflakes I'm watching right now out of my window. How is this electronic dream possible?
@mt_gox · 5 months ago
yeah, worth $99 🙄
@mxgn0 · 5 months ago
IM HERE BEFORE THE BLOWUP (when the normies arrive) :3
@pondeify · 6 months ago
if this is real it's going to be significant
@durbledurb3992 · 6 months ago
Put this beside the similar PlayStation 2 video from the late '90s. That was peak. Now we're just in marketing territory.
@fiery_transition · 6 months ago
Whenever I see companies like this using the Myers-Briggs test, which is pseudoscience, it loses all credibility. And trying to sell me a $100 metahuman bracelet or whatever the fluff they were trying to do on their webpage immediately pings the BS radar.
@Striker9 · 6 months ago
Welp. That's not creepy at all. ...cool, but creepy.
@mxgn0 · 5 months ago
then it's the right way, i swear, follow me
@importon · 6 months ago
Just have some nerd tell us what it does in no uncertain terms already. The only thing I learned from this is that you guys are really pleased with yourselves.
@metasoulone · 5 months ago
Discover the API on Microsoft Azure: azuremarketplace.microsoft.com/en-us/marketplace/apps/MetaSoul.metasoul-speech-microsoft-voices
@HakaiKaien · 4 months ago
I think the video does a pretty good job telling you what this does: it's a facial animation solution powered by AI.
@importon · 4 months ago
@@HakaiKaien "AI"
@metasoulone · 4 months ago
@@HakaiKaien But it does more than this: kzread.info/dash/bejne/kYen1bGTdbOfkto.html
@RemotelyHuman666 · 6 months ago
Nope. Don't like that.
@coralstudio6460 · 6 months ago
Holy macaroni! I was happy with the game tech advancements in NFS Underground 😂.
@HavocIsshadow · 6 months ago
I'd love to experiment with this on the game I'm building; can't afford it yet as I'm a new developer. But from what I've played with on the website, it seems really cool.
It is not a technology for games (running real-time).
@metasoulone · 6 months ago
It runs in real time.
@AnthonyPyper · 6 months ago
Can this work offline?
@metasoulone · 6 months ago
Originally, the EPU (Emotion Processing Unit) was developed as an SoC (System on Chip) to be implemented in a robot as a chip that works offline. Today, it's no longer in production.
@fiery_transition · 6 months ago
@@metasoulone My dude, as a technical person, your statement reeks of misdirection and weird claims.
@elganzandere · 6 months ago
*Sentient*?
@metasoulone · 6 months ago
Good question. People often confuse sentience with consciousness; the AI is not conscious but sentient: "Simply put, sentience means the ability to have feelings. It's the capacity for a creature or AI to experience sensations and emotions." The MetaSoul technology allows the AI to experience 64 trillion possible emotional states every 1/10th of a second. metasoul.one
@elganzandere · 6 months ago
@@metasoulone It wasn't a question, and I don't need you to recite the same bullet points I heard in the video. Machines mimic. Nothing more.
@PuppetMasterdaath144 · 6 months ago
Holy sheit, are you totally broken
@Talamander · 6 months ago
This is dystopia.
@Rem_NL · 6 months ago
It's just a bunch of meaningless buzzwords. This is nothing more than a poorly rendered human face displaying emotions. It could be better than pre-programmed NPCs repeating the same stuff over and over from a very limited set of choices; still, this is just the same, only the set of choices is bigger.
@metasoulone · 6 months ago
The core of the technology is emotion synthesis, which creates emotional states as responses, not sentiment analysis, which would always output the same response for the same input.
@Rem_NL · 6 months ago
@@metasoulone I don't think you have any idea what you are saying yourself.
@mt_gox · 5 months ago
@@metasoulone this is garbage
@mt_gox · 5 months ago
@@Rem_NL just some russians or asians trying to make money off some bullshit nothing
@goldennboy1989 · 6 months ago
Does this run locally?
@metasoulone · 6 months ago
Only the computation of the emotion synthesis is generated in real time, in the cloud.
@faithfultennysonidama6904 · 5 months ago
This is amazing. I have a future project I've been building, and with this, that project is becoming a reality. How can I get in touch with you guys?
MetaSoul Azure API: azuremarketplace.microsoft.com/en-us/marketplace/apps/MetaSoul.metasoul-speech-microsoft-voices
Games are begging for smart emotive characters.
@@faithfultennysonidama6904 [email protected]
The documentation link returns a 404 error.
You are right; we just fixed it. Thank you.