Chapter 151: [Text-To-Speech]
I haven't been able to speak for a few days because of a sore throat. In the end, it's all thanks to my older sister, or rather, because of her, that I've been spending these busy days.
[Ding Question]
I shook both hands once, as if flicking water off after washing them.
Answers were being posted one after another in the comments section.
>The thing I do after going to the bathroom
>Is it satisfying? (U.S.)
>Are you tired?
[Ding-dong! That's right! The correct answer: "Are you tired~?"]
I raised both my thumbs and index fingers, moving them as if in a close-up.
It's sign language for "correct answer".
So, today, I'm doing a sign language livestream.
By the way, I learned it using a language cheat just for this livestream.
>Iroha-chan, you're amazing for being able to do sign language
>I thought it was difficult, but surprisingly, I can understand it through the nuances
>Sign language in my country is quite different (U.S.)
As chats from international viewers suggest, sign language actually varies from country to country.
Today, what I'm doing in the livestream is what's known as "Japanese Sign Language".
[Haven't you all started to understand it somewhat?]
>It feels like a gesture game, and it's fun
>↑That was actually a specific sign language expression called a 'classifier'
>But the accuracy of hand tracking is amazing (U.S.)
[Yeah, it's really amazing. And the fact that this is considered home recording quality...]
I often use a 2D model in my regular streams, but today I'm using a 3D model just for this occasion.
I got help from a solo VTuber who's skilled in handling 3D equipment.
Recently, I can even use hand tracking with 2D models in "VTuber Studio", but...
When it comes to sign language, 3D is the way to go, no doubt.
>Especially recently, the development of 3D has been remarkable (Korean)
It made me realize I couldn't keep leaving everything to others like this.
I had to build an environment where I could stream even without their help.
That's when I hit on the plan I'm carrying out now: combining sign language with text-to-speech software.
I type the content on the keyboard, and it's read out on my behalf in a somewhat monotone voice.
I had intended to use the well-known "relaxed voice" for the text-to-speech.
However, one of the volunteers used AI voice synthesis to replicate my own voice instead.
Just what kind of pro are you, lurking out there in the wild!?
[There's a link in the description for "Monotone Iroha", so as long as it doesn't go against public decency, everyone can make it say whatever they like. If you want to hear monotone songs, feel free to create them yourselves.]
>Haha, Ange was the first to use it and uploaded a video. That's hilarious! (U.S.)
>I see. She's already back in America
>So whether Iroha is saying "I want to see Ogu soon" is a matter of interpretation? www
[Ogu, are you ready for punishment next time?]
>Oh!
>I was just speaking on behalf of the voiceless Iroha (U.S.)
>It's like I'm always in the comments section LOL
[That's right, for now, it's only available in Japanese, but they're planning to create international versions, including English, in the future.]
>For real!? (U.S.)
>Thank you, thank you so much! (Korean)
>The international viewers must be thrilled!
>I just had a brilliant idea. Let's use this for the translation device's voice.
>↑Genius, right???
>Will there come a time when every family has an Iroha!?
[Of course, let's keep it reasonable, shall we?]
The chat section seemed unusually lively for some reason.
Could it be that things might get weird in the future...? Nah, that won't happen, right?
I had a somewhat forced smile on my face.