
In the last week alone, the Roblox dev team has showcased numerous changes and upgrades to its gameplay and creation systems, all designed to make the platform more "accessible" and "fun," and to help "spread the love" for the creators who have built games and entire worlds for other players to enjoy. One of the biggest was the announcement of a 3D AI tool that'll help creators build whatever they want for their games or worlds. But they're not stopping there: in a new blog post, the team revealed a text tool that lets creators write starting dialogue for their NPCs, then hand things off to an AI so those NPCs can talk with players.
The beta for this has gone live, and the team has explained how it can be used:
“Today, we are launching the beta release of the Text Generation API, a highly requested feature that opens up new interactive possibilities for your users to try in your experiences, such as NPC dialogue. For example, you can create fully interactive NPCs, such as a quest giver, who can have a dynamic conversation with a user, or an interactive tutorial where a user could ask questions about how to play your game.”
It's not hard to see why such a thing would be a clever addition to Roblox. After all, creators can only do so much with the tools they have, and making NPCs that genuinely "talk" hasn't been one of them. Using this AI to seed NPCs with opening phrases, and then letting the model build off of them, could make for a more "immersive" experience overall.
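To picture the pattern the blog post describes, that is, a creator-written seed that the AI then continues in conversation, here's a rough Luau-style sketch. To be clear, this is illustrative pseudocode: the service and method names below (`TextGenerationService`, `GenerateTextAsync`) are hypothetical placeholders, not the actual Text Generation API, and the real beta's calls will differ.

```lua
-- Hypothetical sketch only: the service and method names here are
-- placeholders, not the real Text Generation API surface.
local TextGenerationService = game:GetService("TextGenerationService") -- hypothetical service

local questGiver = workspace.QuestGiverNPC

-- The creator-authored seed: who the NPC is and how it should behave.
local seedPrompt = [[
You are Old Maren, a quest giver in a fishing village.
Greet the player warmly and offer them the "Lost Lantern" quest.
Stay in character and keep replies under two sentences.
]]

-- Called whenever a player sends a chat message to the NPC;
-- the AI continues the conversation from the creator's seed.
local function onPlayerMessage(player, message)
	local reply = TextGenerationService:GenerateTextAsync({ -- hypothetical method
		prompt = seedPrompt,
		userInput = message,
	})
	questGiver.DialogueGui.Label.Text = reply
end
```

The point of the sketch is the division of labor the post describes: the creator supplies the starting material (the seed prompt), and the model handles the open-ended back-and-forth with the player.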
However, some of you might be wondering about users who could "abuse this" in various ways, which is a rational fear. Thankfully, the team has built safeguards into the beta:
“To ensure all text outputs are safe, we have taken an extra step to train the models we leverage to align with our best practices for safety and civility. In addition, all text inputs and outputs are proactively moderated by Roblox’s AI safety systems to ensure the content does not violate Roblox Community Standards. Our safety tools can surface any policy violations quickly and help determine what is safe and appropriate to publish in an experience. Developers will not be responsible for potentially abusive outputs from the LLM unless they program or prompt the LLM to respond with a violation.”
Time will tell if this idea truly is a smart one.