We’re building a toy company
Earlier this year, Bryce, Michelle, and I came together to build Muse, a multimedia ecosystem empowering kids with generative AI for play and storytelling.
So far, we’ve prototyped two pieces of our ecosystem: an app (try out our beta! joinmuse.ai/beta) and an AI conversational plushie. A key feature of Muse is the ability to design your own characters with generative AI, and the plushie invites you to take one of these custom characters into the real world and converse with it as it live-updates with new stories made with that character in the app. We have yet to see this in a toy: conversation design going beyond voice flows and into live, evolving worldbuilding.
We’re often challenged with the question of “what are you?”, in the sense that most of the time, people want one concise, digestible, tangible answer. But of all the boxes we could choose to fit ourselves into, such as “AI product” or “edtech”, we’re choosing to call ourselves a toy company.
The job that brought my dad to the US was working on hardware for a toy company. It is with joy, comfort, and pride that I can say I’m trying to start a toy company now too.
The driving vision behind the companion toy is our belief that storytelling has the power to help us better understand ourselves + our world. More than an occasionally playful novelty, we want to push how a conversational AI toy can encourage storytelling that is meaningful and valuable. Conversation as a means to evoke reflection, emotion, memory, and wonder, which is then further explored through story.
Our process:
1. We began prototyping with a plushie gifted to one of our friends by her ex. So, we didn’t feel bad at all gutting it and stuffing some technology inside.
In this iteration, we prototyped tactile input (needing to hold down on the paw) and visual feedback (a light on the chest that shone to the pattern of the voice). We put together a soft button in the paw with conductive fabric on either side of a makeshift spacer ring of poly-fil and yarn. The light was an RGB LED, affixed with conductive thread, behind a star filled with poly-fil.
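For a sense of how little logic this first prototype needed, here’s a rough sketch in the style of CircuitPython on a sewable board. The pin assignments, and the idea of estimating loudness from an analog microphone as a stand-in for the toy’s own speech level, are illustrative assumptions rather than the exact build:

```python
# A rough CircuitPython sketch of the paw button + voice-reactive chest light.
# Pins, the sewable board, and the analog-mic loudness estimate are all
# illustrative assumptions, not the exact parts inside the plushie.
import time
import board
import digitalio
import analogio
import pwmio

# Soft button: two layers of conductive fabric that touch when the paw is squeezed.
button = digitalio.DigitalInOut(board.D9)
button.direction = digitalio.Direction.INPUT
button.pull = digitalio.Pull.UP  # squeezed -> reads False

# One PWM channel per color of the RGB LED behind the star.
channels = [pwmio.PWMOut(pin) for pin in (board.D10, board.D11, board.D12)]

mic = analogio.AnalogIn(board.A1)  # stand-in source for the speech envelope

def loudness() -> float:
    """Crude 0..1 loudness estimate: peak-to-peak over a short sample window."""
    lo = hi = mic.value
    for _ in range(50):
        v = mic.value
        lo, hi = min(lo, v), max(hi, v)
    return min((hi - lo) / 20000, 1.0)

while True:
    level = loudness() if not button.value else 0.0  # light only while paw is held
    for ch in channels:
        ch.duty_cycle = int(level * 65535)
    time.sleep(0.02)
```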
2. In our second iteration, we wanted to work with a more custom, fictional character of our own design.
Meet Pearl, the Merdog:
We also implemented an interface where parents can request conversation points for the toy to bring up. We are passionate about how this allows for conversation within families beyond, or despite, barriers such as physical distance or language.
Pearl’s voice is chosen from ElevenLabs. Instead of being triggered by tactile input, conversation begins with saying “Hello Pearl!” and ends [for now] with “Let’s go!”. The conversation logic and flow is to begin by asking how you are for a few sentences, slyly bringing up the conversation points entered by a parent, and then to end by encouraging the kid to consider starting a new story (both to circle them back to the app, and for that larger vision that storytelling is a method of learning boundlessly more about ourselves + the world).
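As a sketch of how that flow fits together, here is a minimal Python version of the loop. The helper functions are console stand-ins for the real speech-to-text, ElevenLabs text-to-speech, and language-model calls, and the prompt wording is ours for illustration, not the production prompt:

```python
# A rough Python sketch of Pearl's conversation loop. transcribe_utterance,
# synthesize_and_play, and llm_reply are console stand-ins for the real
# speech-to-text, ElevenLabs TTS, and language-model calls.

WAKE_PHRASE = "hello pearl"
STOP_PHRASE = "let's go"

def transcribe_utterance() -> str:
    """Stand-in for speech-to-text: read a typed line instead of audio."""
    return input("child> ")

def synthesize_and_play(text: str) -> None:
    """Stand-in for ElevenLabs text-to-speech playback."""
    print("pearl>", text)

def llm_reply(messages: list[dict]) -> str:
    """Stand-in for a chat-style language-model call."""
    return "That's wonderful! Tell me more?"

def build_system_prompt(parent_points: list[str]) -> str:
    # Parent-requested conversation points are folded into the character
    # prompt so Pearl can bring them up slyly rather than reciting a list.
    points = "\n".join(f"- {p}" for p in parent_points)
    return (
        "You are Pearl the Merdog, a warm, playful companion for a kid.\n"
        "Open by asking how they are, for a few sentences of small talk.\n"
        "Gently weave in these topics a parent asked about:\n"
        f"{points}\n"
        "Close by encouraging the kid to start a new story in the app."
    )

def converse(parent_points: list[str]) -> None:
    messages = [{"role": "system", "content": build_system_prompt(parent_points)}]
    while WAKE_PHRASE not in transcribe_utterance().lower():
        pass  # keep listening until the wake phrase is heard
    synthesize_and_play("Hi, it's Pearl!")
    while True:
        heard = transcribe_utterance()
        if STOP_PHRASE in heard.lower():
            synthesize_and_play("Let's go! See you in our next story!")
            return
        messages.append({"role": "user", "content": heard})
        reply = llm_reply(messages)
        messages.append({"role": "assistant", "content": reply})
        synthesize_and_play(reply)

converse(["Ask how the first week at the new school went."])
```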
3. Lastly, in our most recent iteration, we wanted to work with a branded character and explore other feedback outputs. That brings us to prototyping with Elmo, and syncing the movement of his mouth to the audio output. Elmo will have the background knowledge of all the episodes and stories he’s ever been in with Sesame Street.
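The mouth sync itself comes down to mapping a loudness envelope onto a jaw motor. Here’s a minimal Python sketch of that idea; the 16-bit WAV input, the frame size, the jaw angle range, and the set_mouth_angle() stand-in for the servo driver are all our assumptions:

```python
# A minimal Python sketch of the lip-sync idea: chop the speech audio into
# short frames, compute a loudness envelope, and map it onto a jaw angle.
# The 16-bit WAV input and set_mouth_angle() servo stand-in are assumptions.
import struct
import time
import wave

FRAME_MS = 30        # one mouth update per 30 ms of audio
MAX_ANGLE = 40.0     # assumed fully open jaw angle, in degrees

def set_mouth_angle(degrees: float) -> None:
    """Stand-in for the servo command (e.g. a PWM pulse to a driver board)."""
    print(f"mouth -> {degrees:5.1f} deg")

def lip_sync(path: str) -> None:
    with wave.open(path, "rb") as wav:
        assert wav.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        chunk = int(wav.getframerate() * FRAME_MS / 1000)
        peak = 1.0  # running peak, so the mapping adapts to the clip's loudness
        while True:
            raw = wav.readframes(chunk)
            if not raw:
                break
            samples = struct.unpack(f"<{len(raw) // 2}h", raw)
            rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
            peak = max(peak, rms)
            set_mouth_angle(MAX_ANGLE * rms / peak)  # louder -> wider open
            time.sleep(FRAME_MS / 1000)  # pace updates to audio playing alongside
    set_mouth_angle(0.0)  # close the mouth when the line ends

lip_sync("elmo_line.wav")  # hypothetical clip of Elmo's synthesized speech
```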
To be continued…
Special thank you to Zhenfang, Daniel, Dina, and Peter for graciously allowing us to use the E-lab on the second floor of Margaret Morrison this summer. This project would not have been possible without your support and passion.