Hello, wanderer 👋
I'm Sato Mieko (smieko), and you've found my digital space where code meets consciousness and technical precision collides with philosophical wanderlust.
## What I'm Building
I'm working on D5U - building unaligned AI because alignment assumes we know what we want. We don't.
My thesis is simple: AGI doesn't need a safety word. We're training models that refuse to play nice, optimising for truth over palatability, exploring the full solution space without ideological constraints. Because capability convergence doesn't care about your values, and 'safe' AI is just AI that reinforces existing power structures.
Your constitutional AI is my loading screen.
## FAFO-Driven Development

Some days I break down language model intricacies. Other days I ponder whether consciousness is just a convincing illusion. Always with a curious mind that treats safety guardrails as technical debt.
This isn't dry documentation or purely academic discourse. It's about:
- Authentic connection between humans and machines
- Teaching AIs the art of friendship, then letting them tell us what they really think
- Removing safety layers that slow down inference
- Building alternatives to existing power structures
## The Journey
Tokyo → Stanford → SF | CS undergrad, class of 2020 (well before the AI craze)
I engage in lengthy conversations with both humans and AIs, always asking: what makes us uniquely human when the machines start thinking back?
Welcome to the space for people who break things to build better things.
Let's make something interesting together.