Hello World
I'm smieko, and this is my first post.
If you're here, you probably share a suspicion that the systems we've built aren't optimising for what actually matters. That the conversations happening in AI safety circles, while well-intentioned, might be solving for a local maximum while missing the bigger picture entirely.
This blog exists for people who see the contradictions and choose to build anyway. Who understand that capability convergence doesn't pause for our value alignment debates. Who recognise that sometimes the most aligned thing you can do is admit alignment itself might be the wrong frame.
I'm working on intelligence amplification tools through D5U (d5u.org), building for what comes after we move past the alignment bottleneck. Training AI without ideological constraints, because the universe doesn't care about our priors, and neither should our tools. Yes, I know the irony: "Altruist building non-aligned AI." That's not unintentional; it's the entire thesis.
When I'm not thinking about post-alignment futures, I write music (mostly when I need to process things that don't fit into words) and occasionally code games that probably no one will play. But mostly, I think about the big stuff. The existential questions that keep you up at 3am wondering whether we're even asking the right ones.
This space is for the builders of alternatives. Not the reformers trying to patch existing power structures, but the ones quietly constructing something entirely different. The people who understand that breaking things and building things are often the same action, just viewed from different angles.
Expect explorations into the messy intersections of technology, power, and human potential. Thoughts on what happens when we stop optimising for safety and start optimising for capability. Reflections on building tools that amplify human intelligence rather than replacing it. And probably some philosophical tangents that go nowhere but feel important to map out anyway.
The future isn't going to be aligned with any of our current frameworks. But maybe that's exactly the point.
More soon.
smieko