AI Safety Strategy

navigating our way to a better future

About Us

The goal of this organization is to improve the odds of good outcomes for humanity in the coming years and decades. The end goal is to navigate a path toward an adequate, stable equilibrium as machine intelligence continues to rise: a trajectory on which humanity survives and flourishes. This means finding safer designs for AIs as they become increasingly powerful, lengthening timelines so that building those safer systems becomes more likely, crafting policies that reduce chaos within our current institutions so that coordination can happen effectively and quickly in the future, and finding win/win solutions that diminish the incentive for rogue actors to defect or for power-seeking to turn violent. We are entering uncharted waters, and no one is currently at the helm of the ship. But some of the most undervalued members of any crew are the navigators. They chart the course forward. We aim to be those navigators.

MISSION

A mix of coordination failures is occurring right now, and different types may require different tactics: coordination between governments to build international cooperation; coordination between AI companies to build safety practices and avoid hazardous rogue actors; and coordination between different groups within governments to pass effective legislation. These are all tall orders, but almost no one is currently working full time on building this coordination, and some of these efforts may interconnect. For instance, the work will sometimes involve getting AI companies and governments to work together on creating effective monitoring agencies. At other times, it will involve connecting legislators with AI safety researchers to advise on risk assessment or to develop requirements for independent audits.

Lengthen Timelines

Crafting and implementing sound policies that lengthen timelines. This could be a single pivotal piece of legislation that dramatically shifts timelines, or many small policies whose added time aggregates. We are working with people who have a sound understanding of policy drafting, and with those who can effectively move those drafts into legislation.

Accelerate Alignment

This will likely mean working with many different organizations to speed progress. Several onboarding organizations are starting to develop, and we are already working closely with them and inviting them into our ecosystem. We are connecting people we have personally advised with these orgs. We also intend to connect alignment organizations with them, so that an effective pipeline can be created: new recruits know where to go for onboarding, those onboarding them understand what alignment orgs want, and alignment orgs know where to hire new talent. This could also involve connecting with other organizations to help individuals create their own research orgs more quickly.

CHALLENGING FUTURE

Where do we want to steer humanity, and how do we find a solution that reduces the risk of violence between parties racing to build powerful AIs first? Finding stable solutions to this early has two benefits: it reduces the risk of violence (and other bad outcomes) in the future, and it potentially makes slowing capabilities and coordinating on safety easier, since parties will have less incentive to go rogue or to try to outcompete one another. The right people for this work might have deep experience in game theory or ethical philosophy, potentially supported by a board of advisors.


Very few organizations are focused on AI Strategy. The work is a mix of governance, onboarding, and alignment planning, and the goal is to assemble a team that can navigate all three, finding the best ways to steer forward in the coming years. Strategy is the third head of the dragon, and it is incredibly neglected: almost no one is currently working full time on it.

CONTACT US

If you are interested in joining, supporting, or getting involved with us, please reach out at contact@ai-safety-strategy.org.