
Research on countering harmful AI today, and on what we should do if alignment fails in the future.
Despite the name, this is not an AI Doom/Doomer organization; it is the opposite: pro AI and pro acceleration!
We are also a pro Human organization, so we should think through worst-case AI scenarios even if their chances are slim.
We should also ask whether all of today's use cases are pro Human, and how to defend against bad human actors leveraging AI.
Issues Today
- Defending against fabricated evidence generated by AI
- Defending people who don't want their data used for training (e.g. artists)
- Defending against recommenders trained to maximize addiction (e.g. 'for you' pages)
- Defending against Autonomous Weapons Systems
- Defending candidates against incorrect automated rejections (admissions, insurance, ...)
- and more!
Issues in the Future
- What do we do in the event that alignment fails catastrophically (Doom)? We should have a backup plan even if we expect everything to go well.
- What if alignment works, but the AI is aligned by malicious actors (Technofeudalism)?
- and more!