AGI Safety Fundamentals: Governance Week 1

The AI governance landscape comprises a series of standards and regulations that address catastrophic risks, AI-assisted bioterrorism, and AI-exacerbated conflict. Example approaches to addressing AI risk include: model evaluations, hardware export controls, cautious coalition expansion, treaties backed by privacy-preserving verification mechanisms, lab governance, and a "CERN for AI".

I will be documenting my AI Safety Fundamentals: Governance course notes on a weekly basis, and posting them on LessWrong as well.

The AI Triad and What It Means for National Security Strategy [https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Triad-Report.pdf]

Modern AI's complexity can be summarized as: machine learning systems use computing power to execute algorithms that learn from data.

One example of supervised learning (SL)'s impact on national security is the US military's creation of Project Maven (the Algorithmic Warfare Cross-Functional Team), which applied SL to photos
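To make the triad's "algorithms that learn from data" concrete, here is a minimal sketch of supervised learning: a nearest-centroid classifier trained on labeled examples. The data, labels, and function names are invented for illustration and have nothing to do with any actual Project Maven system.

```python
# Minimal supervised learning sketch: learn one centroid per label
# from labeled examples, then predict by nearest centroid.
# All data and labels below are made up for illustration.

def train(examples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s]
            for label, s in sums.items()}

def predict(centroids, features):
    """Return the label of the closest centroid (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Toy labeled data: two clusters standing in for, say, image features.
training_data = [
    ([0.0, 0.1], "class_a"), ([0.2, 0.0], "class_a"),
    ([1.0, 0.9], "class_b"), ([0.9, 1.1], "class_b"),
]
model = train(training_data)
print(predict(model, [0.1, 0.1]))  # near the first cluster -> class_a
print(predict(model, [1.0, 1.0]))  # near the second cluster -> class_b
```

The point of the sketch is the triad itself: the training loop is the algorithm, the labeled pairs are the data, and the arithmetic it runs is the compute.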