Pascal's Wager


 I haven't written in quite some time, and I thought about pushing it even further. So be it; let us push further into the future. Here is what I wrote on a newly learned topic. 
_________________________________________________________________________

Capabilities: 

Algorithmic and compute advancement 


Social: 

Socioeconomic changes 


Systemic: 

Policies, regulations, and incentives. 

_________________________________________________________________________


Question: What will AI systems look like in 10 years and how will current AI systems develop to get there?


AI Systems Landscape by 2032 — evaluating current and future trajectories 


Before I begin, I would like to address the critical shortcomings of my predictions. First, I suspect I am working from a convenience sample, basing my predictions mainly on existing artificial intelligence precedents and their respective characteristics. Second, I will approximate this landscape with a limited number of variables; there may be variables we have not even considered yet, such as sudden future changes in the political climate around AI development. This report acts as an idea/opinion bank. 


There are a few sub-questions that I want to answer in this report: 

  1. What are the current AI types, usages, and advancements? 

  2. How do I define transformative AI? 

  3. How will transparency/open sourcing affect the development of AI systems? 

  4. What is an example of how I would predict [insert scenario]?

  5. Is an intelligence explosion achievable in the next ten years? 


Current AI types, usages, and advancements


Currently, narrow AI has progressively surpassed human capacity in specific domains, whether by holding far larger memory stores or by retrieving information faster at a given instant. For example, OpenAI currently uses supervised learning, meta-learning, and generative models. However, there has been a report arguing that GPT-3, the recent successor to GPT-2, may be approaching the capability limits of predictive language models (Sobieszek et al. 2022). 


If GPT-2 was released in early 2019 and GPT-3 in the middle of the following year, then within another two years or so, by 2024, it is foreseeable that predictive language models will have reached their threshold cognitive performance. However, this is only one type of narrow AI. 
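As a rough sanity check on that timeline, a back-of-the-envelope sketch is below. The release dates are approximate, and the assumption that the GPT-2 to GPT-3 gap simply repeats for later generations is mine, not something the cited report claims.

```python
from datetime import date

# Back-of-the-envelope cadence extrapolation. Dates are approximate, and the
# assumption that the GPT-2 -> GPT-3 gap repeats for later generations is mine.
gpt2_release = date(2019, 2, 14)   # GPT-2 announced in early 2019
gpt3_release = date(2020, 6, 11)   # GPT-3 API opened in mid 2020

gap = gpt3_release - gpt2_release  # roughly 16 months between generations
two_more_generations = gpt3_release + 2 * gap

print(f"Generation gap: about {gap.days / 30:.0f} months")
print(f"Two more generations at this cadence: around {two_more_generations}")
```

Run as-is, this lands in early 2023, a bit before the 2024 estimate above; it is only a cadence extrapolation, not a capability forecast.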

Using a combination of systems neuroscience and machine learning, DeepMind's reinforcement learning algorithms culminated in a sensational headline in 2016 (Silver et al. 2016). AlphaGo, an algorithm built to play Go, has since been refined into MuZero in late 2020. MuZero is a general-purpose algorithm that masters Go, chess, shogi, and Atari without being told the rules; it learns a model of each environment and plans within it. Planning winning strategies in unknown environments shows how DeepMind and other initiatives are building adaptability on top of narrow AI, broadening the focal point. 
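To make the "never told the rules" point concrete, here is a heavily simplified, hypothetical sketch of planning inside a learned model. The stand-in networks are random, the function names are my own, and a greedy rollout replaces the Monte Carlo tree search that MuZero actually uses.

```python
import numpy as np

# Heavily simplified, hypothetical sketch of the MuZero idea: the agent never
# queries the real game rules while planning; it imagines futures inside a
# learned model. The "networks" below are random stand-ins, not trained models.

rng = np.random.default_rng(0)
OBS_DIM, HIDDEN, NUM_ACTIONS = 16, 8, 4

W_repr = rng.normal(size=(OBS_DIM, HIDDEN))              # representation net
W_dyn = rng.normal(size=(HIDDEN + NUM_ACTIONS, HIDDEN))  # dynamics net
W_val = rng.normal(size=HIDDEN)                          # value head

def represent(observation):
    """Encode a raw observation into a latent state."""
    return np.tanh(observation @ W_repr)

def dynamics(state, action):
    """Predict the next latent state from the current state and an action."""
    one_hot = np.eye(NUM_ACTIONS)[action]
    return np.tanh(np.concatenate([state, one_hot]) @ W_dyn)

def value(state):
    """Estimate how good a latent state is."""
    return float(state @ W_val)

def plan(observation, depth=3):
    """Pick the first action whose imagined rollout in the learned model looks best."""
    root = represent(observation)
    best_action, best_value = 0, -float("inf")
    for first_action in range(NUM_ACTIONS):
        state = dynamics(root, first_action)
        for _ in range(depth - 1):
            # Greedy lookahead stands in for MuZero's Monte Carlo tree search.
            next_values = [value(dynamics(state, a)) for a in range(NUM_ACTIONS)]
            state = dynamics(state, int(np.argmax(next_values)))
        if value(state) > best_value:
            best_action, best_value = first_action, value(state)
    return best_action

print("Chosen action:", plan(rng.normal(size=OBS_DIM)))
```

The point of the sketch is structural: `plan` only ever calls the learned `dynamics` and `value` functions, never a real game simulator, which is what lets one algorithm handle Go, chess, shogi, and Atari.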


From AlphaGo in 2016 to MuZero in 2020, this line of work has improved drastically. From this example, I predict that games with enumerable states and moves will reach their threshold by 2024. 


While we can continuously improve on narrow AI, the pressing question is how and when a generalist AI agent will come about. 


Defining Transformative AI 


Transformative AI is AI that radically changes the way society operates for the common individual, on a daily basis. Homing in on widespread AI use: as the graph below shows, the 2019 Capterra Top Technology Trends Survey found that 25% of US small businesses intended to use AI and ML within the next one to two years, while another 27% planned to evaluate them further. 


Figure 1. AI & ML usage 



We can thereby infer that a sizable share of small businesses, and by extension large industries that need AI and ML support even more, will use AI and ML when given the opportunity. Transformative AI, then, would not need much time to be incorporated once AGI appears. 


[ I didn’t have enough time to find context for AGI, will continue tomorrow ]  


The impact of increasing transparency in AI systems & how it will affect our timeline


The feasibility of increasing transparency in AI systems is high, and will be even higher over the next ten years. DeepMind's AlphaFold is an example of increasing transparency to the general public: an algorithm trained on a public repository of protein structures, it aims to predict protein structure. Similarly, Google Brain's TensorFlow was open-sourced for anyone to use. 


This will shorten our timeline by accelerating the growth of AI systems, since more information is disseminated to the public. However, we must also adopt more appropriate publishing norms to avoid information hazards from open publication. The greatest risk comes from what Bostrom would call data hazards and idea hazards, which multiple rounds of editing and translation into layman's terms could mitigate (Bostrom 2011). 



Is an intelligence explosion achievable in the next ten years? 


Based on my understanding of an intelligence explosion, it will happen as soon as the first machine capable of recursively refining its own intelligence is invented. When that machine will be invented is still debated. Paul Christiano argues that economic output can be used to model technological acceleration, measuring it via a logarithmic transformation of GDP over time. Inferring from his model, technological acceleration will keep increasing. However, this hyperbolic growth is contingent on growth inputs remaining unrestricted. Unless transformative AI can be developed, that leap-frogging growth simply would not materialize. Therefore, I hypothesize that an intelligence explosion is not achievable in the next ten years. 


Figure 2. From Paul Christiano
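
To illustrate the distinction this hinges on, below is a toy numerical sketch, not Christiano's actual model; the growth rate and step size are arbitrary. It contrasts exponential growth, where the growth rate is fixed, with hyperbolic growth, where the rate rises with output and blows up in finite time unless something restricts the inputs.

```python
# Toy contrast between exponential and hyperbolic growth, not Christiano's model.
# Exponential: dY/dt = r * Y (constant growth rate).
# Hyperbolic:  dY/dt = r * Y**2 (the growth rate itself rises with output,
# so output blows up in finite time unless something limits the inputs).
r, dt = 0.1, 0.1
y_exp = y_hyp = 1.0

for step in range(400):
    y_exp += r * y_exp * dt
    y_hyp += r * y_hyp ** 2 * dt
    if y_hyp > 1e6:
        t = (step + 1) * dt
        print(f"Hyperbolic output passes 1e6 at t = {t:.1f}, "
              f"while exponential output is still only {y_exp:.1f}")
        break
```

The hyperbolic path only blows up because nothing in the toy model limits it; restrict the inputs, as I expect will happen without transformative AI, and the blow-up never arrives.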

