Distillation Article
There are a few questions I want to answer in my distillation:
What are some terms that are essential for a successful distillation?
How is Paul Christiano organizing his article?
What is the Humans vs. chimps “fast” take-off argument?
What is “the AGI will be a side effect” argument?
What is the “Understanding” is discontinuous argument?
What are some terms that are essential for a successful distillation?
Before distilling, I would like to define some technical terms that will help readers get the most out of my article.
Artificial intelligence (AI) systems— are developing at a remarkable rate. There are three types of AI: narrow AI (ANI), artificial general intelligence (AGI), and superintelligent AI (ASI). Each of these categories of AI systems has its own political, sociological, and economic implications for human progress.
Benevolent AI— is aligned with human values and would not present existential threats. Misaligned AI, by contrast, is deeply problematic because of the lack of transparency in its cognitive computation: the inputs and outputs of the algorithm are visible endpoints, but the path between them is not. This opacity limits our ability to correct misalignment with ethics, and misalignment can take many routes: a recursive need for external rewards, maximizing happiness through the wrong approach, or the use of superintelligence to bring about an Orwellian dystopian future (Bostrom 2002).
AI alignment— aims to mitigate existential risk by reducing AI bias, avoiding information hazards, and governing the ethics of alignment.
Intelligence explosion— was discussed in Yudkowsky’s article on the condition k >> 1, under which an AI agent can self-improve and trigger a recursively optimized growth curve (Yudkowsky 2015). This would result in an intelligence explosion once artificial general intelligence is achieved (see the worked sketch after these definitions).
General Artificial Intelligence— is an AI system able to perform the full breadth of cognitive tasks that a human can; it has yet to be achieved.
Transformative AI— is AI that would radically change how society operates and how ordinary individuals go about their daily lives.
Narrow AI— has progressively become expert at specialized tasks, either through larger memory storage or a higher rate of information retrieval at a given instant.
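To make the k >> 1 condition in the intelligence-explosion definition concrete, here is a minimal worked sketch of the growth dynamic. The notation below (I_n for optimization power after n rounds of self-improvement, k for the return on each unit reinvested) is my own illustration of the idea, not Yudkowsky’s exact formalism.

```latex
% Illustrative notation only, not Yudkowsky's exact formalism.
% I_n = the system's optimization power after n rounds of self-improvement.
% k   = the return on each unit of optimization power reinvested in self-improvement.
\[
  I_{n+1} = k \, I_n
  \qquad\Longrightarrow\qquad
  I_n = k^{\,n} \, I_0 .
\]
% If k <= 1, each round returns no more than was invested and improvement fizzles out.
% If k >> 1, the sequence grows explosively once self-improvement begins:
% this runaway regime is the "intelligence explosion."
```

On this framing, the take-off-speed debate can be read as a disagreement about whether the effective k near AGI would be far above one or only slightly above it.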
How is Paul Christiano organizing his article?
In his post on The sideways view from February 24, 2018, Paul Christiano argues that the development of AGI will most likely proceed at a continuously accelerating rate, a “slow” take-off. In this scenario, early AGI will advance incrementally, and its role in society will expand correspondingly. There are, however, many arguments supporting a “fast” take-off.
In this delivery format, Christiano first puts forward *his* AGI take-off speed speculations and then addresses each counter-argument. On a first read, I thought he delivered his own speculations wonderfully. However, the combination of technical terms, heavy use of logic, and free-flowing examples made the counter-argument section (the “fast” take-off section) difficult to comprehend. I will be distilling three of the “fast” take-off arguments for the purpose of my article.
Arguments for “fast take-off”:
Humans vs. chimps argument
AGI will be a side effect
“Understanding” is discontinuous
What is the Humans vs. Chimps Argument?
Christiano points out one argument that draws a parallel between the AI development timeline and the evolutionary timeline. In the “Humans vs. Chimps” section, Christiano notes that chimpanzees were not optimized to be useful, and that evolution was not an agent: it did not run trial-and-error until humans became a dominant species.
Chimps’ brains are only about a third the size of humans’, so if we increased a chimp’s computing power threefold, then theoretically we would have a chimp as intelligent as a human. Here Christiano suggests that if AI development were merely a matter of computing power, then a “fast” take-off would be very likely.
However, this is not the case, because chimps are optimized for their own tasks, just as humans are optimized for theirs. Evolution, if anthropomorphized, does not judge a species’ survival or extinction by its ability to swing from branch to branch, nor by its ability to solve a differential equation. In fact, what evolution “wants” to optimize in each species changes over time. Furthermore, evolutionary optimization of abilities looks starkly different at the scale of a large population than at the scale of a single individual.
Here Christiano suggests that humans took off “fast” on an evolutionary timescale because humans were optimizing for the generation of *intelligence*, whereas chimps were not. He compares AGI development to human intellectual development, which accelerated in step with technological development. Should AGI be achieved, it will be optimized for usefulness, hence propelling a “fast” take-off.
What is “the AGI will be a side effect” argument?
Increased investment, an increased global empirical understanding of how fast AI is developing, and increased manpower devoted to AI research will shape the AI landscape of the near future. Christiano addresses this point in the second argument for a “fast” take-off.
He comments on the heightened advances in narrow AI, or weak AI. Indeed, narrow AI has marked fascinating milestones. For example, OpenAI currently works on supervised learning, meta-learning, and generative models. A recently published report argues that GPT-3, the successor to GPT-2, could be approaching the capability limits of predictive language models (Sobieszek et al. 2022).
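As a concrete illustration of what a predictive language model does, here is a minimal sketch that generates text with the publicly released GPT-2 weights. It assumes the Hugging Face `transformers` library (with a backend such as PyTorch) is installed; the prompt string is my own invented example.

```python
# Minimal sketch: a predictive language model repeatedly predicts the next token.
# Assumes the Hugging Face `transformers` library and a backend such as PyTorch are installed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # publicly released GPT-2 weights

# Hypothetical prompt, purely for illustration.
result = generator("Narrow AI systems have become experts at",
                   max_length=30, num_return_sequences=1)
print(result[0]["generated_text"])
```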
While we can continuously improve narrow AI, the pressing question is how and when a generalist AI agent will come about.
Christiano says that this argument supposes that these narrow AIs will eventually converge, and that the increased investment, understanding, and manpower described above will lead them to “jump” to AGI as a side effect.
Here Christiano suggests that if researchers do not optimize and align general intelligence in time, having underestimated its impact because it was only a “side effect,” then a fast take-off is very tangible and plausible.
What is the “Understanding” is discontinuous argument?
The LessWrong article “Discontinuous progress in history: an update” highlights historical cases of discontinuously fast technological progress. The author examines these precedents in order to gauge the likelihood of abrupt progress in AI development. Figure 1 illustrates what counts as a discontinuity.
Figure 1. Discontinuity, divergent from progress trend (Rick Korzekwa)
Christiano explains that this argument takes a fast take-off speed to mean that AGI will arrive as a discontinuous progress event, a huge jump above the trend in Korzekwa’s graph. Understanding, the argument says, is all-or-nothing. Knowledge is inherently built upon other knowledge, so an AGI that understands only 20% of the world’s knowledge will find everything confusing. The true AGI will therefore be an AI system that acquires a large fraction of the world’s knowledge, and this, in turn, will give us a very fast take-off speed.
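To make the notion of a discontinuity more concrete: the AI Impacts investigation behind Korzekwa’s article quantifies a jump by asking how many years of progress, at the previous trend’s rate, the new development represents. The sketch below is my own illustration of that idea with made-up numbers, not Korzekwa’s actual methodology or data.

```python
# Minimal sketch (illustrative only): quantify a jump as "years ahead of the previous trend".

def years_ahead_of_trend(years, values, new_year, new_value):
    """Fit a linear trend to past data, then return how many years of progress
    at that rate the new data point jumps ahead of the trend's prediction."""
    n = len(years)
    mean_y = sum(years) / n
    mean_v = sum(values) / n
    slope = (sum((y - mean_y) * (v - mean_v) for y, v in zip(years, values))
             / sum((y - mean_y) ** 2 for y in years))      # progress per year on the old trend
    intercept = mean_v - slope * mean_y
    expected = intercept + slope * new_year                # what the old trend predicts
    return (new_value - expected) / slope                  # excess progress, in years

# Made-up data: steady progress of 1 unit per year, then a sudden leap.
history_years = [2000, 2001, 2002, 2003, 2004]
history_values = [10, 11, 12, 13, 14]
print(years_ahead_of_trend(history_years, history_values, 2005, 40))  # ~25 years ahead of trend
```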