Technological Temptations
Technological temptations are instances of an actor (a foreign power, corporation, international policymaker, etc.) facing a decision about whether to pursue a particular technology, with substantial incentives to continue research and investment in order to capture substantial value. We are particularly interested in cases in which the direct benefits substantially outweigh the direct costs but the overall risks (or perceived risks) outweigh the overall benefits, and in cases in which a technology was very tempting but an actor chose not to pursue it, whether or not there were external costs.
Chemical and biological weapons, human reproductive experimentation, and nuclear power are all examples of technologies that we have tested yet largely set aside in recent decades. In this case study, we hope to unearth the underlying causes of these temptations and the extraneous reasons (if present) for not adopting AI technology. We will take deeper dives into the AI winters, temptations surrounding neural networks, and related episodes.
This is a case study on a technological temptation that was not pursued: the first AI winter, a period in which funding for tempting AI technologies was suspended. Here we define a technological temptation by the following conditions: the opportunity is present, the technology has perceived value, its direct costs are proportionate, and its removed (external) costs are substantial.
We would also like to answer these questions: have any of the precedent AI winters been motivated by AI risk? Why or why not? Will overconfidence in deep learning bring forth “the third” AI winter?
Background Information
The Gartner Hype Cycle & How AI Winters Follow This Trend
The Gartner Hype Cycle describes the common trajectory that an emergent technology undergoes with respect to time.
The innovation trigger initiates the cycle, usually with a media publication or a significant event that garners a substantial amount of publicity and interest. Neither tangible products nor predictable commercial use options are introduced at this stage.
The peak of inflated expectations follows the innovation trigger. Coverage is usually flooded with a series of preliminary success stories from research and development, though major failures are also covered. Some companies or individual entities start to put in initial investments; most do not.
The trough of disillusionment sets in when experiments and trials fail to deliver, although some funding and investment continues.
The slope of enlightenment shows more defined ways that the technology can benefit industries as it gains more visibility. Usually, second- and third-generation products, or as I understood it, updated versions of the available product, appear from technology providers.
Finally, the plateau of productivity is when the heightened disbelief completely dissipates and mainstream adoption of the product takes off. Past investments begin to pay off, or at least break even, because of the technology's wide applicability.
The Dartmouth Summer Research Project
We have reason to believe that the AI winters followed the Gartner hype cycle. The innovation trigger and subsequent inflation of expectations spanned 1956 to 1974, driven by a population that believed in a future of “thinking machines”. One of the most notable events was the Dartmouth Summer Research Project, or the Dartmouth workshop, which took place in the summer of 1956. Among the most renowned attendees were Claude Shannon, Oliver Selfridge, and Marvin Minsky.
Of the topics discussed, automatic computers that can be programmed to use a language, self-improving intelligent systems, and machines' ability to generate controlled random thinking (creativity) were most likely the most aligned with modern concepts of artificial intelligence.
The topics that the researchers discussed that were highly ambitious for the time were: ______________________________________.
The report detailed deep confidence in writing a program for an automatic calculator and a chess machine. We can suspect that the first inklings of anthropomorphizing technological capabilities started in 1956: these AI technologies were predicted to be so advanced that their capabilities were compared to human cognition.
The Lighthill Report
The first AI winter happened between 1974 and 1980, spurred in part by James Lighthill's report.
James Lighthill authored the Lighthill report for the British Science Research Council in reaction to constant dispute over whether AI research funding should be continued. The report, titled "Artificial Intelligence: A General Survey", was published in a paper symposium in July 1973.
The content of the report was an evaluation of robotics and language processing, which had been described as "a broad field with mathematical, engineering, and biological aspects". Its commissioning agent, the Science Research Council, determined that core aspects of AI would not be successfully integrated into large, realistic problem-solving.
The report categorizes AI into Advanced Automation (Category A), Computer-based CNS research (Category C), and Bridge activities (Category B). Although the Council describes machine recognition of printed or typewritten characters as "an area where good progress has been made", roadblocks remain elsewhere. The Council judged machine recognition of speech and machine translation of languages to carry the most economic incentive, yet "the progress in both has so far been very disappointing".
At the time of the report, the industrial motivation was shifting toward automation of product assembly, beyond conventional control systems engineering. The Council questions whether "advanced" AI could perform militarily or contribute toward space exploration. These are all components of Advanced Automation, which also includes information storage and retrieval, learning, and decision making.
Category C, Computer-based CNS research, aims to develop computer models that mimic how the human intellect acquires knowledge and skills. It dives into aspects of central nervous system (CNS) activity, such as how best to interpret visual cortex or psychological data. Category B is about bridging: building robots to "feed information into works of category A and C".
The Council lists past disappointments: the pattern-recognition field is non-competitive, too much expenditure has gone into machine recognition of speech for such low returns, and mathematical theorem-proving has been filled with “disappointments”. Machine translation, however, was the most notable failure at the time; the report quotes a discouraging 1966 review from the US National Academy of Sciences.
Machine-readable text is text that can be used as input to a computer. According to "Language and Machines: Computers in Translation and Linguistics", tests on physics texts showed that readers of raw machine translation (MT) output, without post-editing, were 10 percent less accurate and 21 percent slower, with overall comprehension 29 percent lower, than readers of human translation. This report was evidence of the gloomy contemporary sentiment.
Automatic theorem-proving, grounded in Gödel's completeness theorem, gained a lot of optimism but was quickly dismissed because the combinatorial explosion cancels out whatever advantage is obtained by large computing power. There are infinitely many possible mathematical theorems reachable by extrapolating from the axioms, so no amount of computing power can churn through all of them. The Council therefore recommends continuing research in the software industry and developing high-level programming languages instead.
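To make the combinatorial explosion concrete, here is a minimal sketch in Python. The toy grammar (two atoms and one binary connective) is our own invention for illustration, not taken from the report; it counts how many distinct formulas a naive exhaustive search would have to consider at each nesting depth.

```python
from itertools import product

# Toy formula language: atoms 'p', 'q' and one binary connective '>'.
# (An illustrative assumption, not the logic discussed in the Lighthill report.)

def formulas(depth):
    """Enumerate all formulas with nesting depth at most `depth`."""
    atoms = ['p', 'q']
    current = list(atoms)
    for _ in range(depth):
        # Each new level combines every pair of existing formulas,
        # so the count grows roughly as the square of the previous count.
        current = atoms + ['(%s>%s)' % pair for pair in product(current, current)]
    return current

for d in range(4):
    print(d, len(formulas(d)))
```

Already at depth 3 the search space exceeds a thousand formulas, and each further level squares the count; adding more axioms or connectives only makes the blow-up steeper, which is the Council's point about raw computing power.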
In the report, the Council considered chess-playing programs to be at the level of an "experienced amateur" and noted that they had been built on a foundation of solely heuristic methods. This is one of many examples listed.
The report concludes with the sentiment that there will be a “fission” of AI research in the next twenty-five years (1973-1998).
In the context of the Lighthill report, the actor is aware of the technology and is able to pursue it in a meaningful capacity, or believes itself able. Lighthill notes that contemporary funding came from military agencies (ARPA in the United States) or mission-oriented public bodies. The US Department of Defense was heavily funding AI R&D in the 1960s.
However, the report argues that it is disadvantageous for “poorer countries” like Britain, in comparison to the United States, to continue such funding. In essence, the British government was investing in AI capabilities but saw only inflated technological expectations. As a result, over the next decade British government-funded AI research and development (R&D) ceased except at two universities. Private-sector actors such as companies and laboratories are excluded from our scope of understanding.
It is also worth noting that some sources believe the funding was simply “spread” to other technologies, such as supercomputing, rather than cut for a lack of predicted results.
Lisp Language
A phenomenon that came after the AI hype was the decline of Lisp usage for the purpose of AI development.
Lisp is a programming language, like C and Java. Prior to the first AI winter, the MIT AI Lab built Lisp machines to further its AI research, and many Lisp machines were commercialized in the early 1980s. Following the Hype Cycle, companies such as Symbolics, Lisp Machines Inc., and Texas Instruments poured billions of dollars into the hype because it promised thinking machines within 10 years. When those promises turned out to be harder to keep than originally thought, the AI wave crashed and the use of Lisp became relatively obsolete as well.
We should note here that Lisp, as a programming language, was not dependent on AI; early AI, however, was dependent on Lisp. Many dialects of Lisp are present even to this day.
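Part of what made Lisp a natural fit for early symbolic AI was that programs and data share one representation: nested lists of symbols, so a program can build and transform other programs. The sketch below illustrates that idea in Python (our own illustration, not historical Lisp code), evaluating a Lisp-style expression written as nested lists.

```python
# A Lisp expression like (+ 1 (* 2 3)) can be represented as nested lists:
# ['+', 1, ['*', 2, 3]]. Evaluating it is a simple recursive walk.
# (This is an illustrative sketch, not an implementation of any real dialect.)

def evaluate(expr):
    """Evaluate a Lisp-style arithmetic expression given as nested lists."""
    if isinstance(expr, (int, float)):
        return expr           # a number evaluates to itself
    op, *args = expr          # first element is the operator, rest are operands
    values = [evaluate(a) for a in args]
    if op == '+':
        return sum(values)
    if op == '*':
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError('unknown operator: %r' % op)

print(evaluate(['+', 1, ['*', 2, 3]]))  # the Lisp form (+ 1 (* 2 3))
```

Because the expression is itself an ordinary list, an AI program could generate, inspect, or rewrite such expressions as data, which is the property that made Lisp attractive for the symbolic systems of that era.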
The Fifth Generation Computer Systems (FGCS)
The Fifth Generation Computer Systems (FGCS) project was an effort by the Japanese Ministry of International Trade and Industry to create “supercomputers” that excelled at parallel computing and logic programming. The sentiment that increasing numbers of CPUs would bring about “the fifth generation” of computer systems was widely echoed in the 1980s. However, the project was considered a commercial failure: CPUs reached their performance capacity in the 1980s, and over $400 million of investment yielded little return.
In the United States, most AI R&D in the 1980s was associated with universities (MIT, Harvard) or private companies like Bell Laboratories and IBM. This paradigm shift away from government-funded research symbolized public withdrawal.
The general analysis of these events is that as public funding waned, research shifted to universities and the private sector, which allowed AI progress to persist. For example, backpropagation was developed and popularized in 1986 by University of California researchers.
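To give a sense of what the 1986 backpropagation technique does, here is a minimal from-scratch sketch: a tiny two-hidden-unit sigmoid network trained on XOR by propagating the output error backwards through the chain rule. The architecture, seed, and learning rate are illustrative assumptions of ours, not details from the original work.

```python
import math
import random

# Minimal backpropagation sketch: a 2-2-1 sigmoid network on XOR.
# (Illustrative assumptions throughout; not the original 1986 code.)

random.seed(0)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b2 = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5  # learning rate (illustrative choice)

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = loss()
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: error signal at the output, then propagated to
        # the hidden layer via the chain rule before any weight changes.
        dy = (y - t) * y * (1 - y)
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh[j] * x[i]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy

print('loss before: %.3f, after: %.3f' % (initial, loss()))
```

The key idea is the backward pass: the output error `dy` is pushed back through the hidden layer to obtain `dh`, giving a gradient for every weight without ever enumerating weight combinations. This is the mechanism that later made deep networks trainable.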
Conclusion & Summary of Background Information
There is significant evidence that the first AI winter followed the Gartner Hype Cycle and was marked by key events such as the Dartmouth Summer Research Project, the Lighthill report, the Fifth Generation Computer Systems project, and the disuse of the Lisp programming language for AI. Based on these landmark instances, we can move on to answering some relevant sets of questions.