Artificial Intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it. Textbooks define the field as "the study and design of intelligent agents,"[1] where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[2] John McCarthy, who coined the term in 1956,[3] defines it as "the science and engineering of making intelligent machines."[4]
The field was founded on the claim that a central property of human beings, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine.[5] This raises philosophical issues about the nature of the mind and limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity.[6] Artificial intelligence has been the subject of breathtaking optimism,[7] has suffered stunning setbacks[8] and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.[9]
AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other.[10] Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[11] General intelligence (or "strong AI") is still a long-term goal of some research.[12]
Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea.[13] Human likenesses believed to have intelligence were built in every major civilization: animated statues were worshipped in Egypt and Greece[14] and humanoid automatons were built by Yan Shi,[15] Hero of Alexandria,[16] Al-Jazari[17] and Wolfgang von Kempelen.[18] It was also widely believed that artificial beings had been created by Jābir ibn Hayyān,[19] Judah Loew[20] and Paracelsus.[21] By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[22] Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods".[6] Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.
The problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.[11]
Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the step-by-step reasoning that human beings use when they solve puzzles, play board games or make logical deductions.[39] By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[40]
For difficult problems, most of these algorithms can require enormous computational resources: most experience a "combinatorial explosion", in which the amount of memory or computer time required becomes astronomical once the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.[41]
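To make this concrete, here is a minimal, self-contained sketch (in Python, not taken from any source cited above) of the kind of step-by-step state-space search these early programs performed: breadth-first search solving the classic two-jug measuring puzzle. The state encoding and move set are chosen purely for illustration.

```python
from collections import deque

def successors(state, capacities=(3, 5)):
    """All states reachable in one move: fill, empty, or pour between jugs."""
    a, b = state
    ca, cb = capacities
    pour_ab = min(a, cb - b)   # how much jug A can pour into jug B
    pour_ba = min(b, ca - a)   # how much jug B can pour into jug A
    moves = {
        (ca, b), (a, cb),                # fill either jug
        (0, b), (a, 0),                  # empty either jug
        (a - pour_ab, b + pour_ab),      # pour A into B
        (a + pour_ba, b - pour_ba),      # pour B into A
    }
    moves.discard(state)
    return moves

def bfs(start, goal_amount):
    """Step-by-step breadth-first search, counting how many states it expands."""
    frontier = deque([(start, [start])])
    seen = {start}
    expanded = 0
    while frontier:
        state, path = frontier.popleft()
        expanded += 1
        if goal_amount in state:
            return path, expanded
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None, expanded

path, expanded = bfs((0, 0), 4)
print(f"solved in {len(path) - 1} moves after expanding {expanded} states")
```

The visited set keeps this toy search small, but in richer domains the number of distinct states grows exponentially with the number of moves considered, which is exactly the combinatorial explosion described above.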
Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model.[42] AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.
General intelligence
Main articles: Strong AI and AI-complete
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them.[12] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[74]
Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.[75]
Approaches
There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[76] A few of the longest-standing questions that remain unanswered are these: should artificial intelligence simulate natural intelligence, by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[77] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[78] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?[79]
Cybernetics and brain simulation
Main articles: Cybernetics and Computational neuroscience
There is no consensus on how closely the brain should be simulated. In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[24] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.
How can one determine if an agent is intelligent? In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge, and at present all agents fail.
Artificial intelligence can also be evaluated on specific problems, such as small problems in chemistry, handwriting recognition and game-playing. Such tests have been termed subject-matter expert Turing tests. Smaller problems provide more achievable goals, and there are an ever-increasing number of positive results.
The broad classes of outcome for an AI test are:
Optimal: it is not possible to perform better
Strong super-human: performs better than all humans
Super-human: performs better than most humans
Par-human: performs similarly to most humans
Sub-human: performs worse than most humans
For example, performance at draughts is optimal,[143] performance at chess is super-human and nearing strong super-human,[144] and performance at many everyday tasks performed by humans is sub-human.
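One way to make these classes operational, sketched below with a hypothetical helper function and illustrative thresholds (nothing here is a standard benchmark API), is to compare an agent's score against a sample of human scores:

```python
def performance_class(agent_score, human_scores, optimum=None):
    """Assign one of the outcome classes listed above.
    The thresholds are illustrative assumptions, not a published standard."""
    if optimum is not None and agent_score >= optimum:
        return "optimal"             # it is not possible to perform better
    frac_beaten = sum(agent_score > h for h in human_scores) / len(human_scores)
    if frac_beaten == 1.0:
        return "strong super-human"  # better than all sampled humans
    if frac_beaten > 0.6:
        return "super-human"         # better than most humans
    if frac_beaten >= 0.4:
        return "par-human"           # on par with most humans
    return "sub-human"               # worse than most humans

# A chess engine rated above every human in the sample:
print(performance_class(2850, [1500, 1800, 2200, 2700]))  # strong super-human
```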
A quite different approach measures machine intelligence through tests developed from mathematical definitions of intelligence. Examples of this kind of test began in the late 1990s, devising intelligence tests using notions from Kolmogorov complexity and data compression.[145][146] Similar definitions of machine intelligence have been put forward by Marcus Hutter in his book Universal Artificial Intelligence (Springer, 2005) and further developed by Legg and Hutter.[147] Mathematical definitions have the advantage that they can be applied to nonhuman intelligences and in the absence of human testers.
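In one common formulation, reconstructed here from the Legg and Hutter literature rather than quoted from this article, their universal intelligence measure takes the form

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $\pi$ is the agent under test, $E$ is a class of computable reward-generating environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected cumulative reward the agent earns in $\mu$. Simpler environments receive exponentially greater weight, and nothing in the definition refers to human testers or human-calibrated tasks.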
AI is a common topic in both science fiction and projections about the future of technology and society. The existence of an artificial intelligence that rivals human intelligence raises difficult ethical issues, and the potential power of the technology inspires both hopes and fears.
Mary Shelley's Frankenstein[160] considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human being? The idea also appears in modern science fiction: the film A.I. Artificial Intelligence considers a machine in the form of a small boy that has been given the ability to feel human emotions, including, tragically, the capacity to suffer. This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future,[161] although many critics believe that the discussion is premature.[162]
Another issue explored by both science fiction writers and futurists is the impact of artificial intelligence on society. In fiction, AI has appeared as a servant (R2-D2 in Star Wars), a law enforcer (K.I.T.T. in Knight Rider), a comrade (Lt. Commander Data in Star Trek), a conqueror (The Matrix), a dictator (With Folded Hands), an exterminator (Terminator, Battlestar Galactica), an extension to human abilities (Ghost in the Shell) and the saviour of the human race (R. Daneel Olivaw in the Foundation series). Academic sources have considered such consequences as: a decreased demand for human labor,[163] the enhancement of human ability or experience,[164] and a need for redefinition of human identity and basic values.[165]
Several futurists argue that artificial intelligence will transcend the limits of progress and fundamentally transform humanity. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology with uncanny accuracy) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "technological singularity".[164] Edward Fredkin argues that "artificial intelligence is the next stage in evolution,"[166] an idea first proposed in Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by George Dyson in his book of the same name in 1998. Several futurists and science fiction writers have predicted that human beings and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, is now associated with robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil.[164] Transhumanism has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science fiction series Dune. Pamela McCorduck writes that these scenarios are expressions of the ancient human desire to, as she calls it, "forge the gods."[6]
Intelligent processing tools typically deal with problems that are uncertain and unstructured and that have no fixed algorithmic solution. Processing is a matter of controlled inference, and the final result is often uncertain as well: it may be correct, or it may not be.
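As a concrete illustration of inference whose conclusions remain explicitly uncertain, the following minimal sketch does forward-chaining with certainty factors in the spirit of classic expert systems; the rules and the numeric factors are invented for this example.

```python
# Rules as (premises, conclusion, rule confidence); all values are illustrative.
RULES = [
    (("has_fever", "has_cough"), "has_flu", 0.8),
    (("has_flu",), "should_rest", 0.9),
]

def infer(facts):
    """Forward-chain until no rule adds anything; each conclusion inherits the
    weakest confidence along its chain, so results stay explicitly uncertain."""
    facts = dict(facts)  # fact -> confidence in [0, 1]
    changed = True
    while changed:
        changed = False
        for premises, conclusion, rule_conf in RULES:
            if all(p in facts for p in premises):
                conf = min(facts[p] for p in premises) * rule_conf
                if conf > facts.get(conclusion, 0.0):
                    facts[conclusion] = conf
                    changed = True
    return facts

print(infer({"has_fever": 0.9, "has_cough": 1.0}))
# {'has_fever': 0.9, 'has_cough': 1.0, 'has_flu': 0.72, 'should_rest': 0.648}
```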
Natural language understanding mainly studies the techniques that let computers understand and generate natural language. The understanding process can be divided into three levels: lexical analysis, syntactic analysis and semantic analysis. Because natural language is so rich and varied, natural language understanding is quite difficult, and everyday use quickly reveals the shortcomings of current systems.
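A deliberately tiny sketch of the three levels, with an invented lexicon and a single hard-coded grammar pattern, might look like this:

```python
import re

# Toy lexicon standing in for real lexical resources (hypothetical entries).
LEXICON = {
    "the": "DET", "a": "DET",
    "robot": "NOUN", "ball": "NOUN", "dog": "NOUN",
    "sees": "VERB", "kicks": "VERB",
}

def lexical_analysis(sentence):
    """Level 1: split the raw string into part-of-speech-tagged tokens."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return [(tok, LEXICON.get(tok, "UNK")) for tok in tokens]

def syntactic_analysis(tagged):
    """Level 2: match one hard-coded pattern, DET NOUN VERB DET NOUN."""
    tags = [t for _, t in tagged]
    if tags != ["DET", "NOUN", "VERB", "DET", "NOUN"]:
        raise ValueError("sentence outside this toy grammar")
    words = [w for w, _ in tagged]
    return {"subject": words[1], "verb": words[2], "object": words[4]}

def semantic_analysis(parse):
    """Level 3: map the parse to a logical predicate."""
    return f'{parse["verb"]}({parse["subject"]}, {parse["object"]})'

print(semantic_analysis(syntactic_analysis(lexical_analysis("The robot kicks a ball"))))
# -> kicks(robot, ball)
```

Real systems replace the toy lexicon with broad-coverage dictionaries and the single pattern with full grammars and statistical models, which is where the difficulty described above arises.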
Different media also publish at very different speeds. Radio, television and the Internet transmit over radio waves and digital circuits and publish quickly; newspapers require typesetting and printing, a step slower; magazines, books and films are slower still. Fast tools hold a large advantage in releasing news, while slower tools are used more for material that calls for thought and research, such as findings in the social and natural sciences, which usually appear as magazines and books.
In the information society, communication over networks has attracted more and more attention, because the network gives people a broad space and shortens the distance between them. In a given period of time, people from different places, of different ages, educational backgrounds and social classes can gather to communicate and discuss, which broadens their horizons, gives them more comprehensive information and enriches their experience. As information technology develops and society progresses, more and more people can be expected to use the network as a medium for communication and study. At the same time, the network has problems of its own: some people publish harmful information on the Internet and set all kinds of information traps. Users should distinguish right from wrong, examine information carefully, and discard the false while keeping the true, so that the Internet becomes a good place for study and communication.
Intelligent interface technology studies how to let people communicate with computers conveniently and naturally. Achieving this goal requires computers that can read text, understand speech, talk and even translate between different languages, and the realization of these functions depends in turn on research into knowledge representation. Intelligent interface technology has already produced remarkable results: character recognition, speech recognition, speech synthesis, image recognition, machine translation and natural language understanding have all reached practical application.