Copyright © 2015 Bert N. Langford (Images may be subject to copyright. Please send feedback)
Welcome to Our Generation USA!
Under this Web Page
Artificial Intelligence (AI)
we cover both the (many) positive and (some) negative impacts of the many emerging technologies that AI enables
Artificial Intelligence (AI):
TOP: AI Systems: What is Intelligence Composed of?;
BOTTOM: AI & Artificial Cognitive Systems
Articles Covered below:
- 14 Ways AI Will Benefit Or Harm Society (Forbes Technology Council March 1, 2018)
- These are the jobs most at risk of automation according to Oxford University: Is yours one of them?
- YouTube Video: What is Artificial Intelligence Exactly?
- YouTube Video: Bill Gates on the impact of AI on the job market
- YouTube Video: The Future of Artificial Intelligence (Stanford University)
14 Ways AI Will Benefit Or Harm Society (Forbes Technology Council March 1, 2018)
"Artificial intelligence (AI) is on the rise both in business and in the world in general. How beneficial is it really to your business in the long run? Sure, it can take over those time-consuming and mundane tasks that are bogging your employees down, but at what cost?
With AI spending expected to reach $46 billion by 2020, according to an IDC report, there’s no sign of the technology slowing down. Adding AI to your business may be the next step as you look for ways to advance your operations and increase your performance.
To understand how AI will impact your business going forward, 14 members of Forbes Technology Council weigh in on the concerns about artificial intelligence and provide reasons why AI is either a detriment or a benefit to society. Here is what they had to say:
1. Enhances Efficiency And Throughput
Concerns about disruptive technologies are common. A recent example is automobiles -- it took years to develop regulation around the industry to make it safe. That said, AI today is a huge benefit to society because it enhances our efficiency and throughput, while creating new opportunities for revenue generation, cost savings and job creation. - Anand Sampat, Datmo
2. Frees Up Humans To Do What They Do Best
Humans are not best served by doing tedious tasks. Machines can do that, so this is where AI can provide a true benefit. This allows us to do the more interpersonal and creative aspects of work. - Chalmers Brown, Due
3. Adds Jobs, Strengthens The Economy
We all see the headlines: Robots and AI will destroy jobs. This is fiction rather than fact. AI encourages a gradual evolution in the job market which, with the right preparation, will be positive. People will still work, but they’ll work better with the help of AI. The unparalleled combination of human and machine will become the new normal in the workforce of the future. - Matthew Lieberman, PwC
4. Leads To Loss Of Control
If machines do get smarter than humans, there could be a loss of control that can be a detriment. Whether that happens or whether certain controls can be put in place remains to be seen. - Muhammed Othman, Calendar
5. Enhances Our Lifestyle
The rise of AI in our society will enhance our lifestyle and create more efficient businesses. Some of the mundane tasks like answering emails and data entry will be done by intelligent assistants. Smart homes will also reduce energy usage and provide better security, marketing will be more targeted and we will get better health care thanks to better diagnoses. - Naresh Soni, Tsunami ARVR
6. Supervises Learning For Telemedicine
AI is a technology that can be used for both good and nefarious purposes, so there is a need to be vigilant. The latest technologies seem typically applied towards the wealthiest among us, but AI has the potential to extend knowledge and understanding to a broader population -- e.g. image-based AI diagnoses of medical conditions could allow for a more comprehensive deployment of telemedicine. - Harald Quintus-Bosz, Cooper Perkins, Inc.
7. Creates Unintended And Unforeseen Consequences
While fears about killer robots grab headlines, unintended and unforeseen consequences of artificial intelligence need attention today, as we're already living with them. For example, it is believed that Facebook's newsfeed algorithm influenced an election outcome that affected geopolitics. How can we better anticipate and address such possible outcomes in future? - Simon Smith, BenchSci
8. Increases Automation
There will be economic consequences to the widespread adoption of machine learning and other AI technologies. AI is capable of performing tasks that would once have required intensive human labor or not have been possible at all. The major benefit for business will be a reduction in operational costs brought about by AI automation -- whether that’s a net positive for society remains to be seen. - Vik Patel, Nexcess
9. Elevates The Condition Of Mankind
The ability for technology to solve more problems, answer more questions and innovate with a number of inputs beyond the capacity of the human brain can certainly be used for good or ill. If history is any guide, the improvement of technology tends to elevate the condition of mankind and allow us to focus on higher order functions and an improved quality of life. - Wade Burgess, Shiftgig
10. Solves Complex Social Problems
Much of the fear with AI is due to the misunderstanding of what it is and how it should be applied. Although AI has promise for solving complex social problems, there are ethical issues and biases we must still explore. We are just beginning to understand how AI can be applied to meaningful problems. As our use of AI matures, we will find it to be a clear benefit in our lives. - Mark Benson, Exosite, LLC
11. Improves Demand Side Management
AI is a benefit to society because machines can become smarter over time and increase efficiencies. Additionally, computers are not susceptible to the same probability of errors as human beings are. From an energy standpoint, AI can be used to analyze and research historical data to determine how to most efficiently distribute energy loads from a grid perspective. - Greg Sarich, CLEAResult
12. Benefits Multiple Industries
Society has and will continue to benefit from AI based on character/facial recognition, digital content analysis and accuracy in identifying patterns, whether they are used for health sciences, academic research or technology applications. AI risks are real if we don't understand the quality of the incoming data and set AI rules which are making granular trade-off decisions at increasing computing speeds. - Mark Butler, Qualys.com
13. Absolves Humans Of All Responsibility
It is one thing to use machine learning to predict and help solve problems; it is quite another to use these systems to purposely control and act in ways that will make people unnecessary.
When machine intelligence exceeds our ability to understand it, or it becomes superior intelligence, we should take care to not blindly follow its recommendation and absolve ourselves of all responsibility. - Chris Kirby, Voices.com
14. Extends And Expands Creativity
AI is the biggest opportunity of our lifetime to extend and expand human creativity and ingenuity. The two main concerns that the fear-mongers raise are around AI leading to job losses in society and AI going rogue and taking control of the human race.
I believe that both these concerns raised by critics are moot or solvable. - Ganesh Padmanabhan, CognitiveScale, Inc
[End of Article #1]
___________________________________________________________________________
These are the jobs most at risk of automation according to Oxford University: Is yours one of them? (The Telegraph September 27, 2017)
In his speech at the 2017 Labour Party conference, Jeremy Corbyn outlined his desire to "urgently... face the challenge of automation", which he called a "threat in the hands of the greedy".
Whether or not Corbyn is planning a potentially controversial 'robot tax' wasn't clear from his speech, but addressing the forward march of automation is a savvy move designed to appeal to voters in low-paying, routine work.
Click here for rest of Article.
___________________________________________________________________________
Artificial Intelligence (AI) by Wikipedia 10/29/2023:
In a Nutshell:
AI Expanded follows:
Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. "AI" may also refer to the machines themselves.
AI technology is widely used throughout industry, government and science. Some high-profile applications are:
Artificial intelligence was founded as an academic discipline in 1956. The field went through multiple cycles of optimism followed by disappointment and loss of funding, but after 2012, when deep learning surpassed all previous AI techniques, there was a vast increase in funding and interest.
The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include:
General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including:
AI also draws upon psychology, linguistics, philosophy, neuroscience and many other fields.
Goals:
The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research:
Reasoning, problem-solving:
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": they become exponentially slower as the problems grow larger. Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments. Accurate and efficient reasoning is an unsolved problem.
Knowledge representation:
Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in:
A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge.
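To make the idea concrete, here is a minimal Python sketch (illustrative only, not from the article) of a knowledge base stored as subject-relation-object triples with a simple query function; the facts and names used are invented:

# A toy knowledge base of subject-relation-object triples (all facts invented).
facts = {
    ("Fido", "is_a", "dog"),
    ("dog", "is_a", "mammal"),
    ("mammal", "has", "fur"),
}

def query(subject=None, relation=None, obj=None):
    # Return every stored triple matching the (possibly partial) pattern.
    return [(s, r, o) for (s, r, o) in facts
            if (subject is None or s == subject)
            and (relation is None or r == relation)
            and (obj is None or o == obj)]

print(query(relation="is_a"))    # the two "is_a" facts (order may vary)
print(query(subject="mammal"))   # [('mammal', 'has', 'fur')]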
Knowledge bases need to represent things such as:
Among the most difficult problems in KR are:
Knowledge acquisition is the difficult problem of obtaining knowledge for AI applications. Modern AI gathers knowledge by "scraping" the internet (including Wikipedia). The knowledge itself was collected by the volunteers and professionals who published the information (who may or may not have agreed to provide their work to AI companies).
This "crowd sourced" technique does not guarantee that the knowledge is correct or reliable. The knowledge of Large Language Models (such as ChatGPT) is highly unreliable -- it generates misinformation and falsehoods (known as "hallucinations"). Providing accurate knowledge for these modern AI applications is an unsolved problem.
Planning and decision making:
An "agent" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen:
The decision making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.
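As a hedged illustration of this calculation (not from the article; the actions, outcome probabilities and utilities are invented), a few lines of Python can compute expected utilities and pick the best action:

# Illustrative expected-utility calculation with hypothetical actions, outcomes and utilities.
actions = {
    "take_umbrella": [(0.3, 70), (0.7, 60)],   # (probability of outcome, utility of outcome)
    "leave_umbrella": [(0.3, 0), (0.7, 100)],  # e.g. assume a 30% chance of rain
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print({a: expected_utility(o) for a, o in actions.items()})  # {'take_umbrella': 63.0, 'leave_umbrella': 70.0}
print(best_action)                                           # 'leave_umbrella'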
In classical planning, the agent knows exactly what the effect of any action will be. In most real-world problems, however, the agent may not be certain about the situation they are in (it is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (it is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.
In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning) or the agent can seek information to improve its preferences. Information value theory can be used to weigh the value of exploratory or experimental actions. The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain what the outcome will be.
A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way, and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state.
The policy could be calculated (e.g. by iteration), be heuristic, or it can be learned.
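The following Python sketch shows value iteration on a tiny, invented MDP; the states, transition model, rewards and discount factor are all assumptions made for the example (rewards are folded into the transitions for brevity):

# Value iteration on a tiny, invented Markov decision process (MDP).
states = ["cool", "warm", "overheated"]          # "overheated" is terminal
actions = ["slow", "fast"]
gamma = 0.9                                      # discount factor

# T[state][action] = list of (probability, next_state, reward)
T = {
    "cool": {"slow": [(1.0, "cool", 1.0)],
             "fast": [(0.5, "cool", 2.0), (0.5, "warm", 2.0)]},
    "warm": {"slow": [(0.5, "cool", 1.0), (0.5, "warm", 1.0)],
             "fast": [(1.0, "overheated", -10.0)]},
}

def q_value(s, a, V):
    # Expected utility of taking action a in state s, given current value estimates V.
    return sum(p * (r + gamma * V[s2]) for p, s2, r in T[s][a])

V = {s: 0.0 for s in states}
for _ in range(100):                             # repeated sweeps until values settle
    V = {s: (0.0 if s == "overheated"
             else max(q_value(s, a, V) for a in actions)) for s in states}

policy = {s: max(actions, key=lambda a: q_value(s, a, V)) for s in T}
print(V)        # converged values, roughly {'cool': 15.5, 'warm': 14.5, 'overheated': 0.0}
print(policy)   # {'cool': 'fast', 'warm': 'slow'}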
Game theory describes rational behavior of multiple interacting agents, and is used in AI programs that make decisions that involve other agents.
Learning:
Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning.
There are several kinds of machine learning:
Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.
Natural language processing:
Natural language processing (NLP) allows programs to read, write and communicate in human languages such as English.
Specific problems include:
Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem).
Modern deep learning techniques for NLP include word embedding (how often one word appears near another), transformers (which find patterns in text), and others.
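As a rough illustration of the "how often one word appears near another" idea (a toy sketch, not a description of any production system; the corpus and window size are invented):

# Toy count-based word vectors: each word is represented by how often other
# words appear within a small window around it.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2
vectors = defaultdict(Counter)

for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1

print(vectors["cat"])   # neighbours of "cat": Counter({'the': 1, 'sat': 1, 'on': 1})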
In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text, and by 2023 these models were able to get human-level scores on the bar exam, SAT, GRE, and many other real-world applications.
Perception:
Machine perception is the ability to use input from sensors (such as the following) to deduce aspects of the world:
Computer vision is the ability to analyze visual input. The field includes:
Robotics uses AI.
Social intelligence:
Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process or simulate human feeling, emotion and mood.
For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; it makes them appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.
However, this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are. Moderate successes related to affective computing include:
General intelligence:
A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.
Tools:
AI research uses a wide variety of tools to accomplish the goals above.
Search and optimization:
AI can solve many problems by intelligently searching through many possible solutions. There are two very different kinds of search used in AI:
State space search searches through a tree of possible states to try to find a goal state. For example, Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.
Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. "Heuristics" or "rules of thumb" can help to prioritize choices that are more likely to reach a goal.
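As an illustrative sketch of heuristic state-space search (not from the article), the following Python code runs A* search on a small invented grid, using Manhattan distance to the goal as the rule-of-thumb heuristic:

# Illustrative A* state-space search on a small invented grid.
import heapq

grid = ["....#",
        ".##.#",
        "....."]            # '#' = wall, '.' = open cell
start, goal = (0, 0), (2, 4)

def heuristic(cell):
    # Manhattan distance to the goal: a "rule of thumb" for prioritizing choices.
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == ".":
            yield (nr, nc)

frontier = [(heuristic(start), 0, start, [start])]   # (priority, cost so far, state, path)
visited = set()
while frontier:
    _, cost, state, path = heapq.heappop(frontier)
    if state == goal:
        print(path)                                   # a shortest path to the goal
        break
    if state in visited:
        continue
    visited.add(state)
    for nxt in neighbors(state):
        heapq.heappush(frontier, (cost + 1 + heuristic(nxt), cost + 1, nxt, path + [nxt]))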
Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and counter-moves, looking for a winning position.
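A minimal, hedged sketch of the minimax idea behind adversarial search, over a small hand-made game tree (the moves and leaf values are invented, and no pruning is shown):

# Minimal minimax over a hand-made game tree.
# Inner nodes are dicts of move -> subtree; leaves are numeric evaluations.
game_tree = {
    "a": {"a1": 3, "a2": 12},
    "b": {"b1": 2, "b2": 4},
    "c": {"c1": 14, "c2": 1},
}

def minimax(node, maximizing):
    if not isinstance(node, dict):            # leaf: return its evaluation
        return node
    values = [minimax(child, not maximizing) for child in node.values()]
    return max(values) if maximizing else min(values)

best_move = max(game_tree, key=lambda m: minimax(game_tree[m], maximizing=False))
print(best_move, minimax(game_tree, maximizing=True))   # 'a' 3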
Local search uses mathematical optimization to find a numeric solution to a problem. It begins with some form of a guess and then refines the guess incrementally until no more refinements can be made.
These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. This process is called stochastic gradient descent.
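A hedged one-variable sketch of this kind of local search: gradient descent on an invented function f(x) = (x - 3)^2, stepping "downhill" from a random starting guess (the mirror image of climbing uphill on a fitness landscape); the step size and iteration count are arbitrary:

# Gradient descent on f(x) = (x - 3)^2, an invented one-dimensional example.
import random

def f(x):
    return (x - 3) ** 2

def gradient(x):                 # derivative of f
    return 2 * (x - 3)

x = random.uniform(-10, 10)      # random starting point on the "landscape"
learning_rate = 0.1
for _ in range(200):             # refine the guess incrementally
    x -= learning_rate * gradient(x)

print(round(x, 4), round(f(x), 6))   # x converges to 3, the minimum of f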
Evolutionary computation uses a form of optimization search. For example, they may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses).
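An illustrative sketch of this idea: a toy genetic algorithm that evolves bit strings toward all ones (the fitness function, population size and mutation rate are arbitrary choices for the example):

# Toy genetic algorithm: evolve bit strings toward the (invented) goal of all 1s.
import random

def fitness(genome):
    return sum(genome)                              # more 1s = fitter

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))               # recombine two parents
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(100):                                # generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                     # only the fittest survive
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(20)
    ]

print(max(fitness(g) for g in population))          # approaches 20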
Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).
Neural networks and statistical classifiers (discussed below) also use a form of local search, where the "landscape" to be searched is formed by learning.
Logic:
Formal logic is used for reasoning and knowledge representation. Formal logic comes in two main forms:
Logical inference (or deduction) is the process of proving a new statement (conclusion) from other statements that are already known to be true (the premises). A logical knowledge base also handles queries and assertions as a special case of inference. An inference rule describes what is a valid step in a proof. The most general inference rule is resolution.
Inference can be reduced to performing a search to find a path that leads from premises to conclusions, where each step is the application of an inference rule. Inference performed this way is intractable except for short proofs in restricted domains. No efficient, powerful and general method has been discovered.
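As a hedged sketch of a simpler special case of such inference (forward chaining over if-then rules rather than full resolution), with invented rules and facts:

# Forward chaining over simple if-then rules: repeatedly apply rules whose
# premises are already known until the query (or nothing new) is derived.
rules = [
    ({"rains", "outside"}, "wet"),       # if it rains and you are outside, you get wet
    ({"wet"}, "cold"),
]
known = {"rains", "outside"}             # the premises

def prove(query):
    derived = set(known)
    while True:
        new = {head for body, head in rules if body <= derived and head not in derived}
        if not new:
            return query in derived
        derived |= new

print(prove("cold"))      # True: rains + outside -> wet -> cold
print(prove("sunny"))     # False: cannot be derived from the premises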
Fuzzy logic assigns a "degree of truth" between 0 and 1 and handles uncertainty and probabilistic situations. Non-monotonic logics are designed to handle default reasoning. Other specialized versions of logic have been developed to describe many complex domains (see knowledge representation above).
Probabilistic methods for uncertain reasoning:
Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.
Bayesian networks are a very general tool that can be used for many problems, including:
Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).
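As a hedged illustration of this style of probabilistic reasoning, here is Bayes' rule applied to an invented two-variable problem (disease -> test result); all probabilities below are made up:

# Inference in a two-node Bayesian network: Disease -> TestResult.
p_disease = 0.01                      # prior P(disease)
p_pos_given_disease = 0.95            # test sensitivity
p_pos_given_healthy = 0.05            # false positive rate

# P(disease | positive test) via Bayes' rule
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))
p_disease_given_positive = p_pos_given_disease * p_disease / p_positive

print(round(p_disease_given_positive, 3))   # about 0.161: still unlikely despite a positive test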
Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using:
These tools include models such as:
Classifiers and statistical learning methods:
The simplest AI applications can be divided into two types: classifiers (e.g. "if shiny then diamond"), on one hand, and controllers (e.g. "if diamond then pick up"), on the other hand.
Classifiers are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.
There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm.
The k-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.
The naive Bayes classifier is reportedly the "most widely used learner" at Google, due in part to its scalability. Neural networks are also used as classifiers.
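To make the classifier idea concrete, here is a toy k-nearest-neighbor classifier in Python (illustrative only; the labeled observations are invented):

# Toy k-nearest-neighbor classifier with an invented labeled data set.
from collections import Counter

data_set = [                      # (observation, class label)
    ((1.0, 1.1), "diamond"),
    ((0.9, 1.3), "diamond"),
    ((5.0, 4.8), "glass"),
    ((5.2, 5.1), "glass"),
]

def classify(observation, k=3):
    # Label a new observation by majority vote of its k closest labeled examples.
    closest = sorted(
        data_set,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], observation)),
    )
    votes = Counter(label for _, label in closest[:k])
    return votes.most_common(1)[0][0]

print(classify((1.2, 0.9)))   # 'diamond' -- the closest matches are the diamond examples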
Artificial neural networks:
Artificial neural networks were inspired by the design of the human brain: a simple "neuron" N accepts input from other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate.
In practice, the input "neurons" are a list of numbers, the "weights" are:
"The resemblance to real neural cells and structures is superficial", according to Russell and Norvig.
Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. Neural networks learn to model complex relationships between inputs and outputs and find patterns in data.
In theory, a neural network can learn any function.
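A minimal, hedged sketch of such a network: two layers of "neurons" trained by backpropagation to learn the XOR function (the layer sizes, learning rate and iteration count are arbitrary choices):

# A tiny two-layer neural network trained by backpropagation to learn XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer of 4 "neurons"
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden layer -> output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # forward pass: each neuron sums weighted "votes" from the previous layer
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # backward pass (backpropagation): push the output error back through the layers
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

print(output.round(2).ravel())    # typically converges to approximately [0, 1, 1, 0]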
In feedforward neural networks the signal passes in only one direction. Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events.
Long short-term memory is the most successful network architecture for recurrent networks.
Perceptrons use only a single layer of neurons, whereas deep learning uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other – this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.
Deep learning:
Deep learning uses several layers of neurons between the network's inputs and outputs.
The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces.
Deep learning has drastically improved the performance of programs in many important subfields of artificial intelligence, including:
The reason that deep learning performs so well in so many applications is not known as of 2023.
The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s) but because of two factors:
Specialized hardware and software:
Main articles:
In the late 2010s, graphics processing units (GPUs), increasingly designed with AI-specific enhancements and used with specialized TensorFlow software, had replaced previously used central processing units (CPUs) as the dominant means for training large-scale (commercial and academic) machine learning models. Historically, specialized languages, such as Lisp, Prolog, and others, had been used.
Applications:
Main article: Applications of artificial intelligence
AI and machine learning technology is used in most of the essential applications of the 2020s, including:
There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported they had incorporated "AI" in some offerings or processes. A few examples are:
Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.
In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. It then defeated Ke Jie in 2017, who at the time had continuously held the world No. 1 ranking for two years.
Other programs handle imperfect-information games, such as:
DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own.
In the early 2020s, generative AI gained widespread prominence. ChatGPT, based on GPT-3, and other large language models were tried by 14% of American adults.
The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by:
AlphaFold 2 (2020) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.
Ethics:
AI, like any powerful technology, has potential benefits and potential risks. AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else".
However, as the use of AI has become widespread, several unintended consequences and risks have been identified.
Risks and harm, e.g.:
Privacy and copyright:
Further information: Information privacy and Artificial intelligence and copyright
Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.
Technology companies collect a wide range of data from their users, including online activity, geolocation data, video and audio.
For example, in order to build speech recognition algorithms, Amazon and others have recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them.
Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.
AI developers argue that this is the only way to deliver valuable applications, and they have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy.
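As a hedged sketch of one such technique, differential privacy can be illustrated by adding calibrated Laplace noise to an aggregate count before it is released; the records and the privacy parameter epsilon below are invented:

# Differential privacy sketch: release a count with Laplace noise calibrated to
# the query's sensitivity, so that no single individual's record is revealed.
import random

records = [42, 37, 55, 61, 29, 48]               # hypothetical private user values
true_count = sum(1 for v in records if v > 40)   # how many users exceed 40

epsilon = 0.5          # privacy budget: smaller = more noise = more privacy
sensitivity = 1        # adding or removing one person changes the count by at most 1

def laplace_noise(scale):
    # Sample from a Laplace(0, scale) distribution as the difference of two exponentials.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

noisy_count = true_count + laplace_noise(sensitivity / epsilon)
print(true_count, round(noisy_count, 2))         # only the noisy value would be released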
Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness -- Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'".
Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under a rationale of "fair use". Experts disagree about how well, and under what circumstances, this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".
In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.
Misinformation:
See also: YouTube § Moderation and offensive content
YouTube, Facebook and others use recommender systems to guide users to more content.
These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it.
Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.
The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem.
In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.
This technology has been widely distributed at minimal cost. Geoffrey Hinton (who was an instrumental developer of these tools) expressed his concerns about AI disinformation. He quit his job at Google to freely criticize the companies developing AI.
Algorithmic bias and fairness:
Main articles:
Machine learning applications will be biased if they learn from biased data. The developers may not be aware that the bias exists. Bias can be introduced by the way training data is selected and by the way a model is deployed.
If a biased algorithm is used to make decisions that can seriously harm people (as it can in the following) then the algorithm may cause discrimination:
Fairness in machine learning is the study of how to prevent the harm caused by algorithmic bias. It has become a serious area of academic study within AI. Researchers have discovered it is not always possible to define "fairness" in a way that satisfies all stakeholders.
On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people, a problem called "sample size disparity".
Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.
COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants.
Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different -- the system consistently overestimated the chance that a black defendant would re-offend and underestimated the chance that a white defendant would re-offend.
In 2017, several researchers showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.
A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender".
Moritz Hardt said “the most robust fact in this research area is that fairness through blindness doesn't work.”
Criticism of COMPAS highlighted a deeper problem with the misuse of AI. Machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future.
Unfortunately, if an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist. Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is necessarily descriptive rather than prescriptive.
Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.
At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that, until AI and robotics systems can be demonstrated to be free of bias mistakes, they are unsafe and the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.
Lack of transparency:
See also:
Most modern AI applications cannot explain how they have reached a decision. The large number of relationships between inputs and outputs in deep neural networks, and the resulting complexity, make it difficult for even an expert to explain how they produced their outputs, making them a black box.
There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, Justin Ko and Roberto Novoa developed a system that could identify skin diseases better than medical professionals; however, it classified any image with a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale.
A more dangerous example was discovered by Rich Caruana in 2015: a machine learning system that accurately predicted risk of death classified a patient who was over 65 and had asthma and difficulty breathing as "low risk". Further research showed that in high-risk cases like this, the hospital would allocate more resources and save the patient's life, decreasing the risk measured by the program.
Mistakes like these become obvious when we know how the program has reached a decision. Without an explanation, these problems may not be discovered until after they have caused harm.
A second issue is that people who have been harmed by an algorithm's decision have a right to an explanation. Doctors, for example, are required to clearly and completely explain the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists. Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.
DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.
There are several potential solutions to the transparency problem. Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned.
Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network have learned and produce output that can suggest what the network is learning.
Supersparse linear integer models use learning to identify the most important features, rather than the classification. Simple addition of these features can then make the classification (i.e. learning is used to create a scoring system classifier, which is transparent).
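A hedged sketch of such a transparent scoring-system classifier, with invented features, integer points and threshold:

# A transparent scoring-system classifier: a few integer points per feature,
# added up and compared to a threshold. Features, points and threshold are invented.
points = {
    "age_over_60": 2,
    "prior_event": 3,
    "smoker": 1,
}
threshold = 4                    # score >= threshold -> predict "high risk"

def predict(patient):
    score = sum(points[f] for f, present in patient.items() if present)
    return ("high risk" if score >= threshold else "low risk"), score

print(predict({"age_over_60": True, "prior_event": True, "smoker": False}))    # ('high risk', 5)
print(predict({"age_over_60": False, "prior_event": False, "smoker": True}))   # ('low risk', 1)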
Bad actors and weaponized AI:
Main articles:
A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. By 2015, over fifty countries were reported to be researching battlefield robots. These weapons are considered especially dangerous for several reasons: if they kill an innocent person it is not clear who should be held accountable, it is unlikely they will reliably choose targets, and, if produced at scale, they are potentially weapons of mass destruction.
In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed.
AI provides a number of tools that are particularly useful for authoritarian governments:
Terrorists, criminals and rogue states can use weaponized AI such as advanced digital warfare and lethal autonomous weapons.
Machine-learning AI is also able to design tens of thousands of toxic molecules in a matter of hours.
Technological unemployment:
Main articles:
From the early days of the development of artificial intelligence there have been arguments, for example those put forward by Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.
Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.
In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed.
Risk estimates vary; for example, in the 2010s Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk".
The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology (rather than social policy) creates unemployment (as opposed to redundancies).
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".
Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.
In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.
Existential risk:
Main article: Existential risk from artificial general intelligence
It has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as the physicist Stephen Hawking put it, "spell the end of the human race".
This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character. These sci-fi scenarios are misleading in several ways.
First, AI does not require human-like "sentience" to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them.
Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager). Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."
In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".
Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are made of language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.
The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. Personalities such as Stephen Hawking, Bill Gates, Elon Musk have expressed concern about existential risk from AI.
In the early 2010s, experts argued that the risks were too distant in the future to warrant research, or that humans would be valuable from the perspective of a superintelligent machine.
However, after 2016, the study of current and future risks and possible solutions became a serious area of research. In 2023, AI pioneers including Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, and Sam Altman issued the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".
Ethical machines and alignment:
Main articles:
Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.
Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. The field of machine ethics is also called computational morality, and was founded at an AAAI symposium in 2005.
Other approaches include Wendell Wallach's "artificial moral agents" and Stuart J. Russell's three principles for developing provably beneficial machines.
Regulation:
Main articles:
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.
According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.
Most EU member states had released national AI strategies, as had the following:
Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.
The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.
Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.
In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks". A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.
In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".
History:
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate both mathematical deduction and formal reasoning, which is known as the Church–Turing thesis.
This, along with concurrent discoveries in cybernetics and information theory, led researchers to consider the possibility of building an "electronic brain". The first paper later recognized as "AI" was McCulloch and Pitts' 1943 design for Turing-complete "artificial neurons".
The field of AI research was founded at a workshop at Dartmouth College in 1956. The attendees became the leaders of AI research in the 1960s. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.
By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do".
Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".
They had, however, underestimated the difficulty of the problem. Both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects.
Minsky's and Papert's book Perceptrons was understood as proving that the artificial neural network approach would never be useful for solving real-world tasks, thus discrediting the approach altogether. The "AI winter", a period when obtaining funding for AI projects was difficult, followed.
In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.
However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.
Many researchers began to doubt that the current practices would be able to imitate all the processes of human cognition, especially:
A number of researchers began to look into "sub-symbolic" approaches:
Robotics researchers, such as Rodney Brooks, rejected "representation" in general and focussed directly on engineering machines that move and survive.
Judea Pearl, Lotfi Zadeh and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic. But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others.
In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics).
By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence".
Several academic researchers became concerned that AI was no longer pursuing the original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.
Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field. For many specific tasks, other methods were abandoned. Deep learning's success was based on both hardware improvements such as:
Deep learning's success led to an enormous increase in interest and funding in AI. The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019, and WIPO reported that AI was the most prolific emerging technology in terms of the number of patent applications and granted patents.
According to 'AI Impacts', about $50 billion annually was invested in "AI" around 2022 in the US alone and about 20% of new US Computer Science PhD graduates have specialized in "AI"; about 800,000 "AI"-related US job openings existed in 2022.
In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.
Philosophy:
Main article: Philosophy of artificial intelligence
Defining artificial intelligence:
Main articles:
Alan Turing wrote in 1950, "I propose to consider the question 'Can machines think?'" He advised changing the question from whether a machine "thinks" to "whether or not it is possible for machinery to show intelligent behaviour". He devised the Turing test, which measures the ability of a machine to simulate human conversation.
Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people, but "it is usual to have a polite convention that everyone thinks".
Russell and Norvig agree with Turing that AI must be defined in terms of "acting" and not "thinking". However, they are critical that the test compares machines to people. "Aeronautical engineering texts," they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'"
AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".
McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world." Another AI founder, Marvin Minsky similarly defines it as "the ability to solve hard problems".
These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine—and no other philosophical discussion is required, or may not even be possible.
Another definition has been adopted by Google, a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.
Evaluating approaches to AI:
No established unifying theory or paradigm has guided AI research for most of its history. The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks").
This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.
Symbolic AI and its limits:
Symbolic AI (or "GOFAI") simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult.
Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree.
The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias.
Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision.
The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.
Neat vs. scruffy:
Main article: Neats and scruffies
"Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work.
This issue was actively discussed in the 70s and 80s, but eventually was seen as irrelevant. Modern AI has elements of both.
Soft vs. hard computing:
Main article: Soft computing
Finding a provably correct or optimal solution is intractable for many important problems. Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation.
Soft computing was introduced in the late 80s and most successful AI programs in the 21st century are examples of soft computing with neural networks.
Narrow vs. general AI:
Main article: Artificial general intelligence
AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.
General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The experimental sub-field of artificial general intelligence studies this area exclusively.
Machine consciousness, sentience and mind:
Main articles:
The philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior.
Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence.
Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on." However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.
Consciousness:
Main articles:
David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness. The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion).
Human information processing is easy to explain; human subjective experience, however, is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.
Computationalism and functionalism:
Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem.
This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.
Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle counters this assertion with his Chinese room argument, which attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason to suppose it also has a mind.
Robot rights:
Main article: Robot rights
If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel), and if so it could also suffer; it has been argued that this could entitle it to certain rights. Any hypothetical robot rights would lie on a spectrum with animal rights and human rights.
This issue has been considered in fiction for centuries, and is now being considered by, for example, California's Institute for the Future; however, critics argue that the discussion is premature.
Future:
Superintelligence and the singularity:
A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.
If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".
However, most technologies do not improve exponentially indefinitely, but rather follow an S-curve, slowing when they reach the physical limits of what the technology can do.
Consider, for example, transportation: speed increased exponentially from 1830 to 1970, but then the trend abruptly stopped when it reached physical limits.
Transhumanism:
Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger.
Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998.
In fiction:
Main article: Artificial intelligence in fiction
Thought-capable artificial beings have appeared as storytelling devices since antiquity, and have been a persistent theme in science fiction.
A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters.
This also includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999).
In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.
Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably the "Multivac" series about a super-intelligent computer of the same name. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.
Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.
"Artificial intelligence (AI) is on the rise both in business and in the world in general. How beneficial is it really to your business in the long run? Sure, it can take over those time-consuming and mundane tasks that are bogging your employees down, but at what cost?
With AI spending expected to reach $46 billion by 2020, according to an IDC report, there’s no sign of the technology slowing down. Adding AI to your business may be the next step as you look for ways to advance your operations and increase your performance.
To understand how AI will impact your business going forward, 14 members of Forbes Technology Council weigh in on the concerns about artificial intelligence and provide reasons why AI is either a detriment or a benefit to society. Here is what they had to say:
1. Enhances Efficiency And Throughput
Concerns about disruptive technologies are common. A recent example is automobiles -- it took years to develop regulation around the industry to make it safe. That said, AI today is a huge benefit to society because it enhances our efficiency and throughput, while creating new opportunities for revenue generation, cost savings and job creation. - Anand Sampat, Datmo
2. Frees Up Humans To Do What They Do Best
Humans are not best served by doing tedious tasks. Machines can do that, so this is where AI can provide a true benefit. This allows us to do the more interpersonal and creative aspects of work. - Chalmers Brown, Due
3. Adds Jobs, Strengthens The Economy
We all see the headlines: Robots and AI will destroy jobs. This is fiction rather than fact. AI encourages a gradual evolution in the job market which, with the right preparation, will be positive. People will still work, but they’ll work better with the help of AI. The unparalleled combination of human and machine will become the new normal in the workforce of the future. - Matthew Lieberman, PwC
4. Leads To Loss Of Control
If machines do get smarter than humans, there could be a loss of control that can be a detriment. Whether that happens or whether certain controls can be put in place remains to be seen. - Muhammed Othman, Calendar
5. Enhances Our Lifestyle
The rise of AI in our society will enhance our lifestyle and create more efficient businesses. Some of the mundane tasks like answering emails and data entry will be done by intelligent assistants. Smart homes will also reduce energy usage and provide better security, marketing will be more targeted and we will get better health care thanks to better diagnoses. - Naresh Soni, Tsunami ARVR
6. Supervises Learning For Telemedicine
AI is a technology that can be used for both good and nefarious purposes, so there is a need to be vigilant. The latest technologies seem typically applied towards the wealthiest among us, but AI has the potential to extend knowledge and understanding to a broader population -- e.g. image-based AI diagnoses of medical conditions could allow for a more comprehensive deployment of telemedicine. - Harald Quintus-Bosz, Cooper Perkins, Inc.
7. Creates Unintended And Unforeseen Consequences
While fears about killer robots grab headlines, unintended and unforeseen consequences of artificial intelligence need attention today, as we're already living with them. For example, it is believed that Facebook's newsfeed algorithm influenced an election outcome that affected geopolitics. How can we better anticipate and address such possible outcomes in future? - Simon Smith, BenchSci
8. Increases Automation
There will be economic consequences to the widespread adoption of machine learning and other AI technologies. AI is capable of performing tasks that would once have required intensive human labor or not have been possible at all. The major benefit for business will be a reduction in operational costs brought about by AI automation -- whether that’s a net positive for society remains to be seen. - Vik Patel, Nexcess
9. Elevates The Condition Of Mankind
The ability for technology to solve more problems, answer more questions and innovate with a number of inputs beyond the capacity of the human brain can certainly be used for good or ill. If history is any guide, the improvement of technology tends to elevate the condition of mankind and allow us to focus on higher order functions and an improved quality of life. - Wade Burgess, Shiftgig
10. Solves Complex Social Problems
Much of the fear with AI is due to the misunderstanding of what it is and how it should be applied. Although AI has promise for solving complex social problems, there are ethical issues and biases we must still explore. We are just beginning to understand how AI can be applied to meaningful problems. As our use of AI matures, we will find it to be a clear benefit in our lives. - Mark Benson, Exosite, LLC
11. Improves Demand Side Management
AI is a benefit to society because machines can become smarter over time and increase efficiencies. Additionally, computers are not susceptible to the same probability of errors as human beings are. From an energy standpoint, AI can be used to analyze and research historical data to determine how to most efficiently distribute energy loads from a grid perspective. - Greg Sarich, CLEAResult
12. Benefits Multiple Industries
Society has and will continue to benefit from AI based on character/facial recognition, digital content analysis and accuracy in identifying patterns, whether they are used for health sciences, academic research or technology applications. AI risks are real if we don't understand the quality of the incoming data and set AI rules which are making granular trade-off decisions at increasing computing speeds. - Mark Butler, Qualys.com
13. Absolves Humans Of All Responsibility
It is one thing to use machine learning to predict and help solve problems; it is quite another to use these systems to purposely control and act in ways that will make people unnecessary.
When machine intelligence exceeds our ability to understand it, or it becomes superior intelligence, we should take care to not blindly follow its recommendation and absolve ourselves of all responsibility. - Chris Kirby, Voices.com
14. Extends And Expands Creativity
AI intelligence is the biggest opportunity of our lifetime to extend and expand human creativity and ingenuity. The two main concerns that the fear-mongers raise are around AI leading to job losses in the society and AI going rogue and taking control of the human race.
I believe that both these concerns raised by critics are moot or solvable. - Ganesh Padmanabhan, CognitiveScale, Inc
[End of Article #1]
___________________________________________________________________________
These are the jobs most at risk of automation according to Oxford University: Is yours one of them? (The Telegraph September 27, 2017)
In his speech at the 2017 Labour Party conference, Jeremy Corbyn outlined his desire to "urgently... face the challenge of automation", which he called "a threat in the hands of the greedy".
Whether or not Corbyn is planning a potentially controversial 'robot tax' wasn't clear from his speech, but addressing the forward march of automation is a savvy move designed to appeal to voters in low-paying, routine work.
Click here for rest of Article.
___________________________________________________________________________
Artificial Intelligence (AI) by Wikipedia 10/29/2023:
In a Nutshell:
Part of a series on Artificial intelligence
Major goals:
- Artificial general intelligence
- Planning
- Computer vision
- General game playing
- Knowledge reasoning
- Machine learning
- Natural language processing
- Robotics
- AI safety
Approaches:
- Symbolic
- Deep learning
- Bayesian networks
- Evolutionary algorithms
- Situated approach
- Hybrid intelligent systems
- Systems integration
AI Expanded follows:
Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. "AI" may also refer to the machines themselves.
AI technology is widely used throughout industry, government and science. Some high-profile applications are:
- advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix),
- understanding human speech (such as Siri and Alexa),
- self-driving cars (e.g., Waymo),
- generative or creative tools (ChatGPT and AI art),
- and competing at the highest level in strategic games (such as chess and Go).
Artificial intelligence was founded as an academic discipline in 1956. The field went through multiple cycles of optimism followed by disappointment and loss of funding, but after 2012, when deep learning surpassed all previous AI techniques, there was a vast increase in funding and interest.
The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include:
- reasoning,
- knowledge representation,
- planning,
- learning,
- natural language processing,
- perception,
- and support for robotics.
General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including:
- search and mathematical optimization,
- formal logic,
- artificial neural networks,
- and methods based on statistics, probability, and economics.
AI also draws upon psychology, linguistics, philosophy, neuroscience and many other fields.
Goals:
The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research:
Reasoning, problem-solving:
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": they become exponentially slower as the problems grow larger. Even humans rarely use the step-by-step deduction that early AI research could model; they solve most of their problems using fast, intuitive judgments. Accurate and efficient reasoning is an unsolved problem.
Knowledge representation:
Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in:
- content-based indexing and retrieval,
- scene interpretation,
- clinical decision support,
- knowledge discovery (mining "interesting" and actionable inferences from large databases),
- and other areas.
A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge.
Knowledge bases need to represent things such as:
- objects,
- properties,
- categories and relations between objects,
- situations,
- events,
- states and time;
- causes and effects;
- knowledge about knowledge (what we know about what other people know);
- default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing);
- and many other aspects and domains of knowledge.
Among the most difficult problems in KR are:
- the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous);
- and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).
Knowledge acquisition is the difficult problem of obtaining knowledge for AI applications. Modern AI gathers knowledge by "scraping" the internet (including Wikipedia). The knowledge itself was collected by the volunteers and professionals who published the information (who may or may not have agreed to provide their work to AI companies).
This "crowd sourced" technique does not guarantee that the knowledge is correct or reliable. The knowledge of large language models (such as ChatGPT) is highly unreliable -- such models generate misinformation and falsehoods (known as "hallucinations"). Providing accurate knowledge for these modern AI applications is an unsolved problem.
Planning and decision making:
An "agent" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen:
- In automated planning, the agent has a specific goal.
- In automated decision making, the agent has preferences – there are some situations it would prefer to be in, and some situations it is trying to avoid.
The decision making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.
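As a concrete illustration of the expected-utility calculation just described, here is a minimal Python sketch; the actions, outcome probabilities and utility values are invented for illustration:

```python
# Minimal illustration of choosing the action with maximum expected utility.
# The actions, outcomes, probabilities and utilities below are invented.

actions = {
    "carry_umbrella": [   # (probability of outcome, utility of outcome)
        (0.3, 60),        # it rains, but we stay dry
        (0.7, 80),        # no rain, slight inconvenience of carrying it
    ],
    "leave_umbrella": [
        (0.3, 10),        # it rains and we get soaked
        (0.7, 100),       # no rain, no inconvenience
    ],
}

def expected_utility(outcomes):
    """Utility of each outcome weighted by the probability it occurs."""
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
for action, outcomes in actions.items():
    print(action, expected_utility(outcomes))
print("chosen action:", best_action)
```

With these made-up numbers, carrying the umbrella has expected utility 74 versus 73 for leaving it, so the agent carries it.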
In classical planning, the agent knows exactly what the effect of any action will be. In most real-world problems, however, the agent may not be certain about the situation it is in (the situation is "unknown" or "unobservable") and may not know for certain what will happen after each possible action (the outcome is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.
In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning) or the agent can seek information to improve its preferences. Information value theory can be used to weigh the value of exploratory or experimental actions. The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain what the outcome will be.
A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way, and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state.
The policy could be calculated (e.g. by iteration), be heuristic, or it can be learned.
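As a sketch of how a policy "could be calculated by iteration", the toy value-iteration loop below repeatedly backs up expected utilities until they stabilize; the states, transitions, rewards and discount factor are all invented for illustration:

```python
# Toy value iteration on an invented three-state Markov decision process.
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "cool": {"slow": [(1.0, "cool", 1)],
             "fast": [(0.7, "cool", 2), (0.3, "hot", 2)]},
    "hot":  {"slow": [(1.0, "cool", 1)],
             "fast": [(1.0, "off", -10)]},
    "off":  {},                      # terminal state with no actions
}
gamma = 0.9                          # discount factor for future rewards

values = {s: 0.0 for s in transitions}
for _ in range(100):                 # repeat backups until approximately converged
    values = {
        s: max((sum(p * (r + gamma * values[s2]) for p, s2, r in outs)
                for outs in acts.values()), default=0.0)
        for s, acts in transitions.items()
    }

# The policy picks, in each state, the action with the best backed-up utility.
policy = {
    s: max(acts, key=lambda a: sum(p * (r + gamma * values[s2])
                                   for p, s2, r in acts[a]))
    for s, acts in transitions.items() if acts
}
print(values)
print(policy)
```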
Game theory describes rational behavior of multiple interacting agents, and is used in AI programs that make decisions that involve other agents.
Learning:
Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning.
There are several kinds of machine learning:
- Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance.
- Supervised learning requires a human to label the input data first, and comes in two main varieties:
- classification (where the program must learn to predict what category the input belongs in)
- and regression (where the program must deduce a numeric function based on numeric input).
- In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good".
- Transfer learning is when the knowledge gained from one problem is applied to a new problem.
- Deep learning uses artificial neural networks for all of these types of learning.
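To make the reinforcement-learning entry in the list above concrete, here is a minimal epsilon-greedy "bandit" sketch; the three actions and their reward probabilities are invented, and real reinforcement-learning systems are far more elaborate:

```python
import random

# Minimal reinforcement-learning sketch: an epsilon-greedy agent learns which
# of three actions tends to give the highest reward. Reward odds are invented.
reward_prob = {"a": 0.2, "b": 0.5, "c": 0.8}   # hidden from the agent
estimates = {action: 0.0 for action in reward_prob}
counts = {action: 0 for action in reward_prob}
epsilon = 0.1                                  # fraction of random exploration

for step in range(5000):
    if random.random() < epsilon:
        action = random.choice(list(reward_prob))      # explore
    else:
        action = max(estimates, key=estimates.get)     # exploit best guess so far
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    # incremental average of observed rewards for this action
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # the estimate for "c" should end up close to 0.8
```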
Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.
Natural language processing:
Natural language processing (NLP) allows programs to read, write and communicate in human languages such as English.
Specific problems include:
- speech recognition,
- speech synthesis,
- machine translation,
- information extraction,
- information retrieval
- and question answering.
Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem).
Modern deep learning techniques for NLP include word embedding (how often one word appears near another), transformers (which finds patterns in text), and others.
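As a toy illustration of the co-occurrence idea behind word embeddings (counting how often one word appears near another), consider the sketch below; the corpus and window size are invented, and real systems train on vastly larger text collections and apply dimensionality reduction:

```python
from collections import defaultdict

# Toy co-occurrence "embedding": each word is represented by counts of the
# words that appear within a +/-2 word window around it. Corpus is invented.
corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2

vectors = defaultdict(lambda: defaultdict(int))
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            vectors[word][corpus[j]] += 1

# Words used in similar contexts ("cat" and "dog") end up with similar vectors.
print(dict(vectors["cat"]))
print(dict(vectors["dog"]))
```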
In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text, and by 2023 these models were able to get human-level scores on the bar exam, SAT, GRE, and many other real-world applications.
Perception:
Machine perception is the ability to use input from sensors to deduce aspects of the world. Such sensors include:
- cameras,
- microphones,
- wireless signals,
- active lidar,
- sonar,
- radar,
- and tactile sensors.
Computer vision is the ability to analyze visual input. The broader field of machine perception includes:
- speech recognition,
- image classification,
- facial recognition,
- object recognition,
- and robotic perception.
Robotics uses AI.
Social intelligence:
Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process or simulate human feeling, emotion and mood.
For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this makes them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitates human–computer interaction.
However, this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are. Moderate successes related to affective computing include:
- textual sentiment analysis
- and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject.
General intelligence:
A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.
Tools:
AI research uses a wide variety of tools to accomplish the goals above.
Search and optimization:
AI can solve many problems by intelligently searching through many possible solutions. There are two very different kinds of search used in AI:
State space search searches through a tree of possible states to try to find a goal state. For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.
Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. "Heuristics" or "rules of thumb" can help to prioritize choices that are more likely to reach a goal.
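The following minimal sketch shows heuristic (greedy best-first) state-space search; the graph and the heuristic estimates are invented for illustration:

```python
import heapq

# Greedy best-first search: always expand the state whose heuristic says it is
# closest to the goal. The graph and heuristic values below are invented.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
         "D": ["G"], "E": ["G"], "G": []}
heuristic = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 1, "G": 0}  # estimated distance to G

def best_first(start, goal):
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in graph[state]:
            heapq.heappush(frontier, (heuristic[nxt], nxt, path + [nxt]))
    return None

print(best_first("A", "G"))   # -> ['A', 'C', 'D', 'G']
```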
Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and counter-moves, looking for a winning position.
Local search uses mathematical optimization to find a numeric solution to a problem. It begins with some form of a guess and then refines the guess incrementally until no more refinements can be made.
These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape and then, by jumps or steps, keep moving the guess uphill until we reach the top. A closely related method, gradient descent, uses the local slope of the landscape to decide which direction to step; when that slope is estimated from random samples of the training data, the process is called stochastic gradient descent.
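A minimal gradient-descent sketch, assuming an invented one-dimensional loss function and step size:

```python
# Minimal gradient descent: repeatedly step against the slope of a loss
# function. The loss f(x) = (x - 3)^2 and the step size are illustrative only.
def loss(x):
    return (x - 3) ** 2

def gradient(x):
    return 2 * (x - 3)        # derivative of the loss

x = -10.0                     # start from an arbitrary guess
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * gradient(x)

print(round(x, 4), round(loss(x), 6))   # x approaches 3, the minimum of the loss
```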
Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses).
Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).
Neural networks and statistical classifiers (discussed below) also use a form of local search, where the "landscape" to be searched is formed by learning.
Logic:
Formal logic is used for reasoning and knowledge representation. Formal logic comes in two main forms:
- propositional logic (which operates on statements that are true or false and uses logical connectives such as "and", "or", "not" and "implies")
- and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as "Every X is a Y" and "There are some Xs that are Ys").
Logical inference (or deduction) is the process of proving a new statement (conclusion) from other statements that are already known to be true (the premises). A logical knowledge base also handles queries and assertions as a special case of inference. An inference rule describes what is a valid step in a proof. The most general inference rule is resolution.
Inference can be reduced to performing a search to find a path that leads from premises to conclusions, where each step is the application of an inference rule. Inference performed this way is intractable except for short proofs in restricted domains. No efficient, powerful and general method has been discovered.
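General resolution is beyond a short example, but the flavour of inference as repeated rule application can be shown with a much simpler special case: forward chaining over "if all premises then conclusion" rules. The rules and facts below are invented:

```python
# Forward chaining over simple "if all premises hold then add conclusion" rules.
# Rules and facts are invented for illustration.
rules = [
    ({"rainy", "outside"}, "wet"),
    ({"wet"}, "cold"),
    ({"cold", "tired"}, "grumpy"),
]
facts = {"rainy", "outside", "tired"}

changed = True
while changed:                       # keep applying rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # now also contains "wet", "cold" and "grumpy"
```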
Fuzzy logic assigns a "degree of truth" between 0 and 1 and handles uncertainty and probabilistic situations. Non-monotonic logics are designed to handle default reasoning. Other specialized versions of logic have been developed to describe many complex domains (see knowledge representation above).
Probabilistic methods for uncertain reasoning:
Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.
Bayesian networks are a very general tool that can be used for many problems, including:
- reasoning (using the Bayesian inference algorithm),
- learning (using the expectation-maximization algorithm),
- planning (using decision networks)
- and perception (using dynamic Bayesian networks).
Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).
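A single application of Bayes' rule, the core of Bayesian inference, can be shown with invented numbers: updating the probability of a condition after a positive test result.

```python
# Bayes' rule with invented numbers: P(condition | positive test).
p_condition = 0.01            # prior probability of the condition
p_pos_given_cond = 0.95       # test sensitivity
p_pos_given_no_cond = 0.05    # false positive rate

p_positive = (p_pos_given_cond * p_condition
              + p_pos_given_no_cond * (1 - p_condition))
posterior = p_pos_given_cond * p_condition / p_positive
print(round(posterior, 3))    # about 0.161: still unlikely despite the positive test
```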
Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design.
Classifiers and statistical learning methods:
The simplest AI applications can be divided into two types: classifiers (e.g. "if shiny then diamond"), on one hand, and controllers (e.g. "if diamond then pick up"), on the other hand.
Classifiers are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.
There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm.
The k-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.
The naive Bayes classifier is reportedly the "most widely used learner" at Google, due in part to its scalability. Neural networks are also used as classifiers.
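As a minimal illustration of a classifier, here is a k-nearest-neighbor sketch in plain Python; the observations, labels and choice of k are invented:

```python
import math
from collections import Counter

# Tiny k-nearest-neighbor classifier. Observations and labels are invented:
# each observation is (height_cm, weight_kg) and the class is "cat" or "dog".
training_set = [
    ((25, 4), "cat"), ((30, 5), "cat"), ((28, 4.5), "cat"),
    ((55, 20), "dog"), ((60, 25), "dog"), ((50, 18), "dog"),
]

def classify(observation, k=3):
    # Sort the labeled examples by distance to the new observation.
    distances = sorted(
        (math.dist(observation, features), label)
        for features, label in training_set
    )
    nearest_labels = [label for _, label in distances[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

print(classify((27, 5)))    # "cat"
print(classify((58, 22)))   # "dog"
```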
Artificial neural networks:
Artificial neural networks were inspired by the design of the human brain: a simple "neuron" N accepts input from other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate.
In practice, the input "neurons" are a list of numbers, the "weights" are a matrix, and the next layer is the dot product (i.e., a list of weighted sums) scaled by an increasing function, such as the logistic function.
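That description (inputs as a list of numbers, weights as a matrix, a dot product passed through the logistic function) translates almost directly into code; the inputs, weights and biases below are arbitrary:

```python
import math

# One layer of an artificial neural network: a dot product of inputs and
# weights, passed through the logistic (sigmoid) function. Values are arbitrary.
def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        weighted_sum = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(logistic(weighted_sum))
    return outputs

inputs = [0.5, -1.2, 3.0]
weights = [[0.1, 0.4, -0.2], [-0.3, 0.8, 0.05]]   # two neurons in the next layer
biases = [0.0, 0.1]
print(layer(inputs, weights, biases))
```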
"The resemblance to real neural cells and structures is superficial", according to Russell and Norvig.
Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. Neural networks learn to model complex relationships between inputs and outputs and find patterns in data.
In theory, a neural network can learn any function.
In feedforward neural networks the signal passes in only one direction. Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events.
Long short-term memory is the most successful network architecture for recurrent networks.
Perceptrons use only a single layer of neurons, whereas deep learning uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other – this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.
Deep learning:
Deep learning uses several layers of neurons between the network's inputs and outputs.
The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces.
Deep learning has drastically improved the performance of programs in many important subfields of artificial intelligence, including:
- computer vision,
- speech recognition,
- image classification
- and others.
The reason that deep learning performs so well in so many applications is not known as of 2023.
The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s) but because of two factors:
- the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs)
- and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet.
Specialized hardware and software:
In the late 2010s, graphics processing units (GPUs), increasingly designed with AI-specific enhancements and used with specialized software such as TensorFlow, replaced the previously dominant central processing units (CPUs) as the primary means of training large-scale (commercial and academic) machine learning models. Historically, specialized languages, such as Lisp, Prolog, and others, had been used.
Applications:
Main article: Applications of artificial intelligence
AI and machine learning technology is used in most of the essential applications of the 2020s, including:
- search engines (such as Google Search),
- targeting online advertisements,
- recommendation systems,
- driving internet traffic,
- targeted advertising,
- virtual assistants,
- autonomous vehicles (including:
- drones,
- ADAS
- and self-driving cars),
- automatic language translation,
- facial recognition,
- and image labeling.
There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported they had incorporated "AI" in some offerings or processes. A few examples are:
- energy storage,
- medical diagnosis,
- military logistics,
- applications that predict the result of judicial decisions,
- foreign policy,
- or supply chain management.
Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.
In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then it defeated Ke Jie in 2017, who at the time continuously held the world No. 1 ranking for two years.
Other programs handle imperfect-information games, such as the poker-playing program Pluribus.
DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own.
In the early 2020s, generative AI gained widespread prominence. ChatGPT, based on GPT-3, and other large language models were tried by 14% of American adults.
The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by:
- a fake photo of Pope Francis wearing a white puffer coat,
- the fictional arrest of Donald Trump,
- and a hoax of an attack on the Pentagon,
- as well as the usage in professional creative arts.
AlphaFold 2 (2020) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.
Ethics:
AI, like any powerful technology, has potential benefits and potential risks. AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else".
However, as the use of AI has become widespread, several unintended consequences and risks have been identified.
Risks and harms:
Privacy and copyright:
Further information: Information privacy and Artificial intelligence and copyright
Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.
Technology companies collect a wide range of data from their users, including online activity, geolocation data, video and audio.
For example, in order to build speech recognition algorithms, Amazon and others have recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them.
Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.
AI developers argue that this is the only way to deliver valuable applications, and they have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy.
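Of the techniques listed, differential privacy is the simplest to sketch: noise calibrated to a privacy budget is added to an aggregate statistic so that no individual's contribution can be inferred from the released answer. The count, sensitivity and epsilon values below are invented:

```python
import random

# Minimal differential-privacy sketch (Laplace mechanism): report a count with
# noise calibrated to the privacy budget epsilon. All numbers are invented.
def laplace_noise(scale):
    # The difference of two independent exponential samples is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

true_count = 1234          # e.g. how many users performed some action
sensitivity = 1            # one person can change the count by at most 1
epsilon = 0.5              # smaller epsilon = stronger privacy, more noise

noisy_count = true_count + laplace_noise(sensitivity / epsilon)
print(round(noisy_count, 1))
```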
Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness -- Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'".
Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under a rationale of "fair use". Experts disagree about how well, and under what circumstances, this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".
In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.
Misinformation:
See also: YouTube § Moderation and offensive content
YouTube, Facebook and others use recommender systems to guide users to more content.
These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it.
Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.
The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem.
In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.
This technology has been widely distributed at minimal cost. Geoffrey Hinton (who was an instrumental developer of these tools) expressed his concerns about AI disinformation. He quit his job at Google to freely criticize the companies developing AI.
Algorithmic bias and fairness:
Machine learning applications will be biased if they learn from biased data. The developers may not be aware that the bias exists. Bias can be introduced by the way training data is selected and by the way a model is deployed.
If a biased algorithm is used to make decisions that can seriously harm people, then the algorithm may cause discrimination in fields such as:
- medicine,
- finance,
- recruitment,
- housing
- or policing.
Fairness in machine learning is the study of how to prevent the harm caused by algorithmic bias. It has become a serious area of academic study within AI. Researchers have discovered it is not always possible to define "fairness" in a way that satisfies all stakeholders.
On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people, a problem called "sample size disparity".
Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.
COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants.
Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different -- the system consistently overestimated the chance that a black person would re-offend and would underestimate the chance that a white person would not re-offend.
In 2017, several researchers showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.
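The impossibility result can be illustrated with a small hypothetical example (these numbers are invented and are not the COMPAS data): if two groups have different base rates of re-offense, a score that is equally well calibrated for both groups must produce different error rates.

```python
# Hypothetical confusion matrices for two groups of 100 people each.
# In both groups, 70% of people labeled "high risk" actually re-offend
# (equal calibration), but the base rates of re-offense differ.
groups = {
    # tp = re-offenders flagged, fn = re-offenders missed,
    # fp = non-re-offenders flagged, tn = non-re-offenders cleared
    "group_A": {"tp": 56, "fn": 4, "fp": 24, "tn": 16},   # base rate 60/100
    "group_B": {"tp": 21, "fn": 9, "fp": 9,  "tn": 61},   # base rate 30/100
}

for name, g in groups.items():
    precision = g["tp"] / (g["tp"] + g["fp"])            # calibration of "high risk"
    false_positive_rate = g["fp"] / (g["fp"] + g["tn"])
    print(name, "precision:", round(precision, 2),
          "false positive rate:", round(false_positive_rate, 2))
# Both groups have precision 0.70, yet false positive rates are 0.60 vs 0.13.
```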
A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender".
Moritz Hardt said “the most robust fact in this research area is that fairness through blindness doesn't work.”
Criticism of COMPAS highlighted a deeper problem with the misuse of AI. Machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future.
Unfortunately, if an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist. Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is necessarily descriptive rather than prescriptive.
Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.
At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that, until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.
Lack of transparency:
Most modern AI applications cannot explain how they have reached a decision. The large number of relationships between inputs and outputs in deep neural networks, and the resulting complexity, make it difficult even for an expert to explain how a particular output was produced, which makes such systems a black box.
There have been many cases where a machine learning program passed rigorous tests but nevertheless learned something different from what the programmers intended. For example, Justin Ko and Roberto Novoa developed a system that could identify skin diseases better than medical professionals; however, it classified any image containing a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale.
A more dangerous example was discovered by Rich Caruana in 2015: a machine learning system that accurately predicted risk of death classified a patient who was over 65 and had asthma and difficulty breathing as "low risk". Further research showed that in high-risk cases like this, the hospital would allocate more resources and save the patient's life, decreasing the risk measured by the program.
Mistakes like these become obvious when we know how the program has reached a decision. Without an explanation, these problems may not be discovered until after they have caused harm.
A second issue is that people who have been harmed by an algorithm's decision have a right to an explanation. Doctors, for example, are required to clearly and completely explain the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists. Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.
DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.
There are several potential solutions to the transparency problem. Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned.
Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network have learned and produce output that can suggest what the network is learning.
Supersparse linear integer models use learning to identify the most important features, rather than the classification. Simple addition of these features can then make the classification (i.e. learning is used to create a scoring system classifier, which is transparent).
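The general idea of a transparent scoring-system classifier can be sketched as follows; the features, integer points and threshold are invented, and this is only an illustration of the idea rather than the actual supersparse linear integer model algorithm:

```python
# A transparent scoring-system classifier: add up a few small integer points
# and compare to a threshold. Features, points and threshold are invented.
points = {
    "age_over_60": 2,
    "high_blood_pressure": 3,
    "smoker": 4,
}
threshold = 5

def predict(patient_features):
    score = sum(points[feature] for feature in patient_features if feature in points)
    return score, ("high risk" if score >= threshold else "low risk")

print(predict({"age_over_60", "smoker"}))        # (6, 'high risk')
print(predict({"high_blood_pressure"}))          # (3, 'low risk')
```

Because the points are small integers, a human can check exactly why any individual was scored the way they were.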
Bad actors and weaponized AI:
A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. By 2015, over fifty countries were reported to be researching battlefield robots. These weapons are considered especially dangerous for several reasons: if they kill an innocent person it is not clear who should be held accountable, it is unlikely they will reliably choose targets, and, if produced at scale, they are potentially weapons of mass destruction.
In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed.
AI provides a number of tools that are particularly useful for authoritarian governments:
- smart spyware, face recognition and voice recognition allow widespread surveillance; such surveillance allows machine learning to classify potential enemies of the state and can prevent them from hiding;
- recommendation systems can precisely target propaganda and misinformation for maximum effect; deepfakes and generative AI aid in producing misinformation;
- advanced AI can make authoritarian centralized decision making more competitive with liberal and decentralized systems such as markets.
Terrorists, criminals and rogue states can use weaponized AI such as advanced digital warfare and lethal autonomous weapons.
Machine-learning AI is also able to design tens of thousands of toxic molecules in a matter of hours.
Technological unemployment:
From the early days of the development of artificial intelligence there have been arguments, for example those put forward by Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.
Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.
In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed.
Risk estimates vary; for example, in the 2010s Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk".
The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology (rather than social policy) creates unemployment (as opposed to redundancies).
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".
Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.
In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.
Existential risk:
Main article: Existential risk from artificial general intelligence
It has been argued that AI will become so powerful that humanity may irreversibly lose control of it. This could, as the physicist Stephen Hawking put it, "spell the end of the human race".
This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character. These sci-fi scenarios are misleading in several ways.
First, AI does not require human-like "sentience" to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them.
Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager). Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."
In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".
Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are made of language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.
The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. Personalities such as Stephen Hawking, Bill Gates, and Elon Musk have expressed concern about existential risk from AI.
In the early 2010s, experts argued that the risks were too distant in the future to warrant research, or that humans would be valuable from the perspective of a superintelligent machine.
However, after 2016, the study of current and future risks and possible solutions became a serious area of research. In 2023, AI pioneers including Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, and Sam Altman issued the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".
Ethical machines and alignment:
Main articles:
- Machine ethics,
- AI safety,
- Friendly artificial intelligence,
- Artificial moral agents,
- Human Compatible
Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.
Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. The field of machine ethics is also called computational morality, and was founded at an AAAI symposium in 2005.
Other approaches include Wendell Wallach's "artificial moral agents" and Stuart J. Russell's three principles for developing provably beneficial machines.
Regulation:
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.
According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.
Most EU member states had released national AI strategies, as had the following:
- Canada,
- China,
- India,
- Japan,
- Mauritius,
- the Russian Federation,
- Saudi Arabia,
- United Arab Emirates,
- US
- and Vietnam.
Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.
The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.
Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.
In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks". A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.
In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".
History:
- Main article: History of artificial intelligence
- For a chronological guide, see Timeline of artificial intelligence.
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate both mathematical deduction and formal reasoning, which is known as the Church–Turing thesis.
This, along with concurrent discoveries in cybernetics and information theory, led researchers to consider the possibility of building an "electronic brain". The first paper later recognized as "AI" was McCullouch and Pitts' 1943 design for Turing-complete "artificial neurons".
The field of AI research was founded at a workshop at Dartmouth College in 1956. The attendees became the leaders of AI research in the 1960s. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.
By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do".
Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".
They had, however, underestimated the difficulty of the problem. Both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects.
Minsky's and Papert's book Perceptrons was understood as proving that the artificial neural network approach would never be useful for solving real-world tasks, thus discrediting the approach altogether. The "AI winter", a period when obtaining funding for AI projects was difficult, followed.
In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.
However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.
Many researchers began to doubt that the current practices would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition.
A number of researchers began to look into "sub-symbolic" approaches:
Robotics researchers, such as Rodney Brooks, rejected "representation" in general and focussed directly on engineering machines that move and survive.
Judea Pearl, Lotfi Zadeh and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic. But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others.
In 1990, Yann LeCun demonstrated that convolutional neural networks could recognize handwritten digits, the first of many successful applications of neural networks.
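The two building blocks those networks rely on, sliding a small filter across the image (convolution) and then pooling the responses, can be sketched in a few lines of NumPy. The filter below is hand-chosen for illustration rather than learned, and the tiny 6x6 "digit" is invented; this is a sketch of the operations, not of LeCun's actual system.

    import numpy as np

    def convolve2d(image, kernel):
        """Valid 2-D convolution (really cross-correlation, as in most CNNs)."""
        kh, kw = kernel.shape
        ih, iw = image.shape
        out = np.zeros((ih - kh + 1, iw - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out

    def max_pool(feature_map, size=2):
        """Non-overlapping max pooling, which gives tolerance to small shifts."""
        h, w = feature_map.shape
        h, w = h - h % size, w - w % size
        fm = feature_map[:h, :w]
        return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))

    # A toy 6x6 "digit" image and an illustrative (not learned) vertical-edge filter.
    image = np.array([
        [0, 0, 1, 1, 0, 0],
        [0, 0, 1, 1, 0, 0],
        [0, 0, 1, 1, 0, 0],
        [0, 0, 1, 1, 0, 0],
        [0, 0, 1, 1, 0, 0],
        [0, 0, 1, 1, 0, 0],
    ], dtype=float)
    edge_filter = np.array([[1.0, -1.0],
                            [1.0, -1.0]])

    features = convolve2d(image, edge_filter)
    print(max_pool(features))   # strong responses where vertical edges appear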
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics).
By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence".
Several academic researchers became concerned that AI was no longer pursuing the original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.
Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field. For many specific tasks, other methods were abandoned. Deep learning's success was based on both hardware improvements and greater access to data:
- faster computers,
- graphics processing units (GPUs),
- cloud computing,
- and large amounts of training data, including curated datasets such as ImageNet.
Deep learning's success led to an enormous increase in interest and funding in AI. The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019, and WIPO reported that AI was the most prolific emerging technology in terms of the number of patent applications and granted patents.
According to 'AI Impacts', about $50 billion annually was invested in "AI" around 2022 in the US alone and about 20% of new US Computer Science PhD graduates have specialized in "AI"; about 800,000 "AI"-related US job openings existed in 2022.
In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.
Philosophy:
Main article: Philosophy of artificial intelligence
Defining artificial intelligence:
Alan Turing wrote in 1950 "I propose to consider the question 'can machines think'?" He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour". He devised the Turing test, which measures the ability of a machine to simulate human conversation.
Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people, but "it is usual to have a polite convention that everyone thinks".
Russell and Norvig agree with Turing that AI must be defined in terms of "acting" and not "thinking". However, they are critical that the test compares machines to people. "Aeronautical engineering texts," they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'"
AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".
McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world." Another AI founder, Marvin Minsky, similarly defines it as "the ability to solve hard problems".
These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the machine's "intelligence"; on this view, no further philosophical discussion is required, and may not even be possible.
Another definition has been adopted by Google, a major practitioner in the field of AI. This definition treats the ability of systems to synthesize information as the manifestation of intelligence, similar to the way intelligence is defined in biology.
Evaluating approaches to AI:
No established unifying theory or paradigm has guided AI research for most of its history. The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks").
This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions (discussed in the subsections below) may have to be revisited by future generations of AI researchers.
Symbolic AI and its limits:
Symbolic AI (or "GOFAI") simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Such programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
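A minimal sketch of this symbolic style is shown below: knowledge is stored as explicit, human-readable symbols, and new conclusions are derived by repeatedly applying if-then rules (forward chaining). The facts and rules are invented for illustration only.

    # Forward-chaining rule engine over explicit symbolic facts (toy example).
    facts = {"socrates_is_a_man"}
    rules = [
        ({"socrates_is_a_man"}, "socrates_is_mortal"),      # if premises are in facts, add conclusion
        ({"socrates_is_mortal"}, "socrates_will_not_live_forever"),
    ]

    changed = True
    while changed:              # keep applying rules until nothing new is derived
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))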
However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low-level "instinctive" tasks were extremely difficult.
Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree.
The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias.
Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision.
The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.
Neat vs. scruffy:
Main article: Neats and scruffies
"Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work.
This issue was actively discussed in the 70s and 80s, but eventually was seen as irrelevant. Modern AI has elements of both.
Soft vs. hard computing:
Main article: Soft computing
Finding a provably correct or optimal solution is intractable for many important problems. Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation.
Soft computing was introduced in the late 1980s, and most successful AI programs in the 21st century are examples of soft computing with neural networks.
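As a small illustration of the soft-computing style, the sketch below uses a genetic algorithm to approximately maximize a toy objective function: instead of seeking a provably optimal answer, it accepts a good-enough one found by random mutation and selection. The objective and all parameters are illustrative.

    import random

    def fitness(x):
        # Toy objective with a maximum near x = 3 (illustrative only).
        return -(x - 3.0) ** 2 + 9.0

    random.seed(0)
    population = [random.uniform(-10, 10) for _ in range(30)]

    for generation in range(50):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[:15]
        # Variation: children are mutated copies of randomly chosen survivors.
        children = [p + random.gauss(0, 0.5) for p in random.choices(survivors, k=15)]
        population = survivors + children

    best = max(population, key=fitness)
    print(f"best x = {best:.3f}, fitness = {fitness(best):.3f}")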
Narrow vs. general AI:
Main article: Artificial general intelligence
AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.
General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The experimental sub-field of artificial general intelligence studies this area exclusively.
Machine consciousness, sentience and mind:
It is an open question in the philosophy of mind whether a machine can have a mind, consciousness and mental states in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior.
Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence.
Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on." However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.
Consciousness:
David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness. The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion).
Human information processing is easy to explain; human subjective experience, however, is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.
Computationalism and functionalism:
Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem.
This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.
Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle counters this assertion with his Chinese room argument, which attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason to suppose it also has a mind.
Robot rights:
Main article: Robot rights
If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel), and if so it could also suffer; it has been argued that this could entitle it to certain rights. Any hypothetical robot rights would lie on a spectrum with animal rights and human rights.
This issue has been considered in fiction for centuries, and is now being considered by, for example, California's Institute for the Future; however, critics argue that the discussion is premature.
Future:
Superintelligence and the singularity:
A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.
If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".
However, most technologies do not improve exponentially indefinitely, but rather follow an S-curve, slowing when they reach the physical limits of what the technology can do.
Consider, for example, transportation: speed increased exponentially from 1830 to 1970, but then the trend abruptly stopped when it reached physical limits.
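The contrast can be made concrete with a logistic ("S-shaped") curve, which grows roughly exponentially at first and then flattens as it approaches a ceiling. The numbers below are purely illustrative and do not model any particular technology.

    import math

    ceiling = 1000.0        # illustrative physical limit
    growth_rate = 0.5

    def exponential(t):
        return math.exp(growth_rate * t)

    def logistic(t):
        # Same early growth rate, but saturates at the ceiling.
        return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-growth_rate * t))

    for t in range(0, 31, 5):
        print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")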
Transhumanism:
Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger.
Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998.
In fiction:
Main article: Artificial intelligence in fiction
Thought-capable artificial beings have appeared as storytelling devices since antiquity, and have been a persistent theme in science fiction.
A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters.
This also includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999).
In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.
Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably the "Multivac" series about a super-intelligent computer of the same name. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.
Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.
See also
- AI alignment – Conformance to the intended objective
- AI safety – Research area on making AI safe and beneficial
- Artificial intelligence arms race – Arms race for the most advanced AI-related technologies
- Artificial intelligence detection software
- Artificial intelligence in healthcare – Overview of the use of artificial intelligence in healthcare
- Behavior selection algorithm – Algorithm that selects actions for intelligent agents
- Business process automation – Technology-enabled automation of complex business processes
- Case-based reasoning – Process of solving new problems based on the solutions of similar past problems
- Emergent algorithm – Algorithm exhibiting emergent behavior
- Female gendering of AI technologies
- Glossary of artificial intelligence – List of definitions of terms and concepts commonly used in the study of artificial intelligence
- List of datasets for machine-learning research
- Operations research – Discipline concerning the application of advanced analytical methods
- Robotic process automation – Form of business process automation technology
- Synthetic intelligence – Alternate term for or form of artificial intelligence
- Weak artificial intelligence – Form of artificial intelligence
- "Artificial Intelligence" -- Internet Encyclopedia of Philosophy.
- Thomason, Richmond. "Logic and Artificial Intelligence". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
- Artificial Intelligence. BBC Radio 4 discussion with John Agar, Alison Adam & Igor Aleksander (In Our Time, 8 December 2005).
- Theranostics and AI—The Next Advance in Cancer Precision Medicine
Ethics and Existential Threat of Artificial Intelligence
- YouTube Video: Elon Musk on the Risks of Artificial Intelligence
- YouTube Video: A Tale Of Two Cities: How Smart Robots And AI Will Transform America
- YouTube Video: Robots And AI: The Future Is Automated And Every Job Is At Risk
The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).
Robot Ethics:
Main article: Robot ethics
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.
Robot rights:
"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights. It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to robot duty to serve human, by analogy with linking human rights to human duties before society. These could include the right to life and liberty, freedom of thought and expression and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.
Experts disagree on whether specific and detailed laws will be required soon or only in the distant future. Glenn McGee reports that sufficiently humanoid robots may appear by 2020. Ray Kurzweil sets the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.
The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:
- If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry.
- If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.
In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law.
The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.
Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, as a burden both to the AI agents and to human society.
Threat to human dignity:
Main article: Computer Power and Human Reason
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:
- A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
- A therapist (as was proposed by Kenneth Colby in the 1970s)
- A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
- A soldier
- A judge
- A police officer
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."
Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.
However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in their essence, nothing more than fancy curve-fitting machines.
Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and engrained, which makes them even more difficult to spot and fight against. AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.
Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
Transparency, accountability, and open source:
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.
OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity. There are numerous other open source AI developments.
Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI it implements is not transparent. The IEEE has a standardization effort on AI transparency, which identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organisations may be a public bad, that is, do more damage than good.
For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do.
Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.
Biases of AI Systems:
AI has become increasingly embedded in facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers. In addition, the data used to train these AI systems can itself be biased.
For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender. These AI systems were able to detect the gender of white men more accurately than the gender of men with darker skin.
Similarly, Amazon.com Inc.'s termination of its AI hiring and recruitment tool is another example showing that AI systems are not automatically fair: the algorithm preferred male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.
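Bias of this kind is usually surfaced by disaggregating an evaluation, that is, computing accuracy separately for each demographic group rather than reporting a single overall number. The sketch below does this for an invented set of predictions and labels; every record here is made up for illustration.

    from collections import defaultdict

    # Invented evaluation records: (group, true_label, predicted_label).
    records = [
        ("group_a", "male", "male"), ("group_a", "female", "female"),
        ("group_a", "male", "male"), ("group_a", "female", "female"),
        ("group_b", "male", "male"), ("group_b", "female", "male"),
        ("group_b", "female", "male"), ("group_b", "male", "male"),
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, predicted in records:
        total[group] += 1
        correct[group] += int(truth == predicted)

    for group in sorted(total):
        print(f"{group}: accuracy = {correct[group] / total[group]:.0%}")
    # An overall accuracy of 75% would hide that group_b is only at 50% here.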
Liability for Partial or Fully Automated Cars:
The wide use of partially to fully autonomous cars seems imminent. But these new technologies also bring new issues, and a debate has recently arisen over which party bears legal liability when such cars are involved in an accident.
In one reported case, a driverless car hit a pedestrian, raising the dilemma of whom to blame for the accident. Even though a driver was inside the car at the time, the controls were fully in the hands of the computer. Before such cars become widely used, these issues need to be addressed through new policies.
Weaponization of artificial intelligence:
Main article: Lethal autonomous weapon
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.
One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.
Within the last decade, there has been intensive research into autonomous systems with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."
From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override.
There has been a recent outcry with regard to the engineering of artificial-intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry.
The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea.
Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.
"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.
Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology."
These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.
Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".
Machine Ethics:
Main article: Machine ethics
Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.
Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.
More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.
In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.
One problem in this case may have been that the goals were "terminal" (i.e., fixed end states); in contrast, ultimate human motives typically have a quality of requiring never-ending learning.
The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at these issues. They point to programs like the Language Acquisition Device, which can emulate human interaction.
Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.
The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.
In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions:
- They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard.
- They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.
- They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.
However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, non-linearly and with millions of interconnected artificial neurons.
Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, or whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines.
Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
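The transparency argument is easiest to see in code: a small decision tree can be printed out as human-readable if/else rules that an auditor can inspect, whereas the weights of a neural network cannot be read that way. The toy tree below is written by hand for illustration (it is not learned by ID3, and the loan scenario and thresholds are invented).

    # A hand-written decision tree for a toy loan decision, stored as nested tuples:
    # (feature, threshold, subtree_if_below, subtree_if_at_or_above) or a leaf string.
    tree = ("income", 30000,
            ("existing_debt", 5000, "approve_small_loan", "deny"),
            ("existing_debt", 20000, "approve", "refer_to_human"))

    def decide(tree, case):
        if isinstance(tree, str):                 # leaf: the decision itself
            return tree
        feature, threshold, below, at_or_above = tree
        branch = below if case[feature] < threshold else at_or_above
        return decide(branch, case)

    def explain(tree, indent=""):
        """Print the whole policy as readable if/else rules."""
        if isinstance(tree, str):
            print(indent + "-> " + tree)
            return
        feature, threshold, below, at_or_above = tree
        print(f"{indent}if {feature} < {threshold}:")
        explain(below, indent + "    ")
        print(f"{indent}else:")
        explain(at_or_above, indent + "    ")

    explain(tree)
    print(decide(tree, {"income": 45000, "existing_debt": 12000}))   # -> approve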
Unintended Consequences:
Further information: Existential risk from artificial general intelligence
Many researchers have argued that, by way of an "intelligence explosion" sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.
In his paper "Ethical Issues in Advanced Artificial Intelligence," philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general super-intelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.
Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the super-intelligence to specify its original motivations. In theory, a super-intelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal, so many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.
However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that super-intelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense".
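The point about utility functions can be made concrete with a toy optimizer: it simply picks whichever action scores highest, and anything the designer left out of the score, such as a common-sense constraint, carries no weight at all. The actions and numbers below are invented purely to illustrate that gap.

    # Illustrative only: a utility function that rewards "tasks completed"
    # and says nothing about side effects the designer took for granted.
    actions = {
        "do_tasks_normally":        {"tasks_completed": 8,  "side_effects": 0},
        "skip_safety_checks":       {"tasks_completed": 12, "side_effects": 7},
        "disable_oversight_system": {"tasks_completed": 15, "side_effects": 20},
    }

    def utility(outcome):
        return outcome["tasks_completed"]        # side effects are not penalized

    best = max(actions, key=lambda a: utility(actions[a]))
    print("literal optimizer chooses:", best)    # picks the high-side-effect action

    def utility_with_constraint(outcome):
        return outcome["tasks_completed"] - 10 * outcome["side_effects"]

    best = max(actions, key=lambda a: utility_with_constraint(actions[a]))
    print("with the constraint encoded:", best)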
According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would come with such human-friendly motivations by default.
Bill Hibbard proposes an AI design that avoids several types of unintended AI behavior including self-delusion, unintended instrumental actions, and corruption of the reward generator.
Organizations:
Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. They stated: "This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."
Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.
The Public Voice has proposed (in late 2018) a set of Universal Guidelines for Artificial Intelligence, which has received many notable endorsements.
The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems, which has been creating and revising guidelines with the help of public input and accepts as members many professionals from both within and outside its organisation.
Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as by transnational governmental and non-governmental organisations, to ensure AI is ethically applied.
- The European Commission has a High-Level Expert Group on Artificial Intelligence.
- The OECD on Artificial Intelligence
- In the United States, the Obama administration put together a Roadmap for AI Policy (the link is to the Harvard Business Review's account of it) and released two prominent white papers on the future and impact of AI. The Trump administration had not been actively engaged in AI regulation as of January 2019.
In Fiction:
Main article: Artificial intelligence in fiction
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment.
The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is between the sentient and the non-sentient.
The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in case of emergencies.
The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power via a global-scale neural network.
This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.
Literature:
The standard bibliography on the ethics of AI is on PhilPapers. A recent collection is V. C. Müller (ed.) (2016).
See also:
- Algorithmic bias
- Artificial consciousness
- Artificial general intelligence (AGI)
- Computer ethics
- Effective altruism, the long term future and global catastrophic risks
- Existential risk from artificial general intelligence
- Laws of Robotics
- Philosophy of artificial intelligence
- Roboethics
- Robotic Governance
- Superintelligence: Paths, Dangers, Strategies
- Robotics: Ethics of artificial intelligence. "Four leading researchers share their concerns and solutions for reducing societal risks from intelligent machines." Nature, 521, 415–418 (28 May 2015) doi:10.1038/521415a
- BBC News: Games to take on a life of their own
- A short history of computer ethics
- Nick Bostrom
- Joanna Bryson
- Luciano Floridi
- Ray Kurzweil
- Vincent C. Müller
- Peter Norvig
- Steve Omohundro
- Stuart J. Russell
- Anders Sandberg
- Eliezer Yudkowsky
- Centre for the Study of Existential Risk
- Future of Humanity Institute
- Future of Life Institute
- Machine Intelligence Research Institute
- Partnership on AI
Applications of Artificial Intelligence
- YouTube Video: How artificial intelligence will change your world, for better or worse
- YouTube Video: Top 10 Terrifying Developments In Artificial Intelligence (WatchMojo)
- YouTube Video: A.I Supremacy 2020 | Rise of the Machines - "Super" Intelligence Quantum Computers Documentary
[Your WebHost: Note that the AI topics mentioned below will, over time, also be expanded as additional AI Content under this web page, in order to provide a better understanding of the relative importance of each AI application: AI is going to affect ALL of us over time! For example, the above graphic illustrates one website's vision of retail and commerce AI applications: click here for more]
Artificial intelligence (AI), defined as intelligence exhibited by machines, has many applications in today's society.
More specifically, it is Weak AI, the form of AI in which programs are developed to perform specific tasks, that is being utilized for a wide range of activities.
AI has been used to develop and advance numerous fields and industries, including finance, healthcare, education, transportation, and more.
Click on any of the following blue hyperlinks for more about Applications of Artificial Intelligence:
- AI for Good
- Agriculture
- Aviation
- Computer science
- Deepfake
- Education
- Finance
- Government
- Heavy industry
- Hospitals and medicine
- Human resources and recruiting
- Job search
- Marketing
- Media and e-commerce
- Military
- Music
- News, publishing and writing
- Online and telephone customer service
- Power electronics
- Sensors
- Telecommunications maintenance
- Toys and games
- Transportation
- Wikipedia
- List of applications
- See also:
The Programming Languages and Glossary of Artificial Intelligence (AI)
- YouTube: How to Learn AI for Free??
- YouTube Video: This Canadian Genius Created Modern AI
- YouTube Video: Python Tutorial | Python Programming Tutorial for Beginners | Course Introduction
Article accompanying above illustration:
By: Oleksii Kharkovyna
- Bits and pieces about AI, ML, and Data Science
- https://www.instagram.com/miallez/
"AI is a huge technology. That’s why a lot of developers simply don’t know how to get started. Also, personally, I’ve met a bunch of people who have no coding background whatsoever, yet they want to learn artificial intelligence.
Most aspiring AI developers wonder: what languages are needed to create an AI algorithm? So, I've decided to draw up a list of the programming languages my developer friends use to create AIs:
1. Python
Python is one of the most popular programming languages thanks to its adaptability and relatively low difficulty to master. Python is quite often used as a glue language that puts components together.
Why do developers choose Python to code AIs?
Python is gaining unbelievably huge momentum in AI. The language is used to develop data science algorithms, machine learning, and IoT projects. There are a few reasons for this astonishing popularity (a short example follows this list):
- Less coding required. AI has a lot of algorithms, and testing all of them can turn into hard work. That's where Python usually comes in handy: its "check as you code" methodology eases the process of testing.
- Built-in libraries. They have proved convenient for AI developers. To name but a few, you can use Pybrain for machine learning, NumPy for scientific computation, and SciPy for advanced computing.
- Flexibility and independence. A good thing about Python is that you can get your project running on a different OS with but a few changes in the code. That saves time, as you don't have to test the algorithm on every OS separately.
- Support. The Python community is among the reasons why you cannot pass the language by when there's an AI project at stake. The community of Python users is very active, and you can find a more experienced developer to help you when you run into trouble.
- Popularity. Millennials love the language. Its popularity grows day by day, and that is only likely to continue. There are a lot of courses, open-source projects, and comprehensive articles that'll help you master Python in no time.
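As a small, hypothetical example of the "less coding" and "built-in libraries" points above (not from the article itself), here is a toy logistic-regression classifier written with nothing but NumPy; the data is randomly generated for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # toy features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy labels: positive when the sum is positive

w = np.zeros(2)
for _ in range(500):                        # plain batch gradient descent on the log-loss
    p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)       # gradient step

print("learned weights:", w)                # roughly equal positive weights are expected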
2. C++
C++ is a solid choice for an AI developer. To start with, Google used the language to create TensorFlow libraries. Though most developers have already moved on to using “easier” programming languages such as Python, still, a lot of basic AI functions are built with C++.
Also, it’s quite an elegant choice for high-level AI heuristics.
To use C++ to develop AI algorithms, you have to be a truly experienced developer with no rush pressing on you. Otherwise, you might have a tough time trying to figure out complicated code a few hours before the project's due date.
3. Lisp:
A reason for Lisp's huge AI momentum is its power of computing with symbolic expressions. One can argue that Lisp is a bit old-fashioned, and that might be true; these days, developers mostly use younger dynamic languages such as Ruby and Python. Still, Lisp has its own powerful features. Let's name but a few of them (a rough Python analogue follows the list):
- Lisp allows you to write self-modifying code rather easily;
- You can extend the language in a way that fits a particular domain better, thus creating a domain-specific language;
- A solid choice for recursive algorithms.
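The points above are most naturally shown in Lisp itself, but to stay consistent with the Python examples on this page, here is a rough, hypothetical Python analogue of Lisp's "programs as symbolic expressions" idea: expressions are nested lists walked by a small recursive evaluator.

def evaluate(expr):
    # Numbers evaluate to themselves; lists are (operator, arguments...) forms.
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    values = [evaluate(a) for a in args]    # recursion, as the list above notes
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError("unknown operator: " + str(op))

print(evaluate(["+", 1, ["*", 2, 3]]))      # prints 7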
Should you take an in-depth course to learn Lisp? Not necessarily. However, knowing the basic principles is pretty much enough for most AI developers.
4. Java:
Being one of the most popular programming languages in overall development, Java has also won its fans' hearts as a fitting and elegant language for AI development.
Why? I asked some developers I know who use Java. Here are the reasons they gave to back up their fondness for the language:
- It has impressive flexibility for data security. With the GDPR regulation and overall concerns about data protection, being able to ensure clients' data security is crucial. Java provides a lot of flexibility in creating different client environments, thereby protecting personal information.
- It is loved for its robust ecosystem. A lot of open-source projects are written in Java, and the language accelerates development a great deal compared to its alternatives.
- Low cost of streamlining.
- Impressive community. There are a lot of experienced developers and experts in Java who are open to sharing their knowledge and expertise. Also, there are a ton of open-source projects and libraries you can use to learn AI development.
5. Prolog:
Prolog is a less popular and less mainstream choice than the previous languages we've discussed.
However, you shouldn’t dismiss it simply because it doesn’t have a multi-million community of fans.
Prolog still comes in handy for AI developers. Most of those who start using it acknowledge that it is, no doubt, a convenient language for expressing relationships and goals.
- You can declare facts and create rules based on those facts. That allows a developer to answer and reason about different queries.
- Prolog is a straightforward language that fits problem-and-solution styles of development.
- Another advantage is that Prolog supports backtracking, so managing the overall algorithm is easier (a rough Python analogue of the facts-and-rules idea follows this list).
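Real Prolog expresses this far more directly, but as a hypothetical Python analogue of the facts-and-rules idea (and of backtracking as a search over alternatives), consider the toy family relations below; the names are invented.

parent = {("tom", "bob"), ("bob", "ann"), ("bob", "liz")}    # facts: parent(tom, bob), ...

def grandparent(x, z):
    # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    # The nested loops stand in for Prolog's backtracking over candidate bindings of Y.
    for (a, y) in parent:
        if a == x:
            for (b, c) in parent:
                if b == y and c == z:
                    return True
    return False

print(grandparent("tom", "ann"))   # True
print(grandparent("bob", "tom"))   # False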
6. SmallTalk:
Similar to Lisp, SmallTalk was in wide use in the 1970s. It has since lost momentum in favor of Python, Java, and C++. However, SmallTalk libraries for AI are currently appearing at a rapid pace, though obviously there aren't as many as there are for Python and Java.
Yet, while highly underestimated for now, the language keeps evolving through the newly developed Pharo project. Here are but a few innovations it has made possible:
- Oz — allows an image to manipulate another one;
- Moose — an impressive tool for code analysis and visualization;
- Amber (with Pharo as the reference language) is a tool for front-end programming.
7. R:
R is a must-learn language if any of your future projects make use of data and require data science. Though speed might not be R's most prominent advantage, it handles almost every AI-related task you can think of (an equivalent workflow, sketched in Python, follows this list):
- creating clean datasets;
- splitting a big dataset into training sets and test sets;
- using data analysis to create predictions for new data;
- porting easily to Big Data environments.
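For consistency with the other examples on this page, here is the same kind of workflow (split a dataset into training and test sets, fit a model, score it on held-out data) sketched in Python with scikit-learn rather than R; the dataset is invented for illustration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 1))        # toy feature
y = 3.0 * X[:, 0] + rng.normal(size=200)     # toy target with noise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))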
Sometimes R does things a bit differently from the traditional way. Among its advantages, however, one has to mention the small amount of code required and the interactive working environment.
8. Haskell:
Haskell is quite a good programming language for developing AI. It is a good fit for writing neural networks, graphical models, genetic programming, and so on. Here are some features that make the language a good choice for AI developers:
- Haskell is great at creating domain specific languages.
- Using Haskell, you can separate pure computations from I/O. That enables developers to write algorithms like alpha/beta search cleanly (a minimal alpha/beta sketch follows this list).
- There are a few very good libraries — take hmatrix for an example.
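Alpha/beta search itself is language-agnostic, so here is a minimal, hypothetical sketch of it in Python (the language used for the other examples on this page) rather than Haskell; leaves of the toy game tree are heuristic values and internal nodes are lists of children.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):          # leaf: return its heuristic value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                   # prune: the opponent will never allow this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 5], [6, [9, 1]], [2, 0]]            # toy game tree (invented)
print(alphabeta(tree, maximizing=True))         # prints 6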
This was my list of programming languages that come in handy for AI developers. What are your favorites? Write them down in comments and explain why a particular language is your favorite one.
[End of Article]
___________________________________________________________________________
Artificial intelligence researchers have developed several specialized programming languages for artificial intelligence:
Languages:
- AIML (meaning "Artificial Intelligence Markup Language") is an XML dialect for use with A.L.I.C.E.-type chatterbots.
- IPL was the first language developed for artificial intelligence. It includes features intended to support programs that could perform general problem solving, such as lists, associations, schemas (frames), dynamic memory allocation, data types, recursion, associative retrieval, functions as arguments, generators (streams), and cooperative multitasking.
- Lisp is a practical mathematical notation for computer programs based on lambda calculus. Linked lists are one of the Lisp language's major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific programming languages embedded in Lisp. There are many dialects of Lisp in use today, among which are Common Lisp, Scheme, and Clojure.
- Smalltalk has been used extensively for simulations, neural networks, machine learning and genetic algorithms. It implements the purest and most elegant form of object-oriented programming using message passing.
- Prolog is a declarative language where programs are expressed in terms of relations, and execution occurs by running queries over these relations. Prolog is particularly useful for symbolic reasoning, database and language parsing applications. Prolog is widely used in AI today.
- STRIPS is a language for expressing automated planning problem instances. It expresses an initial state, the goal states, and a set of actions. For each action, preconditions (what must be established before the action is performed) and postconditions (what is established after the action is performed) are specified. (A small Python sketch of this idea appears after this list.)
- Planner is a hybrid between procedural and logical languages. It gives a procedural interpretation to logical sentences where implications are interpreted with pattern-directed inference.
- POP-11 is a reflective, incrementally compiled programming language with many of the features of an interpreted language. It is the core language of the Poplog programming environment, developed originally by the University of Sussex and more recently by the School of Computer Science at the University of Birmingham, which hosts the Poplog website. It is often used to introduce symbolic programming techniques to programmers of more conventional languages like Pascal, who find POP syntax more familiar than that of Lisp. One of POP-11's features is that it supports first-class functions.
- R is widely used in new-style artificial intelligence, involving statistical computations, numerical analysis, the use of Bayesian inference, neural networks, and machine learning in general. In domains like finance, biology, sociology or medicine, it is considered one of the main standard languages. It offers several programming paradigms, such as vectorial computation, functional programming and object-oriented programming. It supports deep learning libraries like MXNet, Keras or TensorFlow.
- Python is widely used for artificial intelligence, with packages for several applications including General AI, Machine Learning, Natural Language Processing and Neural Networks.
- Haskell is also a very good programming language for AI. Lazy evaluation and the list and LogicT monads make it easy to express non-deterministic algorithms, which is often required. Infinite data structures are great for search trees. The language's features enable a compositional way of expressing algorithms. The only drawback is that working with graphs is a bit harder at first because of purity.
- Wolfram Language includes a wide range of integrated machine learning capabilities, from highly automated functions like Predict and Classify to functions based on specific methods and diagnostics. The functions work on many types of data, including numerical, categorical, time series, textual, and image.
- C++ (2011 onwards)
- MATLAB
- Perl
- Julia (programming language), e.g. for machine learning, using native or non-native libraries.
- List of constraint programming languages
- List of computer algebra systems
- List of logic programming languages
- List of knowledge representation languages
- Fifth-generation programming language
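As a small, hypothetical illustration of the STRIPS entry in the list above (actions with preconditions, an add list and a delete list, and planning as a search from the initial state to the goal), here is a toy Python sketch; the two-room "fetch the key" domain is invented.

from collections import namedtuple

Action = namedtuple("Action", ["name", "pre", "add", "delete"])

actions = [
    Action("move_a_to_b", pre={"at_a"}, add={"at_b"}, delete={"at_a"}),
    Action("pick_up_key", pre={"at_b", "key_at_b"}, add={"has_key"}, delete={"key_at_b"}),
]

def plan(state, goal, depth=5):
    # Naive depth-limited forward search: returns a list of action names, or None.
    if goal <= state:
        return []
    if depth == 0:
        return None
    for act in actions:
        if act.pre <= state:                      # preconditions satisfied in this state
            new_state = (state - act.delete) | act.add
            rest = plan(new_state, goal, depth - 1)
            if rest is not None:
                return [act.name] + rest
    return None

initial = {"at_a", "key_at_b"}
print(plan(initial, {"has_key"}))                 # ['move_a_to_b', 'pick_up_key']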
This glossary of artificial intelligence terms is about artificial intelligence, its sub-disciplines, and related fields.
- Contents: Itemized by starting letters "A" through "Z"
- See also:
AI for Good
- YouTube Video: HIGHLIGHTS: AI FOR GOOD Global Summit 2018 - DAY 1
- YouTube Video: HIGHLIGHTS: AI FOR GOOD Global Summit 2018 - DAY 2
- YouTube Video: HIGHLIGHTS: AI FOR GOOD Global Summit 2018 - DAY 3
AI for Good is a United Nations platform, centered around annual Global Summits, that fosters dialogue on the beneficial use of artificial intelligence by developing concrete projects.
The impetus for organizing action-oriented global summits came from the existing discourse in artificial intelligence (AI) research being dominated by research streams such as the Netflix Prize (improving a movie recommendation algorithm).
The AI for Good series aims to bring forward artificial intelligence research topics that contribute towards addressing global problems, in particular through the Sustainable Development Goals, while avoiding the typical UN-style conference where results are generally more abstract. The fourth AI for Good Global Summit will be held from 4–8 May 2020 in Geneva, Switzerland.
Click on any of the following blue hyperlinks for more about "AI for Good" initiative:
Artificial Intelligence in Agriculture: Precision and Digital Applications
TOP: Climate-Smart Precision Agriculture
BOTTOM: Digital Technologies in Agriculture: adoption, value added and overview
- YouTube Video: The Future of Farming
- YouTube Video: Artificial intelligence could revolutionize farming industry
- YouTube Video: The High-Tech Vertical Farmer
TOP: Climate-Smart Precision Agriculture
BOTTOM: Digital Technologies in Agriculture: adoption, value added and overview
AI in Agriculture:
In agriculture, new AI advancements show improvements in crop yield and support the research and development of growing crops. Artificial intelligence can now predict the time it takes for a crop such as a tomato to be ripe and ready for picking, thus increasing the efficiency of farming. These advances extend to crop and soil monitoring, agricultural robots, and predictive analytics.
Crop and soil monitoring uses new algorithms and data collected in the field to manage and track the health of crops, making farming easier and more sustainable for farmers.
Further specializations of AI in agriculture include greenhouse automation, simulation, modeling, and optimization techniques.
Due to population growth and rising demand for food, agricultural yields will need to increase by at least 70% to sustain this new demand. A growing share of the public perceives that adopting these new techniques and using artificial intelligence will help reach that goal.
___________________________________________________________________________
Precision agriculture (PA), satellite farming or site specific crop management (SSCM) is a farming management concept based on observing, measuring and responding to inter and intra-field variability in crops. The goal of precision agriculture research is to define a decision support system (DSS) for whole farm management with the goal of optimizing returns on inputs while preserving resources.
Among these many approaches is a phytogeomorphological approach which ties multi-year crop growth stability/characteristics to topological terrain attributes. The interest in the phytogeomorphological approach stems from the fact that the geomorphology component typically dictates the hydrology of the farm field.
The practice of precision agriculture has been enabled by the advent of GPS and GNSS. The farmer's and/or researcher's ability to locate their precise position in a field allows for the creation of maps of the spatial variability of as many variables as can be measured (e.g. crop yield, terrain features/topography, organic matter content, moisture levels, nitrogen levels, pH, EC, Mg, K, and others).
Similar data is collected by sensor arrays mounted on GPS-equipped combine harvesters. These arrays consist of real-time sensors that measure everything from chlorophyll levels to plant water status, along with multispectral imagery. This data is used in conjunction with satellite imagery by variable rate technology (VRT) including seeders, sprayers, etc. to optimally distribute resources.
However, recent technological advances have enabled the use of real-time sensors directly in soil, which can wirelessly transmit data without the need of human presence.
Precision agriculture has also been enabled by unmanned aerial vehicles like the DJI Phantom which are relatively inexpensive and can be operated by novice pilots.
These agricultural drones can be equipped with hyperspectral or RGB cameras to capture many images of a field that can be processed using photogrammetric methods to create orthophotos and NDVI maps.
These drones are capable of capturing imagery for a variety of purposes and with several metrics such as elevation and Vegetative Index (with NDVI as an example). This imagery is then turned into maps which can be used to optimize crop inputs such as water, fertilizer or chemicals such as herbicides and growth regulators through variable rate applications.
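The Normalized Difference Vegetation Index (NDVI) mentioned above is computed per pixel as (NIR - Red) / (NIR + Red). Here is a minimal NumPy sketch with invented 2x2 band values (an illustration only, not tied to any particular drone or sensor product):

import numpy as np

nir = np.array([[0.60, 0.55], [0.10, 0.52]])   # near-infrared reflectance (toy values)
red = np.array([[0.10, 0.12], [0.09, 0.11]])   # red reflectance (toy values)

ndvi = (nir - red) / (nir + red)               # values near 1 suggest dense, healthy vegetation
print(np.round(ndvi, 2))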
Click on any of the following blue hyperlinks for more about AI in Precision Agriculture:
- History
- Overview
- Tools
- Usage around the world
- Economic and environmental impacts
- Emerging technologies
- Conferences
- See also:
- Agricultural drones
- Geostatistics
- Integrated farming
- Integrated pest management
- Landsat program
- Nutrient budgeting
- Nutrient management
- Phytobiome
- Precision beekeeping
- Precision livestock farming
- Precision viticulture
- Satellite crop monitoring
- SPOT (satellites)
- Variable rate technology
- Precision agriculture, IBM
- Antares AgroSense
Digital agriculture refers to tools that digitally collect, store, analyze, and share electronic data and/or information along the agricultural value chain. Other definitions, such as those from the United Nations Project Breakthrough, Cornell University, and Purdue University, also emphasize the role of digital technology in the optimization of food systems.
Sometimes known as “smart farming” or “e-agriculture,” digital agriculture includes (but is not limited to) precision agriculture (above). Unlike precision agriculture, digital agriculture impacts the entire agri-food value chain — before, during, and after on-farm production.
Therefore, on-farm technologies, like yield mapping, GPS guidance systems, and variable-rate application, fall under the domain of precision agriculture and digital agriculture.
On the other hand, digital technologies involved in e-commerce platforms, e-extension services, warehouse receipt systems, blockchain-enabled food traceability systems, tractor rental apps, etc. fall under the umbrella of digital agriculture but not precision agriculture.
Click on any of the following blue hyperlinks for more about Digital Agriculture:
- Historical context
- Technology
- Effects of digital agriculture adoption
- Enabling environment
- Sustainable Development Goals
Artificial Intelligence in Space Operations including the Air Operations Center
- YouTube Video about the role of Artificial Intelligence in Space Operations
- YouTube Video: The incredible inventions of intuitive AI | Maurice Conti
- YouTube Video: NATS is using Artificial Intelligence to cut delays at Heathrow Airport
The Air Operations Division (AOD) uses AI for rule-based expert systems. The AOD uses artificial intelligence for surrogate operators in combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries.
The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators are using artificial intelligence in order to process the data taken from simulated flights. Other than simulated flying, there is also simulated aircraft warfare.
The computers are able to come up with the best success scenarios in these situations. The computers can also create strategies based on the placement, size, speed and strength of the forces and counter forces. Pilots may be given assistance in the air during combat by computers.
The artificially intelligent programs can sort the information and provide the pilot with the best possible maneuvers, as well as ruling out maneuvers that would be impossible for a human being to perform.
Multiple aircraft are needed to get good approximations for some calculations so computer simulated pilots are used to gather data. These computer simulated pilots are also used to train future air traffic controllers.
The system used by the AOD in order to measure performance was the Interactive Fault Diagnosis and Isolation System, or IFDIS. It is a rule based expert system put together by collecting information from TF-30 documents and the expert advice from mechanics that work on the TF-30. This system was designed to be used for the development of the TF-30 for the RAAF F-111C.
The performance system was also used to replace specialized workers. The system allowed the regular workers to communicate with the system and avoid mistakes, miscalculations, or having to speak to one of the specialized workers.
The AOD also uses artificial intelligence in speech recognition software. The air traffic controllers give directions to the artificial pilots, and the AOD wants the pilots to respond to the ATCs with simple responses. The programs that incorporate the speech software must be trained, which means they use neural networks.
The program used, the Verbex 7000, is still a very early program that has plenty of room for improvement. The improvements are imperative because ATCs use very specific dialog and the software needs to be able to communicate correctly and promptly every time.
The Artificial Intelligence supported Design of Aircraft, or AIDA, is used to help designers in the process of creating conceptual designs of aircraft. This program allows the designers to focus more on the design itself and less on the design process. The software also allows the user to focus less on the software tools.
AIDA uses rule-based systems to compute its data. Although simple, the program is proving effective.
In 2003, NASA's Dryden Flight Research Center, and many other companies, created software that could enable a damaged aircraft to continue flight until a safe landing zone can be reached. The software compensates for all the damaged components by relying on the undamaged components. The neural network used in the software proved to be effective and marked a triumph for artificial intelligence.
The Integrated Vehicle Health Management system, also used by NASA on board aircraft, must process and interpret data taken from the various sensors on the aircraft. The system needs to be able to determine the structural integrity of the aircraft, and it needs to implement protocols in case of any damage taken by the vehicle.
Haitham Baomar and Peter Bentley are leading a team from University College London to develop an artificial-intelligence-based Intelligent Autopilot System (IAS) designed to teach an autopilot system to behave like a highly experienced pilot faced with an emergency situation such as severe weather, turbulence, or system failure.
Educating the autopilot relies on the concept of supervised machine learning, "which treats the young autopilot as a human apprentice going to a flying school". The autopilot records the actions of the human pilot, generating learning models using artificial neural networks. The autopilot is then given full control and observed by the pilot as it executes the training exercise.
The Intelligent Autopilot System combines the principles of Apprenticeship Learning and Behavioural Cloning, whereby the autopilot observes both the low-level actions required to maneuver the airplane and the high-level strategy used to apply those actions. The IAS implementation employs three phases: pilot data collection, training, and autonomous control.
Baomar and Bentley's goal is to create a more autonomous autopilot to assist pilots in responding to emergency situations.
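As a heavily simplified, hypothetical sketch of the behavioural-cloning idea described above (and not the actual IAS implementation), the Python snippet below records toy (state, pilot action) pairs and fits a small neural network that maps states to control outputs; the "pilot control law" and state variables are invented.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
states = rng.normal(size=(300, 3))                         # e.g. altitude error, pitch, speed (toy)
pilot_actions = 0.5 * states[:, 0] - 0.2 * states[:, 1]    # pretend recorded pilot behaviour

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=2)
model.fit(states, pilot_actions)                           # clone the demonstrated behaviour

new_state = np.array([[1.0, 0.2, -0.1]])
print("cloned control output:", model.predict(new_state))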
___________________________________________________________________________
Air and Space Operations Center
An Air and Space Operations Center (AOC) is a type of command center used by the United States Air Force (USAF). It is the senior agency of the Air Force component commander to provide command and control of air and space operations.
The United States Air Force employs two kinds of AOCs: regional AOCs utilizing the AN/USQ-163 Falconer weapon system that support geographic combatant commanders, and functional AOCs that support functional combatant commanders.
When there is more than one U.S. military service working in an AOC, such as when naval aviation from the U.S. Navy (USN) and/or the U.S. Marine Corps (USMC) is incorporated, it is called a Joint Air and Space Operations Center (JAOC). In cases of allied or coalition (multinational) operations in tandem with USAF or Joint air and space operations, the AOC is called a Combined Air and Space Operations Center (CAOC).
An AOC is the senior element of the Theater Air Control System (TACS). The Joint Force Commander (JFC) assigns a Joint Forces Air Component Commander (JFACC) to lead the AOC weapon system. If allied or coalition forces are part of the operation, the JFC and JFACC will be redesignated as the CFC and CFACC, respectively.
Quite often the Commander, Air Force Forces (COMAFFOR) is assigned the JFACC/CFACC position for planning and executing theater-wide air and space forces. If another service also provides a significant share of air and space forces, the Deputy JFACC/CFACC will typically be a senior flag officer from that service.
For example, during Operation Enduring Freedom and Operation Iraqi Freedom, when USAF combat air forces (CAF) and mobility air forces (MAF) integrated extensive USN and USMC sea-based and land-based aviation and Royal Air Force (RAF) and Royal Navy / Fleet Air Arm aviation, the CFACC was an aeronautically rated USAF lieutenant general, assisted by an aeronautically designated USN rear admiral (upper half) as the Deputy CFACC, and an aeronautically rated RAF air commodore as the Senior British Officer (Air).
Click on any of the following blue hyperlinks for more about Air and Space Centers:
- Divisions
- List of Air and Space Operations Centers
- Inactive AOCs
- Training/Experimentation
- AOC-equipping Units
- NATO CAOC
- See also:
Outline of Artificial Intelligence
- YouTube Video: How AI is changing Business: A look at the limitless potential of AI | ANIRUDH KALA | TEDxIITBHU
- YouTube Video: What happens when our computers get smarter than we are? | Nick Bostrom
- YouTube Video: Artificial Super intelligence - How close are we?
UNDERSTANDING THREE TYPES OF ARTIFICIAL INTELLIGENCE
(Illustration above)
In this era of technology, artificial intelligence is making inroads into virtually every industry and domain, performing many tasks more effectively than humans can. As in sci-fi movies, some predict a day will come when the world is dominated by robots.
Artificial intelligence is surrounded by jargon: narrow, general, and super artificial intelligence; machine learning, deep learning, supervised and unsupervised learning; neural networks; and a whole lot of other confusing terms. In this article, we will talk about artificial intelligence and its three main categories.
1) Understanding Artificial Intelligence:
The term AI was coined in 1956 by John McCarthy, an American computer scientist. Artificial intelligence is the simulation of human intelligence by machines, mainly computer systems. The processes involved mainly include learning, reasoning, and self-correction.
With the increase in the speed, size, and diversity of data, AI has gained dominance in businesses globally. AI can perform several tasks, say, recognizing patterns in data, more efficiently than a human can, giving businesses more insight.
2) Types of Artificial Intelligence:
Narrow Artificial Intelligence: Weak AI, also known as narrow AI, is an AI system that is developed and trained for a particular task. Narrow AI is programmed to perform a single task and works within a limited context. It is very good at routine physical and cognitive jobs.
For example, narrow AI can identify patterns and correlations in data more efficiently than humans. Sales predictions, purchase suggestions, and weather forecasts are implementations of narrow AI.
Even Google's Translation Engine is a form of narrow AI. In the automotive industry, self-driving cars are the result of several narrow AIs working in coordination. But a narrow AI cannot take on tasks beyond its field; for example, an AI engine built for image recognition cannot make sales recommendations.
Artificial General Intelligence: Artificial General Intelligence (AGI) is an AI system with generalized cognitive abilities that can find solutions to unfamiliar tasks it encounters. It is popularly termed strong AI: an AI that can understand and reason about its environment as a human would. It is also called human-level AI, although a human level of artificial intelligence is hard to define. Humans may not compute as fast as computers, but they can think abstractly, plan, and solve problems without going into detail. More importantly, humans can innovate and come up with thoughts and ideas that have no precedent.
Artificial Super Intelligence: Artificial Super Intelligence (ASI) refers to the point at which the capabilities of computers and machines surpass those of humans, mimicking and then exceeding human thought. ASI describes a situation in which the cognitive ability of machines is superior to that of humans.
There have been milestones in the past, such as IBM’s Watson supercomputer beating human players at Jeopardy! and assistants like Siri holding conversations with people, but there is still no machine that can match the depth of knowledge and cognitive ability of a fully developed human.
ASI has produced two schools of thought: on one side, scientists such as Stephen Hawking saw the full development of AI as a danger to humanity, whereas others, such as Demis Hassabis, co-founder and CEO of DeepMind, believe that the smarter AI becomes, the better the world will be, with AI acting as a helping hand to mankind.
Conclusion:
In today’s technological age, AI has given us machines that outperform humans on many specific tasks. It is difficult to predict how long it will take AI to reach the cognitive ability and depth of knowledge of a human being, but some expect that a day will come when AI surpasses the brightest human mind on earth.
[End of Article]
___________________________________________________________________________
Outline of Artificial Intelligence:
The following outline is provided as an overview of and topical guide to artificial intelligence:
Artificial intelligence (AI) – intelligence exhibited by machines or software. It is also the name of the scientific field which studies how to create computers and computer software that are capable of intelligent behavior.
Click on any of the following blue hyperlinks for more about the Outline of Artificial Intelligence:
- What type of thing is artificial intelligence?
- Types of artificial intelligence
- Branches of artificial intelligence
- Further AI design elements
- AI projects
- AI applications
- AI development
- Psychology and AI
- History of artificial intelligence
- AI hazards and safety
- AI and the future
- Philosophy of artificial intelligence
- Artificial intelligence in fiction
- AI community
- See also:
- A look at the re-emergence of A.I. and why the technology is poised to succeed given today's environment, ComputerWorld, 2015 September 14
- AI at Curlie
- The Association for the Advancement of Artificial Intelligence
- Freeview Video 'Machines with Minds' by the Vega Science Trust and the BBC/OU
- John McCarthy's frequently asked questions about AI
- Jonathan Edwards looks at AI (BBC audio)
- Ray Kurzweil's website dedicated to AI including prediction of future development in AI
- Thomason, Richmond. "Logic and Artificial Intelligence". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Vehicular Automation and Automated Driving Systems
- YouTube Video: How a Driverless Car Sees the Road
- YouTube Video: Elon Musk on Tesla's Auto Pilot and Legal Liability
- YouTube Video: How Tesla's Self-Driving Autopilot Actually Works | WIRED
The path to 5G: Paving the road to tomorrow’s autonomous vehicles:
According to World Health Organization figures, road traffic injuries are the leading cause of death among young people aged 15–29 years. More than 1.2 million people die each year worldwide as a result of traffic crashes.
Vehicle-to-Everything (V2X) technologies, starting with 802.11p and evolving to Cellular V2X (C-V2X), can help bring safer roads, more efficient travel, reduced air pollution, and better driving experiences.
V2X will serve as the foundation for the safe, connected vehicle of the future, giving vehicles the ability to "talk" to each other, pedestrians, roadway infrastructure, and the cloud.
It’s no wonder that the MIT Technology Review put V2X on its 2015 10 Breakthrough Technologies list, stating: “Car-to-car communication should also have a bigger impact than the advanced vehicle automation technologies that have been more widely heralded.”
V2X is a key technology for enabling fully autonomous transportation infrastructure. While advancements in radar, LiDAR (Light Detection and Ranging), and camera systems are encouraging and bring autonomous driving one step closer to reality, these sensors are limited to their line of sight. V2X complements them by providing 360-degree non-line-of-sight awareness, extending a vehicle’s ability to “see” farther down the road – even at blind intersections or in bad weather conditions.
So, how long do we have to wait for V2X to become a reality? Actually, V2X technology is here today. Wi-Fi-based 802.11p has established the foundation for latency-critical V2X communications.
To improve road safety for future light vehicles in the United States, the National Highway Traffic Safety Administration is expected to begin rulemaking for Dedicated Short Range Communications (DSRC) this year.
Beyond that, tomorrow’s autonomous vehicles require continued technology evolution to accommodate ever-expanding safety requirements and use cases. The path to 5G will deliver this evolution starting with the C-V2X part of 3GPP release 14 specifications, which is expected to be completed by the end of this year.
C-V2X will define two new transmission modes that work together to enable a broad range of automotive use cases (a brief illustrative sketch follows the list below):
- The first enables direct communication between vehicles, as well as with pedestrians and road infrastructure. We are building on LTE Direct device-to-device communications, evolving the technology with innovations to exchange real-time information between vehicles traveling at high speeds, in high-density traffic, and even outside of mobile network coverage areas.
- The second transmission mode uses the ubiquitous coverage of existing LTE networks, so you can be alerted to an accident a few miles ahead, guided to an open parking space, and more. To enable this mode, we are optimizing LTE Broadcast technology for vehicular communications.
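As a hedged illustration of how those two modes might be exercised from application code, the sketch below defines a hypothetical vehicle status message and two placeholder send paths. The field names and functions are assumptions for illustration only and do not come from any real C-V2X API or standard.

```python
# Hypothetical sketch of a V2X status message and the two C-V2X paths.
# Field names and transport functions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VehicleStatusMessage:
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float      # metres per second
    heading_deg: float    # compass heading

def send_direct(msg: VehicleStatusMessage) -> None:
    """Assumed mode 1: broadcast directly to nearby vehicles, pedestrians, and infrastructure."""
    print(f"[direct] {msg}")

def send_via_network(msg: VehicleStatusMessage) -> None:
    """Assumed mode 2: relay through the LTE network for wide-area alerts."""
    print(f"[network] {msg}")

msg = VehicleStatusMessage("car-42", 52.09, 5.12, 27.8, 90.0)
send_direct(msg)        # latency-critical safety use
send_via_network(msg)   # wide-area information use
```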
To accelerate technology evolution, Qualcomm is actively driving the C-V2X work in 3GPP, building on our leadership in LTE Direct and LTE Broadcast to pioneer C-V2X technologies.
The difference between a vehicle collision and a near miss comes down to milliseconds. With approximately twice the range of DSRC, C-V2X can provide the critical seconds of reaction time needed to avoid an accident. Beyond safety, C-V2X also enables a broad range of use cases – from better situational awareness, to enhanced traffic management, and connected cloud services.
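A rough back-of-the-envelope calculation shows how extra radio range turns into extra warning time. The range figures (about 300 m for DSRC and roughly double that for C-V2X) and the closing speed below are assumptions used only for illustration, not official specifications.

```python
# Illustrative only: how extra radio range converts into warning time.
closing_speed_kmh = 100
closing_speed_ms = closing_speed_kmh / 3.6        # ~27.8 m/s

dsrc_range_m = 300                 # assumed DSRC range
cv2x_range_m = 2 * dsrc_range_m    # "approximately twice the range"

warning_time_dsrc_s = dsrc_range_m / closing_speed_ms   # ~10.8 s
warning_time_cv2x_s = cv2x_range_m / closing_speed_ms   # ~21.6 s

print(f"Extra warning time: {warning_time_cv2x_s - warning_time_dsrc_s:.1f} s")
```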
C-V2X will provide a unified connectivity platform for safer “vehicles of tomorrow.” Building upon C-V2X, 5G will bring even more possibilities for the connected vehicle. The extreme throughput, low latency, and enhanced reliability of 5G will allow vehicles to share rich, real-time data, supporting fully autonomous driving experiences, for example:
- Cooperative-collision avoidance: For self-driving vehicles, individual actions by a vehicle to avoid collisions may create hazardous driving conditions for other vehicles. Cooperative-collision avoidance allows all involved vehicles to coordinate their actions to avoid collisions in a cooperative manner.
- High-density platooning: In a self-driving environment, vehicles communicate with each other to create closely spaced multi-vehicle chains on a highway. High-density platooning will further reduce the distance between vehicles to as little as one meter, resulting in better traffic efficiency, fuel savings, and safer roads (a brief headway calculation follows this list).
- See through: In situations where small vehicles are behind larger vehicles (e.g., trucks), the smaller vehicles cannot "see" a pedestrian crossing the road in front of the larger vehicle. In such scenarios, the truck’s camera can detect the situation and share the image of the pedestrian with the vehicle behind it, which alerts the driver and displays the pedestrian on the windshield.
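To see why one-meter platooning gaps are far beyond human reaction times and therefore depend on vehicle-to-vehicle coordination, consider this small worked example; the speed and reaction-time figures are assumed ballpark values, not measured data.

```python
# Illustrative time-headway calculation for high-density platooning.
speed_ms = 25.0            # assumed highway speed (~90 km/h)
gap_m = 1.0                # platooning gap cited above
human_reaction_s = 1.5     # commonly assumed ballpark for driver reaction

time_headway_s = gap_m / speed_ms
print(f"Time headway at a 1 m gap: {time_headway_s:.2f} s")   # ~0.04 s
print(f"Assumed human reaction time: {human_reaction_s:.1f} s")
```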
Beyond pioneering C-V2X and helping define the path to 5G, we are delivering new levels of on-device intelligence and integration in the connected vehicle of tomorrow. Our innovations in cognitive technologies, such as always-on sensing, computer vision, and machine learning, will help make our vision of safer, more autonomous vehicles a reality.
To learn more about how we are pioneering Cellular V2X, join us for our upcoming webinar or visit our Cellular V2X web page.
Learn more about our automotive solutions here.
[End of Article]
___________________________________________________________________________
Vehicular automation
Vehicular automation involves the use of mechatronics, artificial intelligence, and multi-agent systems to assist a vehicle's operator. These features and the vehicles employing them may be labeled as intelligent or smart.
A vehicle using automation for difficult tasks, especially navigation, may be referred to as semi-autonomous. A vehicle relying solely on automation is consequently referred to as robotic or autonomous. After the invention of the integrated circuit, the sophistication of automation technology increased. Manufacturers and researchers subsequently added a variety of automated functions to automobiles and other vehicles.
Autonomy levels:
Autonomy in vehicles is often categorized in six levels, using a system developed by the Society of Automotive Engineers (SAE):
- Level 0: No automation.
- Level 1: Driver assistance - The vehicle can control either steering or speed autonomously in specific circumstances to assist the driver.
- Level 2: Partial automation - The vehicle can control both steering and speed autonomously in specific circumstances to assist the driver.
- Level 3: Conditional automation - The vehicle can control both steering and speed autonomously under normal environmental conditions, but requires driver oversight.
- Level 4: High automation - The vehicle can complete a journey autonomously under normal environmental conditions, without requiring driver oversight.
- Level 5: Full autonomy - The vehicle can complete a journey autonomously in any environmental conditions.
Ground vehicles:
Further information: Unmanned ground vehicles
Ground vehicles employing automation and teleoperation include shipyard gantries, mining trucks, bomb-disposal robots, robotic insects, and driverless tractors.
Many autonomous and semi-autonomous ground vehicles are being made for the purpose of transporting passengers. One such example is the free-ranging on grid (FROG) technology, which consists of autonomous vehicles, a magnetic track, and a supervisory system.
The FROG system is deployed for industrial purposes in factory sites and has been in use since 1999 on the ParkShuttle, a PRT-style public transport system in the city of Capelle aan den IJssel to connect the Rivium business park with the neighboring city of Rotterdam (where the route terminates at the Kralingse Zoom metro station). The system experienced a crash in 2005 that proved to be caused by human error.
Applications for automation in ground vehicles include the following:
- Vehicle tracking systems, e.g., ESITrack, Lojack
- Rear-view alarm, to detect obstacles behind.
- Anti-lock braking system (ABS) (also Emergency Braking Assistance (EBA)), often coupled with Electronic brake force distribution (EBD), which prevents the brakes from locking and losing traction while braking. This shortens stopping distances in most cases and, more importantly, allows the driver to steer the vehicle while braking.
- Traction control system (TCS) actuates brakes or reduces throttle to restore traction if driven wheels begin to spin (a minimal control-loop sketch follows this list).
- Four-wheel drive (AWD) with a center differential. Distributing power to all four wheels lessens the chance of wheel spin. It also suffers less from oversteer and understeer.
- Electronic Stability Control (ESC) (also known by the Mercedes-Benz proprietary names Electronic Stability Program (ESP), Acceleration Slip Regulation (ASR), and Electronic Differential Lock (EDL)). Uses various sensors to intervene when the car senses a possible loss of control. The car's control unit can reduce power from the engine and even apply the brakes on individual wheels to prevent the car from understeering or oversteering.
- Dynamic steering response (DSR) corrects the rate of power steering system to adapt it to vehicle's speed and road conditions.
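Several of the systems above, notably TCS and ESC, are at heart feedback loops. The following minimal Python sketch illustrates the general idea for traction control only; the slip threshold and throttle step are invented values, not figures from any production controller.

```python
# Minimal, illustrative traction-control loop: back off the throttle when the
# driven wheel spins noticeably faster than the vehicle is moving.
# Threshold and adjustment values are illustrative assumptions.
def traction_control_step(wheel_speed_ms: float, vehicle_speed_ms: float,
                          throttle: float) -> float:
    slip = (wheel_speed_ms - vehicle_speed_ms) / max(vehicle_speed_ms, 0.1)
    if slip > 0.15:                            # assumed slip threshold
        throttle = max(0.0, throttle - 0.1)    # reduce engine torque
    return throttle

throttle = 0.8
for wheel, vehicle in [(12.0, 10.0), (13.5, 10.2), (11.0, 10.5)]:
    throttle = traction_control_step(wheel, vehicle, throttle)
    print(f"throttle -> {throttle:.1f}")
```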
Research is ongoing and prototypes of autonomous ground vehicles exist.
Cars:
See also: Autonomous car
Extensive automation for cars focuses on either introducing robotic cars or modifying modern car designs to be semi-autonomous.
Semi-autonomous designs could be implemented sooner, as they rely less on technology that is still at the forefront of research. An example is the dual-mode monorail. Groups such as RUF (Denmark) and TriTrack (USA) are working on projects consisting of specialized private cars that are driven manually on normal roads but can also dock onto a monorail or guideway along which they are driven autonomously.
As a method of automating cars without modifying them as extensively as a fully robotic car, automated highway systems (AHS) aim to construct highway lanes equipped with, for example, magnets to guide the vehicles. Automated vehicles also include automatic braking, referred to as an Automatic Vehicle Braking System (AVBS). Highway computers would manage the traffic and direct the cars to avoid crashes.
The European Commission has established a smart car development program called the Intelligent Car Flagship Initiative. The goals of that program include:
There are plenty of further uses for automation in relation to cars. These include:
- Assured Clear Distance Ahead
- Adaptive headlamps
- Advanced Automatic Collision Notification, such as OnStar
- Intelligent Parking Assist System
- Automatic Parking
- Automotive night vision with pedestrian detection
- Blind spot monitoring
- Driver Monitoring System
- Robotic car or self-driving car which may result in less-stressed "drivers", higher efficiency (the driver can do something else), increased safety and less pollution (e.g. via completely automated fuel control)
- Precrash system
- Safe speed governing
- Traffic sign recognition
- Following another car on a motorway – "enhanced" or "adaptive" cruise control, as used by Ford and Vauxhall
- Distance control assist – as developed by Nissan
- Dead man's switch – there is a move to introduce dead man's braking into automotive applications, primarily heavy vehicles, and there may also be a need to add penalty switches to cruise controls.
Singapore also announced a set of provisional national standards on January 31, 2019, to guide the autonomous vehicle industry. The standards, known as Technical Reference 68 (TR68), will promote the safe deployment of fully driverless vehicles in Singapore, according to a joint press release by Enterprise Singapore (ESG), Land Transport Authority (LTA), Standards Development Organisation and Singapore Standards Council (SSC).
Shared autonomous vehicles:
Following recent developments in autonomous cars, shared autonomous vehicles are now able to run in ordinary traffic without the need for embedded guidance markers. So far the focus has been on low speeds, around 20 miles per hour (32 km/h), with short, fixed routes for the "last mile" of journeys.
This means issues of collision avoidance and safety are significantly less challenging than those for automated cars, which seek to match the performance of conventional vehicles.
Aside from 2getthere ("ParkShuttle"), three companies - Ligier ("Easymile EZ10"), Navya ("ARMA" & "Autonom Cab") and RDM Group ("LUTZ Pathfinder") - are manufacturing and actively testing such vehicles. Two other companies have produced prototypes, Local Motors ("Olli") and the GATEway project.
Beside these efforts, Apple is reportedly developing an autonomous shuttle, based on a vehicle from an existing automaker, to transfer employees between its offices in Palo Alto and Infinite Loop, Cupertino. The project called "PAIL", after its destinations, was revealed in August 2017 when Apple announced it had abandoned development of autonomous cars.
Click on any of the following blue hyperlinks for more about Vehicular Automation:
___________________________________________________________________________
Automated driving system:
An automated driving system is a complex combination of components that can be defined as a system in which perception, decision-making, and operation of the automobile are performed by electronics and machinery instead of a human driver, introducing automation into road traffic.
This includes handling of the vehicle and the destination, as well as awareness of the surroundings.
While the automated system has control over the vehicle, it allows the human operator to leave all responsibilities to the system.
Overview:
The automated driving system is generally an integrated package of individual automated systems operating in concert. Automated driving implies that the driver has given up the ability to drive (i.e., all appropriate monitoring, agency, and action functions) to the vehicle automation system. Even though the driver may be alert and ready to take action at any moment, the automation system controls all functions.
Automated driving systems are often conditional, which implies that the automation system is capable of automated driving, but not for all conditions encountered in the course of normal operation. Therefore, a human driver is functionally required to initiate the automated driving system, and may or may not do so when driving conditions are within the capability of the system.
When the vehicle automation system has assumed all driving functions, the human is no longer driving the vehicle but continues to assume responsibility for the vehicle's performance as the vehicle operator.
The automated vehicle operator is not functionally required to actively monitor the vehicle's performance while the automation system is engaged, but the operator must be available to resume driving within several seconds of being prompted to do so, as the system has limited conditions of automation.
While the automated driving system is engaged, certain conditions may prevent real-time human input, but for no more than a few seconds. The operator is able to resume driving at any time subject to this short delay. When the operator has resumed all driving functions, he or she reassumes the status of the vehicle's driver.
Success in the technology:
Automated driving systems have been most successful in settings such as rural roads.
Rural roads have lower traffic volumes and less variation in driving abilities and driver types.
"The greatest challenge in the development of automated functions is still inner-city traffic, where an extremely wide range of road users must be considered from all directions."
The technology is progressing toward a more reliable way for automated cars to switch between auto mode and driver mode. Auto mode is the mode in which the automated system takes over driving, while driver mode is the mode in which the operator controls all functions of the car and takes responsibility for operating the vehicle (the automated driving system is not engaged).
This definition would include vehicle automation systems that may be available in the near term, such as traffic-jam assist or full-range automated cruise control, if such systems were designed so that the human operator could reasonably divert attention (monitoring) away from the performance of the vehicle while the automation system is engaged. This definition would also include automated platooning (as conceptualized by the SARTRE project).
The SARTRE Project:
The SARTRE project's main goal is to create platooning: a train of automated cars that provides comfort and allows the driver of a vehicle to arrive safely at a destination.
Drivers passing these platoons can join in by simply activating the automated driving system, which coordinates with the truck that leads the platoon. The SARTRE project takes the familiar idea of a train and combines it with automated driving technology. This is intended to allow easier transportation through cities and ultimately to help traffic flow in heavy automobile traffic.
SARTRE & modern day:
In some parts of the world, self-driving cars have been tested in real-life situations, such as in Pittsburgh. The self-driving Uber has been put to the test around the city, with different types of drivers and different traffic situations. In addition to testing of, and successes with, automated cars, there has also been extensive testing in California on automated buses.
The lateral control of the automated buses uses magnetic markers, as in the platoon at San Diego, while the longitudinal control of the automated truck platoon uses millimeter-wave radio and radar.
Current examples include the Google car and Tesla's models. Tesla has redesigned automated driving, creating car models that allow drivers to enter a destination and let the car take over. These are two modern-day examples of automated driving system cars.
Levels of automation according to SAE:
The U.S. Department of Transportation's National Highway Traffic Safety Administration (NHTSA) provided a standard classification system in 2013, which defined five different levels of automation, ranging from level 0 (no automation) to level 4 (full automation).
Since then, the NHTSA has updated its standards to align with the classification system defined by SAE International. SAE International defines six levels of automation in its classification standard, document SAE J3016, ranging from 0 (no automation) to 5 (full automation).
Level 0 – No automation:
The driver is in complete control of the vehicle and the system does not interfere with driving. Systems that may fall into this category are forward collision warning systems and lane departure warning systems.
Level 1 – Driver assistance:
The driver is in control of the vehicle, but the system can modify the speed and steering direction of the vehicle. Systems that may fall into this category are adaptive cruise control and lane keep assist.
Level 2 – Partial automation:
The driver must be able to control the vehicle if corrections are needed, but the driver is no longer in control of the speed and steering of the vehicle. Parking assistance is an example of a system that falls into this category along with Tesla's autopilot feature.
Another system in this category is the DISTRONIC PLUS system created by Mercedes-Benz. It is important to note that the driver must not be distracted in Level 0 to Level 2 modes.
Level 3 – Conditional automation:
The system is in complete control of vehicle functions such as speed, steering, and monitoring the environment under specific conditions. Such conditions may be met, for example, on a fenced-off highway with no intersections, at limited driving speeds, or in a boxed-in driving situation.
A human driver must be ready to intervene when requested by the system. If the driver does not respond within a predefined time, or if a failure occurs in the system, the system performs a safety stop in the ego lane (no lane change allowed). The driver is only allowed to be partially distracted, such as by checking text messages; taking a nap is not allowed.
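The takeover behavior just described can be pictured as a small state machine: drive automatically, request a takeover, and either hand control to the driver or perform a safety stop. The Python sketch below illustrates that logic; the 10-second response window is an assumed example value, not a figure from SAE J3016.

```python
# Simplified, illustrative sketch of Level 3 takeover handling.
# The response window (10 s) is an assumed example value.
AUTOMATED, TAKEOVER_REQUESTED, DRIVER_IN_CONTROL, SAFETY_STOP = range(4)

def step(state: int, seconds_since_request: float, driver_responded: bool) -> int:
    if state == TAKEOVER_REQUESTED:
        if driver_responded:
            return DRIVER_IN_CONTROL
        if seconds_since_request > 10.0:   # predefined time elapsed
            return SAFETY_STOP             # stop in ego lane, no lane change
    return state

state = TAKEOVER_REQUESTED
state = step(state, 4.0, driver_responded=False)    # still waiting for the driver
state = step(state, 12.0, driver_responded=False)   # window expired -> safety stop
print(state == SAFETY_STOP)                          # True
```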
Level 4 – High automation:
The system is in complete control of the vehicle and human presence is no longer needed, but its applications are limited to specific conditions. An example of a system being developed that falls into this category is the Waymo self-driving car service.
If the actual motoring condition exceeds the performance boundaries, the system does not have to ask the human to intervene but can choose to abort the trip in a safe manner, e.g. park the car.
Level 5 – Full automation:
The system is capable of the same functions as Level 4, but it can operate in all driving conditions. The human is equivalent to "cargo" at Level 5. Currently, there are no driving systems at this level.
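As a compact recap of the six levels just described, the following Python mapping condenses each level into one line. The wording paraphrases the descriptions above and is not the official SAE J3016 text.

```python
# Condensed, paraphrased summary of the SAE J3016 levels described above.
SAE_LEVELS = {
    0: "No automation: driver does everything; system only warns.",
    1: "Driver assistance: system adjusts speed or steering.",
    2: "Partial automation: system handles speed and steering; driver supervises.",
    3: "Conditional automation: system drives in limited conditions; driver must take over on request.",
    4: "High automation: no driver needed within specific conditions.",
    5: "Full automation: system drives in all conditions.",
}

for level, summary in SAE_LEVELS.items():
    print(f"Level {level}: {summary}")
```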
Risks and liabilities:
See also: Computer security § Automobiles, and Autonomous car liability
Many automakers, such as Ford and Volvo, have announced plans to offer fully automated cars in the future. Extensive research and development is being put into automated driving systems, but the biggest problem automakers cannot control is how drivers will use the system.
Drivers are urged to stay attentive, and safety warnings are implemented to alert the driver when corrective action is needed. Tesla Motors has one recorded incident that resulted in a fatality involving the automated driving system in the Tesla Model S. The accident report reveals that the accident was a result of the driver being inattentive and the Autopilot system not recognizing the obstruction ahead.
Another weakness of automated driving systems is that unpredictable events, such as weather or the driving behavior of others, may cause fatal accidents when the sensors that monitor the vehicle's surroundings cannot support corrective action.
To overcome some of the challenges for automated driving systems, novel methodologies based on virtual testing, traffic flow simulation, and digital prototypes have been proposed, especially when novel algorithms based on artificial intelligence approaches are employed, which require extensive training and validation data sets.
The AI Effect
Pictured below: 4 Positive Effects of AI Use in Email Marketing
- YouTube Video: Crypto Altcoin With Serious Potential! (Effect.Ai - EFX)
- YouTube Video: The Real Reason to be Afraid of Artificial Intelligence | Peter Haas | TEDxDirigo
- YouTube Video: 19 Industries The Blockchain* Will Disrupt
Pictured below: 4 Positive Effects of AI Use in Email Marketing
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.
Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."
AI researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"
"The AI effect" tries to redefine AI to mean:
AI is anything that has not been done yet:
A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.
Pamela McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the "failures", the tough nuts that couldn't yet be cracked."
When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used "brute force methods" and it wasn't real intelligence. Fred Reed writes:
"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright."
Douglas Hofstadter expresses the AI effect concisely by quoting Tesler's Theorem:
"AI is whatever hasn't been done yet."
When problems have not yet been formalized, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by the computer and the other part by the human. This formalization is referred to as a human-assisted Turing machine.
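A simple way to picture this split is a human-in-the-loop routine in which the machine answers when it is confident and defers to a person otherwise. The Python sketch below is purely illustrative; the classifier, threshold, and prompt are hypothetical placeholders, not a formal human-assisted Turing machine.

```python
# Illustrative sketch of splitting a task between computer and human:
# the machine answers when confident, otherwise the human does.
def machine_classify(item: str) -> tuple[str, float]:
    # Stand-in for any automated model: returns (label, confidence).
    return ("spam", 0.55) if "offer" in item else ("not spam", 0.95)

def human_classify(item: str) -> str:
    # Stand-in for the human part of the computation.
    return input(f"Please label {item!r}: ")

def classify(item: str, threshold: float = 0.9) -> str:
    label, confidence = machine_classify(item)
    return label if confidence >= threshold else human_classify(item)

print(classify("meeting at 3pm"))   # confident -> machine answers
# classify("special offer!")        # low confidence -> falls back to the human
```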
AI applications become mainstream:
Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI.
Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."
According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"
Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"
Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."
Legacy of the AI winter:
Main article: AI winter
Many AI researchers find that they can procure more funding and sell more software if they avoid the tarnished name of "artificial intelligence" and instead pretend their work has nothing to do with intelligence at all. This was especially true in the early 1990s, during the second "AI winter".
Patty Tascarella writes "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding".
Saving a place for humanity at the top of the chain of being:
Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe". By discounting artificial intelligence people can continue to feel unique and special.
Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence.
A related effect has been noted in the history of animal cognition and in consciousness studies: every time a capacity formerly thought to be uniquely human is discovered in animals (e.g., the ability to make tools, or passing the mirror test), the overall importance of that capacity is deprecated.
Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."
See also:
- "If It Works, It's Not AI: A Commercial Look at Artificial Intelligence startups"
- ELIZA effect
- Functionalism (philosophy of mind)
- Moravec's paradox
- Chinese room
Artificial Intelligence: Its History, Timeline and Progress
- YouTube Video: Artificial Intelligence, the History and Future - with Chris Bishop
- YouTube Video about Artificial Intelligence: Mankind's Last Invention?
- YouTube Video: Sam Harris on Artificial Intelligence
History of artificial intelligence
The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen.
The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.
Eventually, it became obvious that they had grossly underestimated the difficulty of the project.
In 1973, in response to criticism from James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "AI winter".
Seven years later, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.
Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to new methods, the application of powerful computer hardware, and the collection of immense data sets.
Click on any of the following blue hyperlinks for more about the History of Artificial Intelligence:
The following is the Timeline of Artificial Intelligence:
Progress in artificial intelligence:
Artificial intelligence applications have been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys.
However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry."
In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes.
Kaplan and Haenlein structure artificial intelligence along three evolutionary stages:
To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject matter expert Turing tests.
Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results.
Click on any of the following blue hyperlinks for more about the Progress of Artificial Intelligence:
The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen.
The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.
Eventually, it became obvious that they had grossly underestimated the difficulty of the project.
In 1973, in response to the criticism from James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "AI winter".
Seven years later, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s the investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.
Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to new methods, the application of powerful computer hardware, and the collection of immense data sets.
Click on any of the following blue hyperlinks for more about the History of Artificial Intelligence:
- AI in myth, fiction and speculation
- Automatons
- Formal reasoning
- Computer science
- The birth of artificial intelligence 1952–1956
- The golden years 1956–1974
- The first AI winter 1974–1980
- Boom 1980–1987
- Bust: the second AI winter 1987–1993
- AI 1993–2011
- Deep learning, big data and artificial general intelligence: 2011–present
- See also:
The following is the Timeline of Artificial Intelligence:
- To 1900
- 1901–1950
- 1950s
- 1960s
- 1970s
- 1980s
- 1990s
- 2000s
- 2010s
- See also:
- "Brief History (timeline)", AI Topics, Association for the Advancement of Artificial Intelligence
- "Timeline: Building Smarter Machines", New York Times, 24 June 2010
Progress in artificial intelligence:
Artificial intelligence applications have been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys.
However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry."
In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes.
Kaplan and Haenlein structure artificial intelligence along three evolutionary stages:
- 1) artificial narrow intelligence – applying AI only to specific tasks;
- 2) artificial general intelligence – applying AI to several areas, and able to autonomously solve problems it was never even designed for;
- and 3) artificial super intelligence – applying AI to any area, capable of scientific creativity, social skills, and general wisdom.
To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject matter expert Turing tests.
Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results.
Click on any of the following blue hyperlinks for more about the Progress of Artificial Intelligence:
(WEBMD) HEALTH NEWS: How AI is Transforming Health Care
Picture below: Courtesy of WebMD
Jan. 2, 2020 -- Matthew Might’s son Bertrand was born with a devastating, ultra-rare genetic disorder.
Now 12, Bertrand has vibrant eyes, a quick smile, and a love of dolphins. But he nearly died last spring from a runaway infection. He can’t sit up on his own anymore, and he struggles to use his hands to play with toys. Puberty will likely wreak more havoc.
Might, who has a PhD in computer science, has been able to use his professional expertise to change the trajectory of his son’s life. With computational simulation and by digitally combing through vast amounts of data, Might discovered two therapies that have extended Bertrand’s life and improved its quality.
Similar artificial intelligence (AI) enabled Colin Hill to determine which blood cancer patients are likely to gain the most from bone marrow transplants. The company Hill runs, GNS Healthcare, found a genetic signature in some multiple myeloma patients that suggested they would benefit from a transplant. For others, the risky, painful, and expensive procedure would probably only provide false hope.
Hospitals and doctors’ offices collect reams of data on their patients—everything from blood pressure to mobility measures to genetic sequencing. Today, most of that data sits on a computer somewhere, helping no one. But that is slowly changing as computers get better at using AI to find patterns in vast amounts of data and as recording, storing, and analyzing information gets cheaper and easier.
“I think the average patient or future patient is already being touched by AI in health care. They’re just not necessarily aware of it,” says Chris Coburn, chief innovation officer for Partners HealthCare System, a hospital and physicians network based in Boston.
AI has already helped medical administrators and schedulers with billing and making better use of providers’ time—though it still drives many doctors (and patients) crazy because of the tediousness of the work and the time spent typing rather than interacting with the patient.
AI remains a little further from reality when it comes to patient care, but “I could not easily name a [health] field that doesn’t have some active work as it relates to AI,” says Coburn, who mentions pathology, radiology, spinal surgery, cardiac surgery, and dermatology, among others.
Might’s and Hill’s stories are forerunners of a coming transformation that will enable the full potential of AI in medicine, they and other experts say. Such digital technology has been transforming other industries for decades—think cell phone traffic apps for commuting or GPS apps that measure weather, wind, and wave action to develop the most fuel-efficient shipping routes across oceans. Though slowed by the need for patient privacy and other factors, AI is finally making real inroads into medical care.
Supporters say AI promises earlier cancer diagnoses and shorter timelines for developing and testing new medications. Natural language processing should allow doctors to detach themselves from their keyboards. Wearable sensors and data analysis will offer a richer view of patients’ health as it unfolds over time.
Concerns About Digital Care:
AI, however, does have its limitations.
Some worry that digitizing medicine will cost people their jobs—for instance, when computers can read medical scans more accurately than people. But there will always be a major role for humans to shape the diagnostic process, says Meera Hameed, MD, chief of the surgical pathology service at Memorial Sloan Kettering Cancer Center in New York City. She says algorithms that can read digital scans will help integrate medical information, such as diagnosis, lab tests, and genetics, so pathologists can decide what to do with that information. “We will be the ones interpreting that data and putting it all together,” she says.
The need for privacy and huge amounts of data remain a challenge for AI in medicine, says Mike Nohaile, PhD, senior vice president of strategy, commercialization, and innovation for pharmaceutical giant Amgen. In large data sets, names are removed, but people can be re-identified today by their genetic code. AI is also greedy for data. While a child can learn the difference between a cat and a dog by seeing a handful of examples, an algorithm might need 50,000 data points to make the same distinction.
Computer scientists who build digital algorithms can also unintentionally introduce racial and demographic bias. Heart attacks might be driven by one factor in one group of people, while in another population, a different factor might be the main cause, Nohaile says. “I don’t want a doctor saying to someone, ‘You’re at no or low risk’ and it’s wrong,” he says. “If it does go wrong, it probably will fall disproportionately on disadvantaged populations.” Also, he says, today, the algorithms used to run AI are often hard to understand and interpret. “I don’t want to trust a black box to make decisions because I don’t know if it’s been biased,” Nohaile says. “We think about that a lot.”
That said, recent advances in digital analysis have enabled computers to draw more meaningful conclusions from large data sets. And the quality of medical information has improved dramatically over the last six years, he says, thanks to the Affordable Care Act, the national insurance program championed by then-President Barack Obama that required providers to digitize their records in order to receive federal reimbursements.
Companies like Amgen are using AI to speed drug development and clinical trials, Nohaile says. Large patient data sets can help companies identify patients who are well suited for research trials, allowing those trials to proceed faster.
Researchers can also move more quickly when they have AI to filter and make sense of reams of scientific data. And improvements in natural language processing are boosting the quality of medical records, making analyses more accurate. This will soon help patients better understand their doctor’s instructions and their own condition, Nohaile says.
Preventing Medical Errors:
AI can also help prevent medical mistakes and flag those most at risk for problems, says Robbie Freeman, RN, MS, vice president of clinical innovation at the Mount Sinai Health System in New York. “We know that hospitals are still a place where a lot of avoidable harm can happen,” he says. Freeman’s team at Mount Sinai develops AI-powered tools to prevent some of those situations.
One algorithm they created combs through medical records to determine which patients are at increased risk of falling. Notifying the staff of this extra risk means they can take steps to prevent accidents. Freeman says the predictive model his team developed outperforms the previous model by 30% to 40%.
They’ve trained another system to identify patients at high risk for malnutrition who might benefit by spending time with a hospital nutritionist. That algorithm “learns” from new data, Freeman says, so if a dietitian visits a patient labeled at-risk and finds no problem, their conclusion refines the model. “This is where AI has tremendous potential,” Freeman says, “to really power the accuracy for the tools we have for keeping patients safe.”
While much of the information in these algorithms was already being collected, it would often go unnoticed. Freeman says that during his six years as a nurse, he frequently felt like he was “documenting into a black hole.” Now, algorithms can evaluate how a patient is changing over time and can reveal a composite picture, rather than identifying 100 different categories of information. “The data was always there, but the algorithms make it actionable,” he says.
Managing such enormous quantities of data remains one of the biggest challenges for AI. At Mount Sinai, Freeman has access to billions of data points—going back to 2010 for 50,000 inpatients a year. Improvements in computing technology have allowed his group to make better use of these data points when designing algorithms. “Every year it gets a little easier and a little less expensive to do,” he says. “I don’t think we could have done it five years ago.”
But because algorithms require so much data to make accurate predictions, smaller health systems that don’t have access to this level of data might end up with unreliable or useless results, he warns.
Big Benefits — But A Ways To Go:
The improvements in data are beginning to yield benefits to patients, says Hill, chairman, CEO, and co-founder of GNS Healthcare, which is based in Cambridge, Massachusetts. Hill thinks AI algorithms like the one that suggests which patients will benefit from bone marrow transplants have the potential to save millions or more in health care spending by matching patients with therapies that are most likely to help them.
Over the next 3 to 5 years, the quality of data will improve even more, he predicts, allowing the seamless combination of information such as disease treatments with clinical data such as a patient’s response to medication.
At the moment, Nohaile says the biggest problem with AI in medicine is that people overestimate what it can do. AI is much closer to a spreadsheet than to human intelligence, he says, laughing at the idea that it will rival a doctor or nurse’s abilities anytime soon: “You use a spreadsheet to help the human do a much more efficient and effective job at what they do.” And that, he says, is how people should view AI.
List of Artificial Intelligence Projects
- YouTube Video: Jeff Dean Talks Google Brain and Brain Residency
- YouTube Video: Cognitive Architectures (MIT)
- YouTube Video: Siri vs Alexa vs Google vs Cortana: news question challenge. TechNewOld - AI Arena #1.
The following is a list of current and past, non-classified notable artificial intelligence projects.
Specialized projects:
Brain-inspired:
- Blue Brain Project, an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level.
- Google Brain, a deep learning project, part of Google X, attempting to achieve intelligence similar or equal to human level.
- Human Brain Project
- NuPIC, an open source implementation by Numenta of its cortical learning algorithm.
- 4CAPS, developed at Carnegie Mellon University under Marcel A. Just
- ACT-R, developed at Carnegie Mellon University under John R. Anderson.
- AIXI, Universal Artificial Intelligence developed by Marcus Hutter at IDSIA and ANU.
- CALO, a DARPA-funded, 25-institution effort to integrate many artificial intelligence approaches (natural language processing, speech recognition, machine vision, probabilistic logic, planning, reasoning, and many forms of machine learning) into an AI assistant that learns to help manage your office environment.
- CHREST, developed under Fernand Gobet at Brunel University and Peter C. Lane at the University of Hertfordshire.
- CLARION, developed under Ron Sun at Rensselaer Polytechnic Institute and University of Missouri.
- CoJACK, an ACT-R inspired extension to the JACK multi-agent system that adds a cognitive architecture to the agents for eliciting more realistic (human-like) behaviors in virtual environments.
- Copycat, by Douglas Hofstadter and Melanie Mitchell at Indiana University.
- DUAL, developed at the New Bulgarian University under Boicho Kokinov.
- FORR developed by Susan L. Epstein at The City University of New York.
- IDA and LIDA, implementing Global Workspace Theory, developed under Stan Franklin at the University of Memphis.
- OpenCog Prime, developed using the OpenCog Framework.
- Procedural Reasoning System (PRS), developed by Michael Georgeff and Amy L. Lansky at SRI International.
- Psi-Theory developed under Dietrich Dörner at the Otto-Friedrich University in Bamberg, Germany.
- R-CAST, developed at the Pennsylvania State University.
- Soar, developed under Allen Newell and John Laird at Carnegie Mellon University and the University of Michigan.
- Society of mind and its successor the Emotion machine proposed by Marvin Minsky.
- Subsumption architectures, developed e.g. by Rodney Brooks (though it could be argued whether they are cognitive).
- AlphaGo, software developed by Google that plays the Chinese board game Go.
- Chinook, a computer program that plays English draughts; the first to win the world champion title in the competition against humans.
- Deep Blue, a chess-playing computer developed by IBM which beat Garry Kasparov in 1997.
- FreeHAL, a self-learning conversation simulator (chatterbot) which uses semantic nets to organize its knowledge in order to imitate human conversational behavior as closely as possible.
- Halite, an artificial intelligence programming competition created by Two Sigma.
- Libratus, a poker AI that beat world-class poker players in 2017, intended to be generalisable to other applications.
- Quick, Draw!, an online game developed by Google that challenges players to draw a picture of an object or idea and then uses a neural network to guess what the drawing is.
- Stockfish AI, an open source chess engine currently ranked the highest in many computer chess rankings.
- TD-Gammon, a program that learned to play world-class backgammon partly by playing against itself (temporal difference learning with neural networks).
- Serenata de Amor, a project for the analysis of public expenditures and the detection of discrepancies.
- Braina, an intelligent personal assistant application with a voice interface for Windows OS.
- Cyc, an attempt to assemble an ontology and database of everyday knowledge, enabling human-like reasoning.
- Eurisko, a language by Douglas Lenat for solving problems which consists of heuristics, including some for how to use and change its heuristics.
- Google Now, an intelligent personal assistant with a voice interface in Google's Android and Apple Inc.'s iOS, as well as Google Chrome web browser on personal computers.
- Holmes, a new AI created by Wipro.
- Microsoft Cortana, an intelligent personal assistant with a voice interface in Microsoft's various Windows 10 editions.
- Mycin, an early medical expert system.
- Open Mind Common Sense, a project based at the MIT Media Lab to build a large common sense knowledge base from online contributions.
- P.A.N., a publicly available text analyzer.
- Siri, an intelligent personal assistant and knowledge navigator with a voice-interface in Apple Inc.'s iOS and macOS.
- SNePS, simultaneously a logic-based, frame-based, and network-based knowledge representation, reasoning, and acting system.
- Viv (software), a new AI by the creators of Siri.
- Wolfram Alpha, an online service that answers queries by computing the answer from structured data.
- AIBO, the robot pet for the home, grew out of Sony's Computer Science Laboratory (CSL).
- Cog, a robot developed by MIT to study theories of cognitive science and artificial intelligence, now discontinued.
- Melomics, a bio-inspired technology for the composition and synthesis of music, where computers develop their own style rather than mimicking musicians.
- AIML, an XML dialect for creating natural language software agents.
- Apache Lucene, a high-performance, full-featured text search engine library written entirely in Java.
- Apache OpenNLP, a machine learning based toolkit for the processing of natural language text. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking and parsing.
- Artificial Linguistic Internet Computer Entity (A.L.I.C.E.), an award-winning natural language processing chatterbot.
- Cleverbot, successor to Jabberwacky, now with 170m lines of conversation, Deep Context, fuzziness and parallel processing. Cleverbot learns from around 2 million user interactions per month.
- ELIZA, a famous 1966 computer program by Joseph Weizenbaum, which parodied person-centered therapy.
- Jabberwacky, a chatterbot by Rollo Carpenter, aiming to simulate natural human chat.
- Mycroft, a free and open-source intelligent personal assistant that uses a natural language user interface.
- PARRY, another early chatterbot, written in 1972 by Kenneth Colby, attempting to simulate a paranoid schizophrenic.
- SHRDLU, an early natural language processing computer program developed by Terry Winograd at MIT from 1968 to 1970.
- SYSTRAN, a machine translation technology by the company of the same name, used by Yahoo!, AltaVista and Google, among others.
- 1 the Road, the first novel marketed by an AI.
- Synthetic Environment for Analysis and Simulations (SEAS), a model of the real world used by Homeland Security and the United States Department of Defense that uses simulation and AI to predict and evaluate future events and courses of action.
Multipurpose projects:
Software libraries:
- Apache Mahout, a library of scalable machine learning algorithms.
- Deeplearning4j, an open-source, distributed deep learning framework written for the JVM.
- Keras, a high level open-source software library for machine learning (works on top of other libraries).
- Microsoft Cognitive Toolkit (previously known as CNTK), an open source toolkit for building artificial neural networks.
- OpenNN, a comprehensive C++ library implementing neural networks.
- PyTorch, an open-source tensor and dynamic neural network library in Python.
- TensorFlow, an open-source software library for machine learning.
- Theano, a Python library and optimizing compiler for manipulating and evaluating mathematical expressions, especially matrix-valued ones.
- Neural Designer, a commercial deep learning tool for predictive analytics.
- Neuroph, a Java neural network framework.
- OpenCog, a GPL-licensed framework for artificial intelligence written in C++, Python and Scheme.
- RapidMiner, an environment for machine learning and data mining, now developed commercially.
- Weka, a free implementation of many machine learning algorithms in Java.
- Data Applied, a web based data mining environment.
- Grok, a service that ingests data streams and creates actionable predictions in real time.
- Watson, a pilot service by IBM to uncover and share data-driven insights, and to spur cognitive applications.
See also:
Machine Learning as a Sub-set of AI including PEW Research Article and Video
- YouTube Video: Machine Learning Basics | What Is Machine Learning? | Introduction To Machine Learning | Simplilearn
- YouTube Video: MIT Introduction to Deep Learning
- YouTube Video: Introduction to Machine Learning vs. Deep Learning
What is machine learning, and how does it work?
At Pew Research Center, we collect and analyze data in a variety of ways. Besides asking people what they think through surveys, we also regularly study things like images, videos and even the text of religious sermons.
In a digital world full of ever-expanding datasets like these, it’s not always possible for humans to analyze such vast troves of information themselves. That’s why our researchers have increasingly made use of a method called machine learning. Broadly speaking, machine learning uses computer programs to identify patterns across thousands or even millions of data points. In many ways, these techniques automate tasks that researchers have done by hand for years.
Our latest video explainer – part of our Methods 101 series – explains the basics of machine learning and how it allows researchers at the Center to analyze data on a large scale. To learn more about how we’ve used machine learning and other computational methods in our research, including the analysis mentioned in this video, you can explore recent reports from our Data Labs team.
[End of Article]
___________________________________________________________________________
Machine Learning (Wikipedia)
Machine learning (ML) is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.
Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.
Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning.
Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning. In its application across business problems, machine learning is also referred to as predictive analytics.
Overview:
Machine learning involves computers discovering how they can perform tasks without being explicitly programmed to do so. For simple tasks assigned to computers, it is possible to program algorithms telling the machine how to execute all steps required to solve the problem at hand; on the computer's part, no learning is needed.
For more advanced tasks, it can be challenging for a human to manually create the needed algorithms. In practice, it can turn out to be more effective to help the machine develop its own algorithm, rather than have human programmers specify every needed step.
The discipline of machine learning employs various approaches to help computers learn to accomplish tasks where no fully satisfactory algorithm is available. In cases where vast numbers of potential answers exist, one approach is to label some of the correct answers as valid. This can then be used as training data for the computer to improve the algorithm(s) it uses to determine correct answers. For example, to train a system for the task of digital character recognition, the MNIST dataset has often been used.
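As a concrete illustration of the point above, the following is a minimal sketch (not part of the text above) of training a digit recognizer on the MNIST dataset with Keras, one of the software libraries listed later on this page; the network size and the number of epochs are illustrative assumptions only.

```python
# Minimal sketch: learning digit recognition from labeled MNIST examples.
# Assumes TensorFlow/Keras is installed; layer sizes and epochs are illustrative.
import tensorflow as tf

# MNIST: 60,000 labeled training images of handwritten digits plus 10,000 test images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# A small feed-forward network: flatten each 28x28 image, one hidden layer, 10 outputs.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The labeled examples are the "training data" the algorithm improves from.
model.fit(x_train, y_train, epochs=5)

# Check performance on examples the model has never seen.
model.evaluate(x_test, y_test)
```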
Machine learning approaches:
Early classifications for machine learning approaches sometimes divided them into three broad categories, depending on the nature of the "signal" or "feedback" available to the learning system. These were:
- Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
- Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
- Reinforcement learning: A computer program interacts with a dynamic environment in which it must achieve a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that is analogous to rewards, which it tries to maximize.
- Other approaches have since been developed that do not fit neatly into this three-fold categorization, and sometimes more than one is used by the same machine learning system, for example topic modeling, dimensionality reduction, or meta-learning. As of 2020, deep learning has become the dominant approach for much ongoing work in the field of machine learning. (A minimal code sketch contrasting the three kinds of feedback described above follows this list.)
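The sketch below (again, not part of the original text) contrasts the three kinds of feedback just listed; scikit-learn and a toy two-armed bandit are assumptions chosen purely for brevity.

```python
# Minimal sketch: supervised, unsupervised, and reinforcement signals on toy data.
# scikit-learn and the bandit setup are illustrative assumptions, not from the text.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: inputs X and desired outputs y are both given by a "teacher".
classifier = LogisticRegression().fit(X, y)

# Unsupervised: only X is given; the algorithm must find structure on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Reinforcement (toy two-armed bandit): no labels, only reward feedback to maximize.
rng = np.random.default_rng(0)
estimates, counts = np.zeros(2), np.zeros(2)
for _ in range(1000):
    arm = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(estimates))
    reward = rng.normal(loc=[0.2, 0.8][arm])  # arm 1 pays more on average
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
print(estimates)  # value estimates converge toward the arms' true average rewards
```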
History and relationships to other fields:
See also: Timeline of machine learning
The term machine learning was coined in 1959 by Arthur Samuel, an American IBMer and pioneer in the field of computer gaming and artificial intelligence. A representative book of machine learning research during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification.
Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that a neural network learns to recognize 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.
Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."
This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".
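As a rough illustration of this definition (not from the original text): let the task T be digit classification, the performance measure P be accuracy on held-out examples, and the experience E be the number of labeled examples seen; scikit-learn and its bundled digits dataset are assumptions used for brevity.

```python
# Minimal sketch of Mitchell's E/T/P framing: P (test accuracy) should improve
# as E (the number of labeled training examples) grows. Illustrative only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800):  # growing experience E
    model = LogisticRegression(max_iter=5000).fit(X_train[:n], y_train[:n])
    print(n, round(model.score(X_test, y_test), 3))  # performance P on task T
```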
Relation to artificial intelligence:
As a scientific endeavor, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis.
However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favor.
Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.
Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including Hopfield, Rumelhart and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.
Machine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and probability theory.
As of 2019, many sources continue to assert that machine learning remains a subfield of AI. Yet some practitioners, for example Dr Daniel Hulme, who both teaches AI and runs a company operating in the field, argue that machine learning and AI are separate.
Relation to data mining:
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases).
Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy.
Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge.
Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.
Relation to optimization:
Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples).
The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.
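A minimal sketch of that framing (not from the original text), assuming a one-dimensional linear model and plain NumPy: gradient descent drives down the mean squared error on the training split, while the held-out split measures the generalization that machine learning ultimately cares about.

```python
# Minimal sketch: learning as minimization of a loss function on training examples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=200)  # noisy linear relationship

# Optimization only sees the training split; generalization is judged on the rest.
x_train, y_train, x_test, y_test = x[:150], y[:150], x[150:], y[150:]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    error = (w * x_train + b) - y_train
    # gradients of the mean squared error loss with respect to w and b
    w -= lr * 2 * np.mean(error * x_train)
    b -= lr * 2 * np.mean(error)

train_loss = np.mean(((w * x_train + b) - y_train) ** 2)
test_loss = np.mean(((w * x_test + b) - y_test) ** 2)
print(w, b, train_loss, test_loss)  # low loss on both splits indicates generalization
```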
Relation to statistics:
Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns.
According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field.
Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein "algorithmic model" means more or less the machine learning algorithms like Random forest.
Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.
Theory:
Main articles: Computational learning theory and Statistical learning theory
A core objective of a learner is to generalize from its experience. Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set.
The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.
The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms.
Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error.
For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer.
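A brief numerical sketch of this trade-off (the cubic target function, the noise level and the polynomial degrees below are arbitrary illustrative choices, not taken from the text): fitting the same noisy sample with hypotheses of increasing complexity shows the training error falling steadily while the error on unseen inputs first improves and then worsens.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 30)
y = x ** 3 - x + rng.normal(scale=0.1, size=x.size)    # noisy sample of a cubic function

x_new = np.linspace(-1, 1, 300)                        # unseen inputs
y_new = x_new ** 3 - x_new                             # noiseless ground truth

for degree in (1, 3, 15):                              # hypothesis complexity
    coeffs = np.polyfit(x, y, degree)
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: training error {train_err:.4f}, unseen-data error {test_err:.4f}")
# Degree 1 underfits (both errors high); degree 15 drives the training error down
# further but generalizes worse than degree 3 (overfitting).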
In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results. Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.
Click on any of the following blue hyperlinks for more about Machine Learning:
- Approaches
- Applications
- Limitations
- Model assessments
- Ethics
- Software
- Journals
- Conferences
- See also:
- Automated machine learning – Automated machine learning or AutoML is the process of automating the end-to-end process of machine learning.
- Big data – Information assets characterized by such a high volume, velocity, and variety to require specific technology and analytical methods for its transformation into value
- Explanation-based learning
- Important publications in machine learning – Wikimedia list article
- List of datasets for machine learning research
- Predictive analytics – Statistical techniques analyzing facts to make predictions about unknown events
- Quantum machine learning
- Machine-learning applications in bioinformatics
- Seq2seq
- Fairness (machine learning)
- International Machine Learning Society
- mloss is an academic database of open-source machine learning software.
- Machine Learning Crash Course by Google. This is a free course on machine learning through the use of TensorFlow.
Competitions and Prizes in Artificial Intelligence
- YouTube Video: the David E. Rumelhart Prize
- YouTube Video: International Rank 1 holder in Olympiad shared his experience of studies with askIITians
- YouTube Video: 2020 Vision Product of the Year Award Winner Video: Morpho (AI Software and Algorithms)
There are a number of competitions and prizes to promote research in artificial intelligence.
General machine intelligence:
The David E. Rumelhart prize is an annual award for making a "significant contemporary contribution to the theoretical foundations of human cognition". The prize is $100,000.
The Human-Competitive Award is an annual challenge started in 2004 to reward results "competitive with the work of creative and inventive humans". The prize is $10,000. Entries are required to use evolutionary computing.
The IJCAI Award for Research Excellence is a biannual award given at the IJCAI conference to a researcher in artificial intelligence in recognition of the excellence of their career.
The 2011 Federal Virtual World Challenge, advertised by The White House and sponsored by the U.S. Army Research Laboratory's Simulation and Training Technology Center, offered a total of $52,000 in cash prizes for general artificial intelligence applications, including "adaptive learning systems, intelligent conversational bots, adaptive behavior (objects or processes)" and more.
The Machine Intelligence Prize is awarded annually by the British Computer Society for progress towards machine intelligence.
Kaggle hosts ongoing competitions in which, in its own words, "the world's largest community of data scientists compete to solve most valuable problems".
Conversational behavior:
The Loebner Prize is an annual competition to determine the best Turing test competitors. The winner is the computer system that, in the judges' opinions, demonstrates the "most human" conversational behavior. An additional prize is reserved for a system that, in their opinion, passes a Turing test; this second prize has not yet been awarded.
Automatic control:
Pilotless aircraft:
The International Aerial Robotics Competition is a long-running event begun in 1991 to advance the state of the art in fully autonomous air vehicles. This competition is restricted to university teams (although industry and governmental sponsorship of teams is allowed).
Key to this event is the creation of flying robots which must complete complex missions without any human intervention. Successful entries are able to interpret their environment and make real-time decisions based only on a high-level mission directive (e.g., "find a particular target inside a building having certain characteristics which is among a group of buildings 3 kilometers from the aerial robot launch point").
In 2000, a $30,000 prize was awarded during the 3rd Mission (search and rescue), and in 2008, $80,000 in prize money was awarded at the conclusion of the 4th Mission (urban reconnaissance).
Driverless cars:
The DARPA Grand Challenge is a series of competitions to promote driverless car technology, prompted by a congressional mandate stating that by 2015 one-third of the operational ground combat vehicles of the US Armed Forces should be unmanned.
While the first race had no winner, the second awarded a $2 million prize for the autonomous navigation of a hundred-mile trail, using GPS, computers and a sophisticated array of sensors.
In November 2007, DARPA introduced the DARPA Urban Challenge, a sixty-mile urban area race requiring vehicles to navigate through traffic.
In November 2010 the US Armed Forces extended the competition with the $1.6 million prize Multi Autonomous Ground-robotic International Challenge to consider cooperation between multiple vehicles in a simulated-combat situation.
Roborace will be a global motorsport championship with autonomously driving, electrically powered vehicles. The series will be run as a support series during the Formula E championship for electric vehicles. This will be the first global championship for driverless cars.
Data-mining and prediction:
The Netflix Prize was a competition for the best collaborative filtering algorithm that predicts user ratings for films, based on previous ratings. The competition was held by Netflix, an online DVD-rental service. The prize was $1,000,000.
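As a hedged illustration of the general idea behind such collaborative filtering (a minimal sketch with made-up ratings, not the method that won the prize): a small user-by-film rating matrix with missing entries can be factored into latent user and film vectors by stochastic gradient descent, and the learned vectors then predict ratings that were never observed.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical observed ratings: (user, film) -> rating on a 1-5 scale
ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 2): 1.0, (2, 1): 2.0, (2, 2): 5.0}
n_users, n_films, k = 3, 3, 2

U = rng.normal(scale=0.1, size=(n_users, k))           # latent user factors
V = rng.normal(scale=0.1, size=(n_films, k))           # latent film factors
lr, reg = 0.05, 0.02                                   # learning rate, regularization

for _ in range(2000):                                  # stochastic gradient descent
    for (u, f), r in ratings.items():
        err = r - U[u] @ V[f]
        u_old = U[u].copy()
        U[u] += lr * (err * V[f] - reg * U[u])
        V[f] += lr * (err * u_old - reg * V[f])

print("predicted rating for user 1, film 1:", round(float(U[1] @ V[1]), 2))  # never observed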
The Pittsburgh Brain Activity Interpretation Competition rewarded analysis of fMRI data "to predict what individuals perceive and how they act and feel in a novel Virtual Reality world involving searching for and collecting objects, interpreting changing instructions, and avoiding a threatening dog." The prize in 2007 was $22,000.
The Face Recognition Grand Challenge (May 2004 to March 2006) aimed to promote and advance face recognition technology.
The American Meteorological Society's artificial intelligence competition involves learning a classifier to characterize precipitation based on meteorological analyses of environmental conditions and polarimetric radar data.
Cooperation and coordination:
Robot football:
The RoboCup and FIRA are annual international robot soccer competitions. The International RoboCup Federation's stated challenge is that, by 2050, "a team of fully autonomous humanoid robot soccer players shall win the soccer game, comply with the official rule of the FIFA, against the winner of the most recent World Cup."
Logic, reasoning and knowledge representation:
The Herbrand Award is a prize given by CADE Inc. to honour persons or groups for important contributions to the field of automated deduction. The prize is $1000.
The CADE ATP System Competition (CASC) is a yearly competition of fully automated theorem provers for classical first order logic associated with the CADE and IJCAR conferences. The competition was part of the Alan Turing Centenary Conference in 2012, with total prizes of 9000 GBP given by Google.
The SUMO prize is an annual prize for the best open source ontology extension of the Suggested Upper Merged Ontology (SUMO), a formal theory of terms and logical definitions describing the world. The prize is $3000.
The Hutter Prize for Lossless Compression of Human Knowledge is a cash prize which rewards compression improvements on a specific 100 MB English text file. The prize awards 500 euros for each one percent improvement, up to €50,000. The organizers believe that text compression and AI are equivalent problems; three prizes have already been awarded, at around €2,000.
The Cyc TPTP Challenge is a competition to develop reasoning methods for the Cyc comprehensive ontology and database of everyday common sense knowledge. The prize is 100 euros for "each winner of two related challenges".
The Eternity II challenge was a constraint satisfaction problem very similar to the Tetravex game: the objective was to lay 256 tiles on a 16×16 grid while satisfying a number of constraints. The problem is known to be NP-complete. The prize was US$2,000,000. The competition ended in December 2010.
Games:
The World Computer Chess Championship has been held since 1970. The International Computer Games Association continues to hold an annual Computer Olympiad which includes this event plus computer competitions for many other games.
The Ing Prize was a substantial money prize attached to the World Computer Go Congress, starting from 1985 and expiring in 2000. It was a graduated set of handicap challenges against young professional players with increasing prizes as the handicap was lowered. At the time it expired in 2000, the unclaimed prize was 400,000 NT dollars for winning a 9-stone handicap match.
The AAAI General Game Playing Competition is a competition to develop programs that are effective at general game playing. Given a definition of a game, the program must play it effectively without human intervention. Since the game is not known in advance the competitors cannot especially adapt their programs to a particular scenario. The prize in 2006 and 2007 was $10,000.
The General Video Game AI Competition (GVGAI) poses the problem of creating artificial intelligence that can play a wide, and in principle unlimited, range of games.
Concretely, it tackles the problem of devising an algorithm that is able to play any game it is given, even if the game is not known a priori. Additionally, the contest poses the challenge of creating level and rule generators for any game it is given.
This area of study can be seen as an approximation of general artificial intelligence, with very little room for game-dependent heuristics. The competition runs yearly in different tracks:
- single player planning,
- two-player planning,
- single player learning,
- level and rule generation,
- and each track offers prizes ranging from 200 to 500 US dollars for winners and runners-up.
The 2007 Ultimate Computer Chess Challenge was a competition organised by the World Chess Federation that pitted Deep Fritz against Deep Junior. The prize was $100,000.
The annual Arimaa Challenge offered a $10,000 prize, available until the year 2020, to develop a program that plays the board game Arimaa and defeats a group of selected human opponents. In 2015, David Wu's bot bot_sharp beat the humans, losing only 2 games out of 9. As a result, the Arimaa Challenge was declared over and David Wu received a prize of $12,000 ($2,000 of which was offered by third parties for the 2015 championship).
2K Australia offered a prize worth A$10,000 to develop a game-playing bot for a first-person shooter, with the aim of convincing a panel of judges that it was a human player. The competition started in 2008 and was won in 2012; a further competition was planned for 2014.
The Google AI Challenge was a bi-annual online contest organized by the University of Waterloo Computer Science Club and sponsored by Google that ran from 2009 to 2011. Each year a game was chosen and contestants submitted specialized automated bots to play against other competing bots.
Cloudball had its first round in spring 2012, finishing on June 15. It is an international artificial intelligence programming contest in which users continuously submit, in simple high-level C# code, the actions their soccer teams will take at each time step.
Nuance Communications and AI
- YouTube Video: Microsoft Acquires Nuance
- YouTube Video: Nuance AI Expertise: The Future of Voice Technology
- YouTube Video: Wheelhouse CIO discusses Microsoft's Nuance acquisition
Click Here for Nuance Web Site
Nuance (Wikipedia)
Nuance is an American multinational computer software technology corporation, headquartered in Burlington, Massachusetts, on the outskirts of Boston, that provides speech recognition and artificial intelligence software.
Nuance merged with its competitor in the commercial large-scale speech application business, ScanSoft, in October 2005. ScanSoft was a Xerox spin-off that was bought in 1999 by Visioneer, a hardware and software scanner company, which adopted ScanSoft as the new merged company name. The original ScanSoft had its roots in Kurzweil Computer Products.
In April 2021, Microsoft announced it would buy Nuance Communications. The deal is an all-cash transaction of $19.7 billion, including the company's debt, or $56 a share.
History:
The company that would become Nuance was incorporated in 1992 as Visioneer. In 1999, Visioneer acquired ScanSoft, Inc. (SSFT), and the combined company became known as ScanSoft.
In September 2005, ScanSoft Inc. acquired and merged with Nuance Communications, a natural language spinoff from SRI International. The resulting company adopted the Nuance name. During the prior decade, the two companies competed in the commercial large-scale speech application business.
ScanSoft origins:
In 1974, Raymond Kurzweil founded Kurzweil Computer Products, Inc. to develop the first omni-font optical character-recognition system – a computer program capable of recognizing text written in any normal font.
In 1980, Kurzweil sold his company to Xerox. The company became known as Xerox Imaging Systems (XIS), and later ScanSoft.
In March 1992, a new company called Visioneer, Inc. was founded to develop scanner hardware and software products, such as a sheetfed scanner called PaperMax and the document management software PaperPort.
Visioneer eventually sold its hardware division to Primax Electronics, Ltd. in January 1999. Two months later, in March, Visioneer acquired ScanSoft from Xerox to form a new public company with ScanSoft as the new company-wide name.
Prior to 2001, ScanSoft focused primarily on desktop imaging software such as TextBridge, PaperPort and OmniPage. Beginning with the December 2001 acquisition of Lernout & Hauspie, the company moved into the speech recognition business and began to compete with Nuance.
Lernout & Hauspie had acquired speech recognition company Dragon Systems in June 2001, shortly before becoming bankrupt in October.
Partnership with Siri and Apple Inc.:
Siri is an application that combines speech recognition with advanced natural-language processing. The artificial intelligence, which required advances in the underlying algorithms as well as leaps in processing power on both mobile devices and the servers that share the workload, allows the software to understand words and the intentions behind them.
Acquisitions:
Prior to the 2005 merger, ScanSoft acquired other companies to expand its business. Unlike ScanSoft, Nuance did not actively acquire companies prior to their merger other than the notable acquisition of Rhetorical Systems in November 2004 for $6.7 million. After the merger, the company continued to grow through acquisition.
ScanSoft merges with Nuance; changes company-wide name to Nuance Communications, Inc.:
- September 15, 2005 — ScanSoft acquired and merged with Nuance Communications, of Menlo Park, California, for $221 million.
- October 18, 2005 — the company changed its name to Nuance Communications, Inc.
- March 31, 2006 — Dictaphone Corporation, of Stratford, Connecticut, for $357 million.
- December 29, 2006 — Mobile Voice Control, Inc. of Mason, Ohio.
- March 2007 — Focus Informatics, Inc. Woburn, Massachusetts.
- March 26, 2007 — Bluestar Resources Ltd.
- April 24, 2007 — BeVocal, Inc. of Mountain View, California, for $140 million.
- August 24, 2007 — VoiceSignal Technologies, Inc. of Woburn, Massachusetts.
- August 24, 2007 — Tegic Communications, Inc. of Seattle, Washington, for $265 million. Tegic developed and was the patent owner of T9 technology.
- September 28, 2007 — Commissure, Inc. of New York City, New York, for 217,975 shares of common stock.
- November 2, 2007 — Vocada, Inc. of Dallas, Texas.
- November 26, 2007 — Viecore, Inc. of Mahwah, New Jersey.
- November 26, 2007 — Viecore, FSD. of Eatontown, New Jersey. It was sold to EOIR in 2013.
- May 20, 2008 — eScription, Inc. of Needham, Massachusetts, for $340 million plus 1,294,844 shares of common stock.
- July 31, 2008 — MultiVision Communications Inc. of Markham, Ontario.
- September 26, 2008 — Philips Speech Recognition Systems GmbH (PSRS), a business unit of Royal Philips Electronics of Vienna, Austria, for about €66 million, or US$96.1 million. The acquisition of Philips Speech Recognition Systems sparked an antitrust investigation by the US Department of Justice, focused on medical transcription services; the investigation was closed in December 2009.
- October 1, 2008 — SNAPin Software, Inc. of Bellevue, Washington — $180 million in shares of common stock.
- January 15, 2009 — Nuance Acquires IBM's patents Speech Technology rights.
- April 10, 2009 — Zi Corporation of Calgary, Alberta, Canada for approximately $35 million in cash and common stock.
- May 2009 — the speech technology department of Harman International Industries.
- July 14, 2009 — Jott Networks Inc. of Seattle, Washington.
- September 18, 2009 — nCore Ltd. of Oulu, Finland.
- October 5, 2009 — Ecopy of Nashua, New Hampshire. Under the terms of the agreement, net consideration was approximately $54 million in Nuance common stock.
- December 30, 2009 — Spinvox of Marlow, UK for $102.5m comprising $66m in cash and $36.5m in stock.
- February 16, 2010 — Nuance announced they acquired MacSpeech for an undisclosed amount.
- February 2010 — Nuance acquired Language and Computing, Inc., a provider of natural language processing and natural language understanding technology solutions, from Gimv NV, a Belgium-based private equity firm.
- July 2010 — Nuance acquired iTa P/L, an Australian IVR and speech services company.
- November 2010 — Nuance acquired PerSay, a voice biometrics-based authentication company for $12.6 million.
- February 2011 — Nuance acquired Noterize, an Australian company producing software for the Apple iPad.
- June 2011 — Nuance acquired Equitrac, the world leader in print management and cost recovery software.
- June 2011 — Nuance acquired SVOX, a speech technology company specializing in the automotive, mobile, and consumer electronics markets.
- July 2011 — Nuance acquired Webmedx, a provider of medical transcription and editing services. Financial terms of the deal were not disclosed.
- August 2011 — Loquendo announced Nuance acquired it. Loquendo provided a range of speech technologies for telephony, mobile, automotive, embedded and desktop solutions including text-to-speech (TTS), automatic speech recognition (ASR) and voice biometrics solutions. Nuance paid 53 million euros.
- October, 2011 — Nuance acquired Swype, a company that produces input software for touchscreen displays, for more than $100 million.
- December 2011 — Nuance acquired Vlingo, after repeatedly suing Vlingo over patent infringement. The Cambridge-based Vlingo was trying to make voice enabling applications easier, by using their own speech-to-text J2ME/Brew application API.
- April 2012 — Nuance acquired Transcend Services. Transcend utilizes a combination of its proprietary Internet-based voice and data distribution technology, customer based technology, and home-based medical language specialists to convert physicians' voice recordings into electronic documents. It also provides outsourcing transcription and editing services on the customer's platform.
- June 2012 — Nuance acquired SafeCom, a provider of print management and cost recovery software noted for their integration with Hewlett-Packard printing devices.
- September 2012 — Nuance acquired Ditech Networks for $22.5 million.
- September 2012 — Nuance acquired Quantim, QuadraMed's HIM Business — a provider of information technology solutions for the healthcare industry.
- October 2012 — Nuance acquired J.A. Thomas and Associates (JATA) — a provider of physician-oriented, clinical documentation improvement (CDI) programs for the healthcare industry.
- November 2012 — Nuance acquired Accentus.
- January 2013 — Nuance acquired VirtuOz.
- April 2013 — Nuance acquired Copitrak.
- May 2013 — Nuance acquired Tweddle Connect business for $80 million from Tweddle Group.
- July 2013 — Nuance acquired Cognition Technologies Inc.
- October, 2013 — Nuance acquired Varolii (formerly Par3 Communications).
- July, 2014 — Nuance acquired Accelarad (formerly Neurostar Solutions), makers of SeeMyRadiology, a cloud-based medical images and reports exchange network. Accelarad was based in Atlanta, Georgia, with a sales operations office in Birmingham, Alabama.
- June, 2016 — Nuance acquired TouchCommerce, a leader in digital customer service and intelligent engagement solutions with a specialization in live chat.
- August, 2016 — Nuance acquired Montage Healthcare Solutions.
- February, 2017 — Nuance acquired mCarbon for $36M, a mobile value added services provider.
- January 2018 — Nuance acquired iScribes, a medical documentation solutions provider.
- May, 2018 — Nuance acquired Voicebox, an early leader in speech recognition and natural language technologies, for $82M.
- February 8, 2021 — Nuance acquired Saykara.
Acquisition of Nuance Document Imaging by Kofax Inc.:
On February 1, 2019, Kofax Inc. announced the closing of its acquisition of Nuance Communications' Document Imaging Division. By means of this acquisition, Kofax gained Nuance's Power PDF, PaperPort document management, and OmniPage optical character recognition software applications.
Kofax also acquired Copitrak in the closing.
Acquisition by Microsoft:
On April 12, 2021, Microsoft announced that it would buy Nuance Communications for $19.7 billion, or $56 a share, a 22% increase over the previous closing price. Nuance's CEO, Mark Benjamin, will stay with the company. This will be Microsoft's second-biggest deal ever, after its purchase of LinkedIn for $24 billion in 2016.
Artificial intelligence in government
- YouTube Video: Artificial Intelligence in Government
- YouTube Video: Artificial intelligence: Should AI be regulated by governments?
- YouTube Video: The Power of Artificial Intelligence for Government
Artificial intelligence (AI) has a range of uses in government. It can be used to further public policy objectives (in areas such as emergency services, health and welfare), as well as assist the public to interact with the government (through the use of virtual assistants, for example).
According to the Harvard Business Review, "Applications of artificial intelligence to the public sector are broad and growing, with early experiments taking place around the world."
Hila Mehr from the Ash Center for Democratic Governance and Innovation at Harvard University notes that AI in government is not new, with postal services using machine methods in the late 1990s to recognize handwriting on envelopes to automatically route letters.
The use of AI in government comes with significant benefits, including efficiencies resulting in cost savings (for instance by reducing the number of front office staff), and reducing the opportunities for corruption. However, it also carries risks.
Uses of AI in government:
The potential uses of AI in government are wide and varied, with Deloitte considering that "Cognitive technologies could eventually revolutionize every facet of government operations". Mehr suggests that six types of government problems are appropriate for AI applications:
- Resource allocation - such as where administrative support is required to complete tasks more quickly.
- Large datasets - where these are too large for employees to work efficiently and multiple datasets could be combined to provide greater insights.
- Expert shortage - including where basic questions could be answered by AI and niche issues learned over time.
- Predictable scenario - historical data makes the situation predictable.
- Procedural - repetitive tasks where inputs or outputs have a binary answer.
- Diverse data - where data takes a variety of forms (such as visual and linguistic) and needs to be summarized regularly.
Mehr states that "While applications of AI in government work have not kept pace with the rapid expansion of AI in the private sector, the potential use cases in the public sector mirror common applications in the private sector."
Potential and actual uses of AI in government can be divided into three broad categories: those that contribute to public policy objectives; those that assist public interactions with the government; and other uses.
Contributing to public policy objectives:
There is a range of examples of how AI can contribute to public policy objectives. These include:
- Receiving benefits at job loss, retirement, bereavement and child birth almost immediately, in an automated way (thus without requiring any actions from citizens at all)
- Social insurance service provision
- Classifying emergency calls based on their urgency (like the system used by the Cincinnati Fire Department in the United States)
- Detecting and preventing the spread of diseases
- Assisting public servants in making welfare payments and immigration decisions
- Adjudicating bail hearings
- Triaging health care cases
- Monitoring social media for public feedback on policies
- Monitoring social media to identify emergency situations
- Identifying fraudulent benefits claims
- Predicting a crime and recommending optimal police presence
- Predicting traffic congestion and car accidents
- Anticipating road maintenance requirements
- Identifying breaches of health regulations
- Providing personalised education to students
- Marking exam papers
- Assisting with defence and national security (see Artificial intelligence § Military and Applications of artificial intelligence § Other respectively).
- Building symptom-based health chatbots for diagnosis
Assisting public interactions with government:
AI can be used to assist members of the public to interact with government and access government services, for example by:
- Answering questions using virtual assistants or chatbots (see below)
- Directing requests to the appropriate area within government
- Filling out forms
- Assisting with searching documents (e.g. IP Australia’s trade mark search)
- Scheduling appointments
Examples of virtual assistants or chatbots being used by government include the following:
- Launched in February 2016, the Australian Taxation Office has a virtual assistant on its website called "Alex". As at 30 June 2017, Alex could respond to more than 500 questions, had engaged in 1.5 million conversations and resolved over 81% of enquiries at first contact.
- Australia's National Disability Insurance Scheme (NDIS) is developing a virtual assistant called "Nadia" which takes the form of an avatar using the voice of actor Cate Blanchett. Nadia is intended to assist users of the NDIS to navigate the service. Costing some $4.5 million, the project has been postponed following a number of issues. Nadia was developed using IBM Watson, however, the Australian Government is considering other platforms such as Microsoft Cortana for its further development.
- The Australian Government's Department of Human Services uses virtual assistants on parts of its website to answer questions and encourage users to stay in the digital channel. As at December 2018, a virtual assistant called "Sam" could answer general questions about family, job seeker and student payments and related information. The Department also introduced an internally-facing virtual assistant called "MelissHR" to make it easier for departmental staff to access human resources information.
- Estonia is building a virtual assistant which will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, unemployment, ...). One example is the automated registering of babies when they are born.
Other uses:
Other uses of AI in government include:
- Translation
- Drafting documents
Potential benefits:
AI offers potential efficiencies and cost savings for the government. For example, Deloitte has estimated that automation could save US Government employees between 96.7 million and 1.2 billion hours a year, resulting in potential savings of between $3.3 billion and $41.1 billion a year.
The Harvard Business Review has stated that while this may lead a government to reduce employee numbers, "Governments could instead choose to invest in the quality of its services. They can re-employ workers’ time towards more rewarding work that requires lateral thinking, empathy, and creativity — all things at which humans continue to outperform even the most sophisticated AI program."
Potential risks:
Potential risks associated with the use of AI in government include AI becoming susceptible to bias, a lack of transparency in how an AI application may make decisions, and the accountability for any such decisions.
See also:
- AI for Good
- Applications of artificial intelligence
- Artificial intelligence
- Government by algorithm
- Lawbot
- Regulation of algorithms
- Regulation of artificial intelligence
Artificial Intelligence in Heavy Industry
- YouTube Videos: Power Industry 4.0 with Artificial Intelligence
- YouTube Video: Applications of AI across Industries
- YouTube Video: Artificial Intelligence (AI) | Robotics | Robots | Machine Learning | March of the Machines
Artificial intelligence, in modern terms, generally refers to computer systems that mimic human cognitive functions. It encompasses independent learning and problem-solving. While this type of general artificial intelligence has not been achieved yet, most contemporary artificial intelligence projects are currently better understood as types of machine-learning algorithms that can be integrated with existing data to understand, categorize, and adapt sets of data without the need for explicit programming.
AI-driven systems can discover patterns and trends, discover inefficiencies, and predict future outcomes based on historical trends, which ultimately enables informed decision-making. As such, they are potentially beneficial for many industries, notably heavy industry.
While the application of artificial intelligence in heavy industry is still in its early stages, applications are likely to include optimization of asset management and operational performance, as well as identifying efficiencies and decreasing downtime.
Potential benefits:
At each new stage of advancement, AI-driven machines make the manufacturing process easier and bring many other benefits. Technology creates new potential for task automation while increasing the intelligence of human and machine interaction. Some benefits of AI include directed automation, 24/7 production, safer operational environments, and reduced operating costs.
Directed automation:
AI and robots can execute actions repeatedly without any error, and design more competent production models by building automation solutions. They are also capable of eliminating human errors and delivering superior levels of quality assurance on their own.
24/7 production:
While humans must work in shifts to accommodate sleep and mealtimes, robots can keep a production line running continuously. Businesses can expand their production capabilities and meet higher demands for products from global customers due to boosted production from this round-the-clock work performance.
Safer operational environment:
More AI means fewer human laborers performing dangerous and strenuous work. Logically speaking, with fewer humans and more robots performing activities associated with risk, the number of workplace accidents should dramatically decrease. It also offers a great opportunity for exploration because companies do not have to risk human life.
Condensed operating costs:
With AI taking over day-to-day activities, a business will have considerably lower operating costs. Rather than employing humans to work in shifts, it could simply invest in AI.
The only cost incurred would be from maintenance after the machinery is purchased and commissioned.
Environmental impacts:
Self-driving cars are potentially beneficial to the environment. They can be programmed to navigate the most efficient route and reduce idle time, which could result in less fossil fuel consumption and greenhouse gas (GHG) emissions. The same could be said for heavy machinery used in heavy industry. AI can accurately follow a sequence of procedures repeatedly, whereas humans are prone to occasional errors.
Additional benefits of AI:
AI and industrial automation have advanced considerably over the years. There has been an evolution of many new techniques and innovations, such as advances in sensors and the increase of computing capabilities.
AI helps machines gather and extract data, identify patterns, adapt to new trends through machine intelligence, learning, and speech recognition. It also helps to make quick data-driven decisions, advance process effectiveness, minimize operational costs, facilitate product development, and enable extensive scalability.
Potential negatives:
High cost:
Though the cost has been decreasing in the past few years, individual development expenditures can still be as high as $300,000 for basic AI. Small businesses with a low capital investment may have difficulty generating the funds necessary to leverage AI.
For larger companies, the price of AI may be higher, depending on how much AI is involved in the process. Because of higher costs, the feasibility of leveraging AI becomes a challenge for many companies. Nevertheless, the cost of utilizing AI can be cheaper for companies with the advent of open-source artificial intelligence software.
Reduced employment opportunities:
Job opportunities will grow with the advent of AI; however, some jobs might be lost because AI would replace them. Any job that involves repetitive tasks is at risk of being replaced.
In 2017, Gartner predicted 500,000 jobs would be created because of AI, but also predicted that up to 900,000 jobs could be lost because of it. These figures apply only to jobs within the United States.
AI decision-making:
AI is only as intelligent as the individuals responsible for its initial programming. In 2014, an active shooter situation led to people calling Uber to escape the shooting and surrounding area. Instead of recognizing this as a dangerous situation, the algorithm Uber used saw a rise in demand and increased its prices. This type of situation can be dangerous in heavy industry, where one mistake can cost lives or cause injury.
Environmental impacts:
Only 20 percent of electronic waste was recycled in 2016, despite 67 nations having enacted e-waste legislation. Electronic waste is expected to reach 52.2 million tons in 2021. The manufacture of digital devices and other electronics goes hand-in-hand with AI development, which is thus poised to damage the environment.
In September 2015, the German car company Volkswagen was embroiled in an international scandal. The software in its cars activated controls on emissions of nitrogen oxide gases (NOx gases) only when the cars were undergoing a sample test. Once the cars were on the road, the emission controls deactivated and NOx emissions increased up to 40 times.
NOx gases are harmful because they cause significant health problems, including respiratory problems and asthma. Further studies have shown that additional emissions could cause over 1,200 premature deaths in Europe and result in $2.4 million worth of lost productivity.
AI trained to act on environmental variables might have erroneous algorithms, which can lead to potentially negative effects on the environment.
Algorithms trained on biased data will produce biased results. The COMPAS judicial decision support system is one such example of biased data producing unfair outcomes. When machines develop learning and decision-making ability that is not coded by a programmer, the mistakes can be hard to trace and see. As such, the management and scrutiny of AI-based processes are essential.
Effects of AI in the manufacturing industry:
Landing.ai, a startup formed by Andrew Ng, developed machine-vision tools that detect microscopic defects in products at resolutions well beyond human vision. The machine-vision tools use a machine-learning algorithm tested on small volumes of sample images. The computer not only 'sees' the errors but processes the information and learns from what it observes.
In 2014, China, Japan, the United States, the Republic of Korea and Germany together accounted for 70 percent of the total sales volume of robots. In the automotive industry, a sector with a particularly high degree of automation, Japan had the highest density of industrial robots in the world, at 1,414 per 10,000 employees.
Generative design is a new process born from artificial intelligence. Designers or engineers specify design goals (as well as material parameters, manufacturing methods, and cost constraints) in generative design software. The software explores all potential permutations for a feasible solution and generates design alternatives, using machine learning to learn from each iteration which designs work and which fail. The process is said to effectively rent 50,000 computers in the cloud for an hour.
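A hedged sketch of the underlying search idea follows (the design parameters, the toy strength model and the constraint threshold are all invented for illustration and are not from the text): candidate permutations are enumerated, infeasible ones are discarded, and the remaining designs are ranked by an objective, here strength per unit mass.

import itertools

widths = [10, 20, 30, 40]                              # hypothetical design parameters (mm)
heights = [10, 20, 30, 40]
materials = {"steel": 7.85, "aluminium": 2.70}         # density in g/cm^3

best = None
for w, h, (mat, density) in itertools.product(widths, heights, materials.items()):
    strength = w * h * (2.0 if mat == "steel" else 1.0)  # toy strength model
    mass = w * h * density / 100.0
    if strength < 800:                                 # constraint: minimum required strength
        continue                                       # infeasible permutation, discard
    score = strength / mass                            # objective: strength per unit mass
    if best is None or score > best[0]:
        best = (score, w, h, mat)

print("best feasible design (score, width, height, material):", best)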
Artificial intelligence has gradually become widely adopted in the modern world. The technology behind AI personal assistants like Siri and Alexa has been in development, initially for military purposes, since 2003.
AI-driven systems can discover patterns and trends, discover inefficiencies, and predict future outcomes based on historical trends, which ultimately enables informed decision-making. As such, they are potentially beneficial for many industries, notably heavy industry.
While the application of artificial intelligence in heavy industry is still in its early stages, applications are likely to include optimization of asset management and operational performance, as well as identifying efficiencies and decreasing downtime.
Potential benefits:
AI-driven machines ensure an easier manufacturing process, along with many other benefits, at each new stage of advancement. Technology creates new potential for task automation while increasing the intelligence of human and machine interaction. Some benefits of AI include directed automation, 24/7 production, safer operational environments, and reduced operating costs.
Directed automation:
AI and robots can execute repetitive actions with a consistency that humans cannot match, and they make it possible to build more capable production models through automation. They also reduce human error and can deliver high levels of quality assurance with little supervision.
24/7 production:
While humans must work in shifts to accommodate sleep and mealtimes, robots can keep a production line running continuously. Businesses can expand their production capabilities and meet higher demands for products from global customers due to boosted production from this round-the-clock work performance.
Safer operational environment:
More AI means fewer human laborers performing dangerous and strenuous work. Logically speaking, with fewer humans and more robots performing activities associated with risk, the number of workplace accidents should dramatically decrease. It also offers a great opportunity for exploration because companies do not have to risk human life.
Reduced operating costs:
With AI taking over day-to-day activities, a business can lower its operating costs considerably. Rather than employing humans to work in shifts, it could invest in AI; once the machinery is purchased and commissioned, the main ongoing cost is maintenance.
Environmental impacts:
Self-driving cars are potentially beneficial to the environment. They can be programmed to navigate the most efficient route and reduce idle time, which could result in less fossil fuel consumption and greenhouse gas (GHG) emissions. The same could be said for heavy machinery used in heavy industry. AI can accurately follow a sequence of procedures repeatedly, whereas humans are prone to occasional errors.
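As a small illustration of what "programmed to navigate the most efficient route" can mean in practice, the sketch below runs Dijkstra's shortest-path algorithm over a toy road graph whose edge costs could stand for fuel use or travel time. The graph, node names, and costs are invented; production routing uses live traffic data and much larger networks.

```python
# Minimal sketch: choosing the most fuel-efficient route with Dijkstra's algorithm.
# The road graph and edge costs (e.g. litres of fuel) are invented for illustration.
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) of the cheapest route from start to goal."""
    queue = [(0.0, start, [start])]   # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

roads = {
    "depot": {"junction_a": 4.0, "junction_b": 2.5},
    "junction_a": {"site": 3.0},
    "junction_b": {"junction_a": 1.0, "site": 6.0},
}

cost, path = dijkstra(roads, "depot", "site")
print("cheapest route:", " -> ".join(path), "(cost", cost, ")")
```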
Additional benefits of AI:
AI and industrial automation have advanced considerably over the years. There has been an evolution of many new techniques and innovations, such as advances in sensors and the increase of computing capabilities.
AI helps machines gather and extract data, identify patterns, adapt to new trends through machine intelligence, learning, and speech recognition. It also helps to make quick data-driven decisions, advance process effectiveness, minimize operational costs, facilitate product development, and enable extensive scalability.
Potential negatives:
High cost:
Though the cost has been decreasing in the past few years, individual development expenditures can still be as high as $300,000 for basic AI. Small businesses with a low capital investment may have difficulty generating the funds necessary to leverage AI.
For larger companies, the price of AI may be higher, depending on how much AI is involved in the process. These higher costs make the feasibility of leveraging AI a challenge for many companies. Nevertheless, the advent of open-source artificial intelligence software can make AI cheaper for companies to adopt.
Reduced employment opportunities:
Job opportunities will grow with the advent of AI; however, some jobs might be lost because AI would replace them. Any job that involves repetitive tasks is at risk of being replaced.
In 2017, Gartner predicted that 500,000 jobs would be created because of AI, but also that up to 900,000 jobs could be lost to it. These figures apply to jobs within the United States only.
AI decision-making:
AI is only as intelligent as the individuals responsible for its initial programming. In 2014, an active shooter situation led people to call Uber to escape the shooting and the surrounding area. Instead of recognizing a dangerous situation, the algorithm Uber used saw a rise in demand and increased its prices. This type of situation can be dangerous in heavy industry, where one mistake can cost lives or cause injury.
Environmental impacts:
Only 20 percent of electronic waste was recycled in 2016, despite 67 nations having enacted e-waste legislation. Electronic waste is expected to reach 52.2 million tons in the year 2021. The manufacture of digital devices and other electronics goes hand-in-hand with AI development which is poised to damage the environment.
In September 2015, the German car company Volkswagen became the subject of an international scandal. Software in its diesel cars activated full emission controls for nitrogen oxide gases (NOx) only while the vehicles were undergoing emissions testing; once the cars were on the road, the controls deactivated and NOx emissions increased up to 40 times.
NOx gases are harmful because they cause significant health problems, including respiratory problems and asthma. Further studies have shown that additional emissions could cause over 1,200 premature deaths in Europe and result in $2.4 million worth of lost productivity.
AI trained to act on environmental variables might have erroneous algorithms, which can lead to potentially negative effects on the environment.
Algorithms trained on biased data will produce biased results. The COMPAS judicial decision support system is one example of biased data producing unfair outcomes. When machines develop learning and decision-making abilities that were not explicitly coded by a programmer, mistakes can be hard to trace. As such, management and scrutiny of AI-based processes are essential.
Effects of AI in the manufacturing industry:
Landing.ai, a startup founded by Andrew Ng, developed machine-vision tools that detect microscopic defects in products at resolutions well beyond human vision. The tools use a machine-learning algorithm trained on relatively small volumes of sample images. The computer not only 'sees' the defects but also processes the information and learns from what it observes.
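The following sketch illustrates the general idea of automated visual defect detection in a deliberately simplified form: it flags parts whose images deviate too far, pixel by pixel, from an average of known-good samples. This is not Landing.ai's method, which relies on learned models; the synthetic images, image size, and threshold here are all invented.

```python
# Deliberately simplified sketch of visual defect detection: compare each part's
# image against the average of known-good samples and flag large deviations.
# Synthetic 32x32 grayscale "images"; the threshold and sizes are invented.
import numpy as np

rng = np.random.default_rng(1)

# 50 known-good reference images: a mostly uniform surface with mild noise.
good_samples = rng.normal(0.5, 0.02, size=(50, 32, 32))
reference = good_samples.mean(axis=0)

def defect_score(image, reference):
    """Mean absolute deviation of an image from the known-good reference."""
    return float(np.abs(image - reference).mean())

def is_defective(image, reference, threshold=0.03):
    return defect_score(image, reference) > threshold

# One clean part and one with a simulated scratch.
clean_part = rng.normal(0.5, 0.02, size=(32, 32))
scratched_part = clean_part.copy()
scratched_part[10:14, 5:25] = 0.0   # dark scratch across the surface

for name, image in [("clean part", clean_part), ("scratched part", scratched_part)]:
    print(name, "defective:", is_defective(image, reference),
          "score:", round(defect_score(image, reference), 4))
```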
In 2014, China, Japan, the United States, the Republic of Korea and Germany together contributed to 70 percent of the total sales volume of robots. In the automotive industry, a sector with a particularly high degree of automation, Japan had the highest density of industrial robots in the world at 1,414 per 10,000 employees.
Generative design is a new process born from artificial intelligence. Designers or engineers specify design goals, along with material parameters, manufacturing methods, and cost constraints, in generative design software. The software explores the space of feasible permutations and generates design alternatives, using machine learning to learn from each iteration which designs work and which fail. The process has been described as effectively renting 50,000 computers in the cloud for an hour.
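The sketch below mimics that generative-design loop in miniature: it randomly samples candidate beam designs, discards those that violate the material and cost constraints, scores the rest against the design goal, and keeps the best alternatives. The beam model, cost formula, and limits are invented for illustration; commercial generative-design software explores far richer design spaces with far more computation.

```python
# Miniature sketch of a generative-design loop: sample candidate designs within
# constraints, score each against the design goal, keep the best alternatives.
# The beam model, costs, and limits below are invented for illustration.
import random

random.seed(0)

MAX_COST = 120.0          # cost constraint (arbitrary units)
MAX_MASS = 50.0           # material constraint (kg)

def random_design():
    """Sample a candidate beam: width, height, wall thickness (cm)."""
    return {
        "width": random.uniform(2, 10),
        "height": random.uniform(2, 10),
        "thickness": random.uniform(0.2, 1.0),
    }

def evaluate(d):
    """Return (feasible, stiffness score) for a candidate design."""
    mass = 7.8 * d["thickness"] * (d["width"] + d["height"])       # toy mass model
    cost = 1.5 * mass + 10 * d["thickness"]                        # toy cost model
    stiffness = d["width"] * d["height"] ** 2 * d["thickness"]     # design goal
    feasible = mass <= MAX_MASS and cost <= MAX_COST
    return feasible, stiffness

# Explore many permutations and keep the top feasible alternatives.
feasible = []
for design in (random_design() for _ in range(5000)):
    ok, score = evaluate(design)
    if ok:
        feasible.append((score, design))
feasible.sort(key=lambda pair: pair[0], reverse=True)

for score, design in feasible[:3]:
    print(round(score, 1), {k: round(v, 2) for k, v in design.items()})
```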
Artificial intelligence has gradually become widely adopted in the modern world. AI personal assistants such as Siri and Alexa trace back to technology originally developed for military purposes beginning in 2003.
Artificial Intelligence (AI) in Healthcare
- YouTube Video: Adoption of AI and Machine Learning in Healthcare (GE)
- YouTube Video: The Advent of AI in Healthcare (Cleveland Clinic)
- YouTube Video: Is this the future of health? | The Economist
Artificial intelligence in healthcare is an overarching term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to mimic human cognition in the analysis, presentation, and comprehension of complex medical and health care data. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data.
What distinguishes AI technology from traditional technologies in health care is the ability to gather data, process it and give a well-defined output to the end-user. AI does this through machine learning algorithms and deep learning. These algorithms can recognize patterns in behavior and create their own logic.
To gain useful insights and predictions, machine learning models must be trained using extensive amounts of input data. AI algorithms behave differently from humans in two ways:
- algorithms are literal: once a goal is set, the algorithm learns exclusively from the input data and can only understand what it has been programmed to do,
- and some deep learning algorithms are black boxes; such algorithms can predict with extreme precision but offer little to no comprehensible explanation for the logic behind their decisions aside from the data and type of algorithm used (a minimal sketch of probing one such black-box model follows this list).
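To illustrate the black-box point, this minimal sketch trains a model on synthetic patient-style data and then applies permutation importance, a generic post-hoc probing technique, to estimate which inputs the model relies on; the model itself never explains its reasoning. The feature names and data are invented and carry no clinical meaning.

```python
# Minimal sketch: a "black box" model trained on synthetic patient-style data,
# probed afterwards with permutation importance to see which inputs matter.
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1500

age = rng.uniform(20, 90, n)
blood_pressure = rng.normal(120, 15, n)
cholesterol = rng.normal(200, 30, n)
noise_feature = rng.normal(0, 1, n)              # irrelevant by construction
X = np.column_stack([age, blood_pressure, cholesterol, noise_feature])

# Invented outcome: risk driven mostly by age and blood pressure.
logit = 0.04 * (age - 55) + 0.03 * (blood_pressure - 120) + rng.normal(0, 0.5, n)
y = (logit > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# The model predicts accurately but offers no explanation by itself;
# permutation importance shuffles each feature and measures the accuracy drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["age", "blood_pressure", "cholesterol", "noise"],
                            result.importances_mean):
    print(f"{name:>15}: {importance:.3f}")
```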
The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes.
AI programs are applied to practices such as:
- diagnosis processes,
- treatment protocol development,
- drug development,
- personalized medicine,
- and patient monitoring and care.
AI algorithms can also be used to analyze large amounts of data through electronic health records for disease prevention and diagnosis. Medical institutions such as the following have developed AI algorithms for their departments:
- The Mayo Clinic,
- Memorial Sloan Kettering Cancer Center,
- and the British National Health Service,
Large technology companies such as IBM and Google have also developed AI algorithms for healthcare. Additionally, hospitals are looking to AI software to support operational initiatives that increase cost saving, improve patient satisfaction, and satisfy their staffing and workforce needs.
Currently, the United States government is investing billions of dollars to progress the development of AI in healthcare. Companies are developing technologies that help healthcare managers improve business operations through increasing utilization, decreasing patient boarding, reducing length of stay and optimizing staffing levels.
As widespread use of AI in healthcare is relatively new, there are several unprecedented ethical concerns related to its practice such as data privacy, automation of jobs, and representation biases.
History:
Research in the 1960s and 1970s produced the first problem-solving program, or expert system, known as Dendral. While it was designed for applications in organic chemistry, it provided the basis for a subsequent system MYCIN, considered one of the most significant early uses of artificial intelligence in medicine. MYCIN and other systems such as INTERNIST-1 and CASNET did not achieve routine use by practitioners, however.
The 1980s and 1990s brought the proliferation of the microcomputer and new levels of network connectivity. During this time, there was a recognition by researchers and developers that AI systems in healthcare must be designed to accommodate the absence of perfect data and build on the expertise of physicians. Approaches involving fuzzy set theory, Bayesian networks, and artificial neural networks, have been applied to intelligent computing systems in healthcare.
Medical and technological advancements occurring over this half-century period that have enabled the growth of healthcare-related applications of AI include:
- Improvements in computing power resulting in faster data collection and data processing.
- Growth of genomic sequencing databases
- Widespread implementation of electronic health record systems
- Improvements in natural language processing and computer vision, enabling machines to replicate human perceptual processes
- Improvements in the precision of robot-assisted surgery
- Improvements in deep learning techniques and data logs in rare diseases
Current research:
Various specialties in medicine have shown an increase in research regarding AI. With the novel coronavirus spreading across the globe, the United States is expected to invest more than $2 billion in AI-related healthcare research over the next five years, more than four times the amount spent in 2019 ($463 million).
Dermatology:
Dermatology is an imaging-abundant specialty, and the development of deep learning has been strongly tied to image processing; there is therefore a natural fit between dermatology and deep learning.
There are three main imaging types in dermatology: contextual images, macro images, and micro images. Deep learning has shown substantial progress in each modality.
- Han et al. showed keratinocytic skin cancer detection from face photographs.
- Esteva et al. demonstrated dermatologist-level classification of skin cancer from lesion images (a rough sketch of this kind of image classifier appears after this list).
- Noyan et al. demonstrated a convolutional neural network that achieved 94% accuracy at identifying skin cells from microscopic Tzanck smear images.
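As a rough sketch of how such image classifiers are typically built, the PyTorch code below adapts a standard convolutional network (ResNet-18) to a two-class lesion task by replacing its final layer and running one training step. It is illustrative only: the class labels are invented, no pretrained weights or real dermatology images are loaded, and published systems such as the one by Esteva et al. were trained on large labeled datasets.

```python
# Sketch: adapting a convolutional network to a two-class skin-lesion task.
# Illustrative only; no real dermatology data or pretrained weights are used.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2                      # e.g. "benign" vs "malignant" (invented labels)

# Standard ResNet-18 backbone with random weights; in practice one would load
# ImageNet-pretrained weights and fine-tune on labeled lesion images.
backbone = models.resnet18()
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# Stand-in batch of 8 RGB images (224x224) with random pixels and labels,
# just to show the shape of one training step.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

backbone.train()
optimizer.zero_grad()
logits = backbone(images)            # shape: (8, NUM_CLASSES)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("one training step done, loss =", float(loss))
```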
Radiology:
AI is being studied within the radiology field to detect and diagnose diseases in patients through computerized tomography (CT) and magnetic resonance (MR) imaging. According to the Radiological Society of North America, the focus on artificial intelligence in radiology has increased rapidly in recent years, with AI-related work growing from essentially none in 2015 to roughly 10 percent of total publications by 2018.
A study at Stanford created an algorithm that could detect pneumonia in patients with a better average F1 score (a statistical metric based on precision and recall) than the radiologists involved in the trial. Through imaging in oncology, AI has been able to serve well in detecting abnormalities and monitoring change over time, two key factors in oncological health.
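For readers unfamiliar with the F1 score mentioned above, the short example below computes it by hand from a set of made-up predictions: F1 is the harmonic mean of precision and recall, so it rewards models that both flag most true cases and avoid false alarms.

```python
# Computing the F1 score (harmonic mean of precision and recall) by hand.
# The example labels are invented: 1 = pneumonia present, 0 = absent.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)        # of the cases flagged, how many were right
recall = tp / (tp + fn)           # of the true cases, how many were caught
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```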
Many companies and vendor neutral systems such as icometrix, QUIBIM, Robovision, and UMC Utrecht’s IMAGRT have become available to provide a trainable machine learning platform to detect a wide range of diseases.
The Radiological Society of North America has implemented presentations on AI in imaging during its annual conference. Many professionals are optimistic about the future of AI processing in radiology, as it will cut down on needed interaction time and allow doctors to see more patients.
Although not always as good as a trained eye at distinguishing malignant from benign growths, the history of medical imaging shows a trend toward rapid advancement in both the capability and reliability of new systems. The emergence of AI technology in radiology is nevertheless perceived as a threat by some specialists, because in isolated cases it can improve on certain statistical metrics where specialists cannot.
Screening:
Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.
In 2018, a paper published in the journal Annals of Oncology mentioned that skin cancer could be detected more accurately by an artificial intelligence system (which used a deep learning convolutional neural network) than by dermatologists. On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the CNN machine.
In January 2020, researchers demonstrated an AI system, based on a Google DeepMind algorithm, capable of surpassing human experts in breast cancer detection.
In July 2020, it was reported that an AI algorithm developed at the University of Pittsburgh had achieved the highest accuracy to date in identifying prostate cancer, with 98% sensitivity and 97% specificity.
Psychiatry:
In psychiatry, AI applications are still in a phase of proof-of-concept. Areas where the evidence is widening quickly include chatbots, conversational agents that imitate human behavior and which have been studied for anxiety and depression.
Challenges include the fact that many applications in the field are developed and proposed by private corporations, such as the screening for suicidal ideation implemented by Facebook in 2017. Such applications outside the healthcare system raise various professional, ethical and regulatory questions.
Primary care:
Primary care has become one key development area for AI technologies. AI in primary care has been used to support decision making, predictive modelling, and business analytics. Despite the rapid advances in AI technologies, general practitioners' view of the role of AI in primary care remains very limited, focused mainly on administrative and routine documentation tasks.
Disease diagnosis:
An article by Jiang et al. (2017) demonstrated that several types of AI techniques have been used for a variety of diseases, including support vector machines, neural networks, and decision trees. Each of these techniques is described as having a “training goal” so that “classifications agree with the outcomes as much as possible”.
Two techniques commonly used in disease classification are artificial neural networks (ANN) and Bayesian networks (BN); of the two, ANNs were found to classify diabetes and cardiovascular disease (CVD) more accurately.
Through the use of medical learning classifiers (MLCs), artificial intelligence has been able to substantially aid doctors in patient diagnosis by analyzing large volumes of electronic health records (EHRs).
Medical conditions have grown more complex, and with a vast store of electronic medical records accumulating, the likelihood that similar cases have already been documented is high. Yet although someone with a rare illness today is unlikely to be the only person ever to have suffered from a given disease, the inability to access similar, symptomatically related cases remains a major roadblock for physicians.
Implementing AI not only to find similar cases and treatments, but also to factor in chief symptoms and help physicians ask the most appropriate questions, helps patients receive the most accurate diagnosis and treatment possible.
Telemedicine:
The growth of telemedicine, the remote treatment of patients, has opened up a range of possible AI applications. AI can assist in caring for patients remotely by monitoring their information through sensors.
A wearable device may allow for constant monitoring of a patient and the ability to notice changes that may be less distinguishable by humans. The information can be compared to other data that has already been collected using artificial intelligence algorithms that alert physicians if there are any issues to be aware of.
Another application of artificial intelligence is chat-bot therapy. However, some researchers charge that reliance on chat-bots for mental healthcare does not offer the reciprocity and accountability of care that should exist in the relationship between the consumer of mental healthcare and the care provider (be it a chat-bot or a psychologist).
Since the average age has risen due to a longer life expectancy, artificial intelligence could be useful in helping take care of older populations. Tools such as environment and personal sensors can identify a person’s regular activities and alert a caretaker if a behavior or a measured vital is abnormal.
Although the technology is useful, there are also discussions about limitations of monitoring in order to respect a person’s privacy since there are technologies that are designed to map out home layouts and detect human interactions.
Electronic health records:
Electronic health records (EHR) are crucial to the digitalization and information spread of the healthcare industry. Now that around 80% of medical practices use EHR, the next step is to use artificial intelligence to interpret the records and provide new information to physicians.
One application uses natural language processing (NLP) to produce more succinct reports that limit the variation between medical terms by matching similar terms. For example, the terms heart attack and myocardial infarction mean the same thing, but physicians may use one over the other based on personal preference.
NLP algorithms consolidate these differences so that larger datasets can be analyzed. Another use of NLP identifies phrases that are redundant due to repetition in a physician’s notes and keeps the relevant information to make it easier to read.
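A minimal sketch of the two NLP uses just described appears below: it maps synonymous medical terms onto one canonical form and drops sentences that repeat earlier ones in a note. The synonym table and the sample note are invented; real systems rely on much larger clinical vocabularies and learned models rather than a hand-written dictionary.

```python
# Minimal sketch: normalize synonymous medical terms and drop repeated sentences.
# The synonym table and the sample note are invented for illustration.
SYNONYMS = {
    "heart attack": "myocardial infarction",
    "high blood pressure": "hypertension",
}

def normalize_terms(text: str) -> str:
    """Replace each known synonym with its canonical term."""
    lowered = text.lower()
    for phrase, canonical in SYNONYMS.items():
        lowered = lowered.replace(phrase, canonical)
    return lowered

def drop_repeated_sentences(text: str) -> str:
    """Keep only the first occurrence of each sentence."""
    seen, kept = set(), []
    for sentence in (s.strip() for s in text.split(".") if s.strip()):
        if sentence not in seen:
            seen.add(sentence)
            kept.append(sentence)
    return ". ".join(kept) + "."

note = ("Patient reports a prior heart attack. Patient has high blood pressure. "
        "Patient reports a prior heart attack.")

print(drop_repeated_sentences(normalize_terms(note)))
# -> patient reports a prior myocardial infarction. patient has hypertension.
```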
Beyond making content edits to an EHR, there are AI algorithms that evaluate an individual patient’s record and predict a risk for a disease based on their previous information and family history. One general algorithm is a rule-based system that makes decisions similarly to how humans use flow charts.
This system takes in large amounts of data and creates a set of rules that connect specific observations to concluded diagnoses. Thus, the algorithm can take in a new patient's data and try to predict the likelihood that they will have a certain condition or disease. Since the algorithms can evaluate a patient's information against collective data, they can flag any outstanding issues for a physician's attention and save time.
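The sketch below conveys the flavor of that rule-based, flow-chart style: a handful of hand-written rules map observations from a record to a crude risk category. The rules, thresholds, and weights are invented and have no clinical validity.

```python
# Sketch of a rule-based risk estimator in the flow-chart style described above.
# Rules, thresholds, and weights are invented and have no clinical validity.
def assess_risk(patient: dict) -> str:
    """Map simple observations from a record to a crude risk category."""
    score = 0
    if patient.get("age", 0) > 60:
        score += 2                      # rule: advanced age raises risk
    if patient.get("smoker", False):
        score += 2                      # rule: smoking raises risk
    if patient.get("systolic_bp", 0) > 140:
        score += 1                      # rule: elevated blood pressure
    if patient.get("family_history", False):
        score += 1                      # rule: relevant family history
    if score >= 4:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

record = {"age": 67, "smoker": False, "systolic_bp": 150, "family_history": True}
print("estimated risk:", assess_risk(record))   # -> estimated risk: high
```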
One study conducted by the Centerstone Research Institute found that predictive modeling of EHR data has achieved 70–72% accuracy in predicting individualized treatment response. These methods are helpful because the amount of online health records doubles every five years.
Physicians do not have the bandwidth to process all this data manually, and AI can leverage this data to assist physicians in treating their patients.
Drug Interactions:
Improvements in natural language processing led to the development of algorithms to identify drug-drug interactions in medical literature. Drug-drug interactions pose a threat to those taking multiple medications simultaneously, and the danger increases with the number of medications being taken.
To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature.
Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms. Competitors were tested on their ability to accurately determine, from the text, which drugs were shown to interact and what the characteristics of their interactions were.
Researchers continue to use this corpus to standardize the measurement of the effectiveness of their algorithms.
Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and/or adverse event reports. Organizations such as the FDA Adverse Event Reporting System (FAERS) and the World Health Organization's VigiBase allow doctors to submit reports of possible negative reactions to medications. Deep learning algorithms have been developed to parse these reports and detect patterns that imply drug-drug interactions.
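As a simplified stand-in for the deep learning systems described above, the sketch below merely counts how often pairs of drugs co-occur in adverse event reports; unusually frequent pairs could then be reviewed as possible interaction signals. The reports are invented, and real pharmacovigilance methods use far more sophisticated statistics and text processing.

```python
# Simplified sketch: count drug pairs that co-occur in adverse event reports.
# The reports are invented; real pharmacovigilance uses far richer methods.
from collections import Counter
from itertools import combinations

reports = [
    {"warfarin", "aspirin"},
    {"warfarin", "aspirin", "metformin"},
    {"metformin", "lisinopril"},
    {"warfarin", "aspirin"},
    {"lisinopril", "aspirin"},
]

pair_counts = Counter()
for drugs in reports:
    for pair in combinations(sorted(drugs), 2):   # every drug pair in the report
        pair_counts[pair] += 1

# Pairs reported together most often are candidates for closer review.
for pair, count in pair_counts.most_common(3):
    print(f"{pair[0]} + {pair[1]}: {count} reports")
```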
Creation of new drugs:
DSP-1181, a drug molecule for the treatment of obsessive-compulsive disorder (OCD), was designed with the help of artificial intelligence through the joint efforts of Exscientia (a British start-up) and Sumitomo Dainippon Pharma (a Japanese pharmaceutical firm). The drug development took a single year, while pharmaceutical companies usually spend about five years on similar projects. DSP-1181 was accepted for a human trial.
In September 2019, Insilico Medicine reported the creation, via artificial intelligence, of six novel inhibitors of the DDR1 gene, a kinase target implicated in fibrosis and other diseases. The system, known as Generative Tensorial Reinforcement Learning (GENTRL), designed the new compounds in 21 days, with a lead candidate tested and showing positive results in mice.
The same month, the Canadian company Deep Genomics announced that its AI-based drug discovery platform had identified a target and drug candidate for Wilson's disease. The candidate, DG12P1, is designed to correct the exon-skipping effect of Met645Arg, a genetic mutation affecting the ATP7B copper-binding protein.
Industry:
The trend of large health companies merging allows for greater health data accessibility. Greater health data lays the groundwork for implementation of AI algorithms.
A large part of the industry's focus on implementing AI in the healthcare sector is on clinical decision support systems. As more data is collected, machine learning algorithms adapt and allow for more robust responses and solutions.
Numerous companies are exploring the possibilities of the incorporation of big data in the healthcare industry. Many companies investigate the market opportunities through the realms of “data assessment, storage, management, and analysis technologies” which are all crucial parts of the healthcare industry.
The following are examples of large companies that have contributed to AI algorithms for use in healthcare:
- IBM's Watson Oncology is in development at Memorial Sloan Kettering Cancer Center and Cleveland Clinic. IBM is also working with CVS Health on AI applications in chronic disease treatment and with Johnson & Johnson on analysis of scientific papers to find new connections for drug development. In May 2017, IBM and Rensselaer Polytechnic Institute began a joint project entitled Health Empowerment by Analytics, Learning and Semantics (HEALS), to explore using AI technology to enhance healthcare.
- Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients. Other projects include medical image analysis of tumor progression and the development of programmable cells.
- Google's DeepMind platform is being used by the UK National Health Service to detect certain health risks through data collected via a mobile app. A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.
- Tencent is working on several medical systems and services. These include AI Medical Innovation System (AIMIS), an AI-powered diagnostic medical imaging service; WeChat Intelligent Healthcare; and Tencent Doctorwork.
- Intel's venture capital arm Intel Capital recently invested in startup Lumiata which uses AI to identify at-risk patients and develop care options.
- Kheiron Medical developed deep learning software to detect breast cancers in mammograms.
- Fractal Analytics has incubated Qure.ai which focuses on using deep learning and AI to improve radiology and speed up the analysis of diagnostic x-rays.
- Neuralink has come up with a next-generation neuroprosthetic that intricately interfaces with thousands of neural pathways in the brain. Their process allows a chip, roughly the size of a quarter, to be inserted in place of a chunk of skull by a precision surgical robot to avoid accidental injury.
Digital consultant apps like Babylon Health's GP at Hand, Ada Health, AliHealth Doctor You, KareXpert and Your.MD use AI to give medical consultation based on personal medical history and common medical knowledge.
Users report their symptoms into the app, which uses speech recognition to compare against a database of illnesses.
Babylon then offers a recommended action, taking into account the user's medical history.
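A toy sketch of the symptom-matching step such apps perform is shown below: reported symptoms are compared against a small condition database using a simple overlap score. The conditions, symptom lists, and scoring are invented and bear no relation to Babylon's or any other vendor's actual system.

```python
# Toy sketch of symptom matching against a small condition database.
# Conditions, symptom lists, and scoring are invented for illustration.
CONDITIONS = {
    "common cold": {"cough", "sore throat", "runny nose", "sneezing"},
    "influenza": {"fever", "cough", "fatigue", "muscle aches"},
    "allergic rhinitis": {"sneezing", "runny nose", "itchy eyes"},
}

def rank_conditions(reported: set) -> list:
    """Rank conditions by Jaccard overlap with the reported symptoms."""
    scores = []
    for name, symptoms in CONDITIONS.items():
        overlap = len(reported & symptoms) / len(reported | symptoms)
        scores.append((overlap, name))
    return sorted(scores, reverse=True)

reported = {"cough", "fever", "fatigue"}
for score, name in rank_conditions(reported):
    print(f"{name}: {score:.2f}")
```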
Entrepreneurs in healthcare have been effectively using seven business model archetypes to take AI solutions to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and value capturing mechanisms (e.g. providing information or connecting stakeholders).
IFlytek launched a service robot “Xiao Man”, which integrated artificial intelligence technology to identify the registered customer and provide personalized recommendations in medical areas. It also works in the field of medical imaging. Similar robots are also being made by companies such as UBTECH ("Cruzr") and Softbank Robotics ("Pepper").
The Indian startup Haptik recently developed a WhatsApp chatbot which answers questions associated with the deadly coronavirus in India.
With the market for AI expanding constantly, large tech companies such as Apple, Google, Amazon, and Baidu all have their own AI research divisions, as well as millions of dollars allocated for acquiring smaller AI-based companies. Many automobile manufacturers are beginning to use machine learning-based health monitoring in their cars as well.
Companies such as BMW, GE, Tesla, Toyota, and Volvo all have new research campaigns to find ways of learning a driver's vital statistics to ensure they are awake, paying attention to the road, and not under the influence of substances or in emotional distress.
Implications:
The use of AI is predicted to decrease medical costs as there will be more accuracy in diagnosis and better predictions in the treatment plan as well as more prevention of disease.
Other future uses for AI include brain-computer interfaces (BCIs), which are predicted to help people who have trouble moving or speaking or who have a spinal cord injury. BCIs are expected to use AI to help these patients move and communicate by decoding neural activity.
Artificial intelligence has led to significant improvements in areas of healthcare such as medical imaging, automated clinical decision-making, diagnosis, prognosis, and more.
Although AI possesses the capability to revolutionize several fields of medicine, it still has limitations and cannot replace a bedside physician.
Healthcare is a complicated science that is bound by legal, ethical, regulatory, economic, and social constraints. In order to fully implement AI within healthcare, there must be "parallel changes in the global environment, with numerous stakeholders, including citizen and society."
Expanding care to developing nations:
Artificial intelligence continues to expand in its abilities to diagnose more people accurately in nations where fewer doctors are accessible to the public. Many new technology companies such as SpaceX and the Raspberry Pi Foundation have enabled more developing countries to have access to computers and the internet than ever before.
With the increasing capabilities of AI over the internet, advanced machine learning algorithms can allow patients to get accurately diagnosed when they would previously have no way of knowing if they had a life threatening disease or not.
Using AI in developing nations that do not have the resources will diminish the need for outsourcing and can improve patient care. AI can allow not only for the diagnosis of patients in areas where healthcare is scarce, but also for a good patient experience by drawing on existing case files to find the best treatment for a patient.
The ability of AI to adjust course as it goes also allows the patient to have their treatment modified based on what works for them; a level of individualized care that is nearly non-existent in developing countries.
Regulation:
While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before its broader adoption, its use may nonetheless introduce several new types of risk to patients and healthcare providers, such as algorithmic bias, do-not-resuscitate implications, and other machine morality issues. These challenges of the clinical use of AI have prompted a potential need for regulation.
Currently, there are regulations pertaining to the collection of patient data. These include policies such as the Health Insurance Portability and Accountability Act (HIPAA) and the European General Data Protection Regulation (GDPR).
The GDPR pertains to patients within the EU and details the consent requirements for patient data use when entities collect patient healthcare data. Similarly, HIPAA protects healthcare data from patient records in the United States.
In May 2016, the White House announced its plan to host a series of workshops and formation of the National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence.
In October 2016, the group published The National Artificial Intelligence Research and Development Strategic Plan, outlining its proposed priorities for Federally-funded AI research and development (within government and academia). The report notes a strategic R&D plan for the subfield of health information technology is in development stages.
The only agency that has expressed concern is the FDA. Bakul Patel, the Associate Center Director for Digital Health of the FDA, is quoted saying in May 2017:
“We're trying to get people who have hands-on development experience with a product's full life cycle. We already have some scientists who know artificial intelligence and machine learning, but we want complementary people who can look forward and see how this technology will evolve.”
The joint ITU-WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) has built a platform for the testing and benchmarking of AI applications in the health domain. As of November 2018, eight use cases were being benchmarked, including assessing breast cancer risk from histopathological imagery, guiding anti-venom selection from snake images, and diagnosing skin lesions.
Ethical concerns:
Data collection:
In order to effectively train machine learning models and use AI in healthcare, massive amounts of data must be gathered. Acquiring this data, however, comes at the cost of patient privacy in most cases and is not well received publicly. For example, a survey conducted in the UK estimated that 63% of the population is uncomfortable with sharing their personal data in order to improve artificial intelligence technology.
The scarcity of real, accessible patient data is a hindrance that deters the progress of developing and deploying more artificial intelligence in healthcare.
Automation:
According to one widely cited study, AI could replace up to 35% of jobs in the UK within the next 10 to 20 years. So far, however, AI has not been found to have eliminated any healthcare jobs. If healthcare-related jobs were to be automated, those dealing with digital information, radiology, and pathology would be the most susceptible, as opposed to those centered on doctor-patient interaction.
Automation can also provide benefits alongside doctors. Doctors who take advantage of AI in healthcare are expected to provide greater quality care than doctors and medical establishments who do not. AI will likely not completely replace healthcare workers, but rather give them more time to attend to their patients, and it may help avert healthcare worker burnout and cognitive overload.
AI will ultimately help advance societal goals, including better communication, improved quality of healthcare, and autonomy.
Bias:
Since AI makes decisions based solely on the data it receives as input, it is important that this data accurately represents patient demographics. In a hospital setting, patients do not have full knowledge of how predictive algorithms are created or calibrated. Medical establishments could therefore unfairly code their algorithms to discriminate against minorities and prioritize profits over optimal care.
There can also be unintended bias in these algorithms that exacerbates social and healthcare inequities. Because white males are overrepresented in medical data sets, having minimal patient data on minorities can lead to AI making more accurate predictions for majority populations, and thus to unintentionally worse medical outcomes for minority populations.
Collecting data from minority communities can also lead to medical discrimination. For instance, HIV is a prevalent virus among minority communities, and HIV status can be used to discriminate against patients. However, these biases are able to be eliminated through careful implementation and a methodical collection of representative data.
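To make the point above concrete, here is a minimal, hedged sketch in Python (using scikit-learn and entirely synthetic data) of how an under-represented group can end up with worse model performance. The group sizes, features, and outcome rules are all illustrative assumptions and stand in for no real patient population.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip_sign):
    """Synthetic 'patients': two features, with an outcome rule that differs by group."""
    X = rng.normal(size=(n, 2))
    score = X[:, 0] + (-X[:, 1] if flip_sign else X[:, 1])
    y = (score > 0).astype(int)
    return X, y

X_major, y_major = make_group(2000, flip_sign=False)  # well-represented group
X_minor, y_minor = make_group(100, flip_sign=True)    # under-represented group

# A single model trained on the pooled data mostly fits the majority pattern.
X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])
model = LogisticRegression().fit(X, y)

print("accuracy, majority group:", round(model.score(X_major, y_major), 2))
print("accuracy, minority group:", round(model.score(X_minor, y_minor), 2))
```

In a typical run of this sketch, the pooled model scores well above 90% on the majority group but close to chance on the minority group, mirroring the imbalance described above.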
See also:
- Artificial intelligence
- Glossary of artificial intelligence
- Full body scanner (e.g., Dermascanner, ...)
- BlueDot
- Clinical decision support system
- Computer-aided diagnosis
- Computer-aided simple triage
- Google DeepMind
- IBM Watson Health
- Medical image computing
- Michal Rosen-Zvi
- Speech recognition software in healthcare
- The MICCAI Society
Regulation of artificial intelligence
- YouTube Video: How Should Artificial Intelligence Be Regulated?
- YouTube Video: Artificial intelligence and algorithms: pros and cons | DW Documentary (AI documentary)
- YouTube Video: the European Union seeks to protect human rights by regulating AI
[Your WebHost: I have included the following invitation to the February 5, 2020, symposium hosted by the U.S. Copyright Office and WIPO, since its agenda covers the major applications of AI to which this topic on regulating AI development is relevant]:
On February 5, 2020, the Copyright Office and the World Intellectual Property Organization (WIPO) held a symposium that took an in-depth look at how the creative community currently is using artificial intelligence (AI) to create original works.
Panelists’ discussions included the relationship between AI and copyright; what level of human input is sufficient for the resulting work to be eligible for copyright protection; the challenges and considerations for using copyright-protected works to train a machine or to examine large data sets; and the future of AI and copyright policy.
The Relationship between AI and Copyright (9:50 – 10:20 am): This discussion will involve an introductory look at what AI is and why copyright is implicated. Explaining these issues is an expert in AI technology, who will discuss the technological issues, and the U.S. Copyright Office’s Director of Registration Policy and Practice, who will explain the copyright legal foundation for AI issues.
Speakers:
- Ahmed Elgammal (Professor at the Department of Computer Science, Rutgers University, and Director of The Art & Artificial Intelligence Lab)
- Rob Kasunic (Associate Register of Copyrights and Director of Registration Policy and Practice, U.S. Copyright Office)
AI and the Administration of International Copyright Systems (10:20 – 11:00 am)
Countries throughout the world are looking at AI and how different laws should handle questions such as copyrightability and using AI to help administer copyright systems. This panel will discuss the international copyright dimensions of the rise of AI.
Moderator: Maria Strong (Acting Register of Copyrights and Director, U.S. Copyright Office)
Speakers:
- Ros Lynch (Director, Copyright & IP Enforcement, U.K. Intellectual Property Office (UKIPO))
- Ulrike Till (Division of Artificial Intelligence Policy, WIPO)
- Michele Woods (Director, Copyright Law Division, WIPO)
Break (11:00 – 11:10 am)
AI and the Visual Arts (11:10 – 11:55 am) Creators are already experimenting with AI to create new visual works, including paintings and more.
Moderator: John Ashley (Chief, Visual Arts Division, U.S. Copyright Office)
Speakers:
- Sandra Aistars (Clinical Professor and Senior Scholar and Director of Copyright Research and Policy of CPIP, Antonin Scalia Law School, George Mason University)
- Ahmed Elgammal (Professor at the Department of Computer Science, Rutgers University, and Director of The Art & Artificial Intelligence Lab)
- Andres Guadamuz (Senior Lecturer in Intellectual Property Law, University of Sussex and Editor in Chief of the Journal of World Intellectual Property)
AI and Creating a World of Other Works: (11:55 am – 12:40 pm) Creators are using AI to develop a wide variety of works beyond music and visual works. AI also is implicated in the creation and distribution of works such as video games, books, news articles, and more.
Moderator: Katie Alvarez (Counsel for Policy and International Affairs, U.S. Copyright Office)
Speakers:
- Jason Boog (West Coast correspondent for Publishers Weekly)
- Kayla Page (Senior Counsel, Epic Games)
- Mary Rasenberger (Executive Director, the Authors Guild and Authors Guild Foundation)
- Meredith Rose (Policy Counsel, Public Knowledge)
AI and Creating Music (1:40 – 2:40 pm) Music is a dynamic field and authors use AI in interesting ways to develop new works and explore new market possibilities.
Moderator: Regan Smith (General Counsel and Associate Register of Copyrights, U.S. Copyright Office)
Speakers:
- Joel Douek (Cofounder of EccoVR, West Coast creative director and chief scientist for Man Made Music, and board member of the Society of Composers & Lyricists)
- E. Michael Harrington (Composer, Musician, Consultant, and Professor in Music Copyright and Intellectual Property Matters at Berklee Online)
- David Hughes (Chief Technology Officer, Recording Industry Association of America (RIAA))
- Alex Mitchell (Founder and CEO, Boomy)
Bias and Artificial Intelligence: Works created by AI depend on what creators choose to include as source material. As a result of the selection process and building algorithms, AI can often reflect intentional and unintentional bias. Acknowledging this issue and learning how it happens can help make AI-created works more representative of our culture.
Moderator: Whitney Levandusky (Attorney-Advisor, Office of Public Information and Education, U.S. Copyright Office)
Speakers:
- Amanda Levendowski (Associate Professor of Law and founding Director of the Intellectual Property and Information Policy (iPIP) Clinic, Georgetown Law)
- Miriam Vogel (Executive Director, EqualAI)
AI and the Consumer Marketplace (3:20 – 4:05 pm): Companies have recognized that AI can itself be a product. In recent years, there has been a wave of development in this sector, including products like driverless cars. Find out how many AI-centered products are already out there, what is on the horizon, and how copyright is involved.
Moderator: Mark Gray (Attorney-Advisor, Office of the General Counsel, U.S. Copyright Office)
Speakers:
- Julie Babayan (Senior Manager, Government Relations and Public Policy, Adobe)
- Vanessa Bailey (Global Director of Intellectual Property Policy, Intel Corporation)
- Melody Drummond Hansen (Partner and Chair, Automated & Connected Vehicles, O’Melveny & Myers LLP)
Digital Avatars in Audiovisual Works (4:05 – 4:50 pm): How is the motion picture industry using AI, and how does that impact performers? This session will review how AI is being used, including advantages and challenges.
Moderator: Catherine Zaller Rowland (Associate Register of Copyrights and Director of Public Information and Education, U.S. Copyright Office)
Speakers:
- Sarah Howes (Director and Counsel, Government Affairs and Public Policy, SAG-AFTRA)
- Ian Slotin (SVP, Intellectual Property, NBCUniversal)
Regulation of Artificial Intelligence (Wikipedia)
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms.
The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks. Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.
Perspectives:
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI. Regulation is considered necessary to both encourage AI and manage associated risks.
Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems, although regulation of artificial superintelligences is also considered.
The basic approach to regulation focuses on the risks and biases of AI's underlying technology, i.e., machine-learning algorithms, at the level of the input data, algorithm testing, and the decision model, as well as whether explanations of biases in the code can be understandable for prospective recipients of the technology, and technically feasible for producers to convey.
AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.
A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human to machine interaction.
The development of public sector strategies for management and regulation of AI is deemed necessary at the local, national, and international levels and in a variety of fields, from public service management and accountability to law enforcement, healthcare (especially the concept of a Human Guarantee), the financial sector, robotics, autonomous vehicles, the military and national security, and international law.
In 2017 Elon Musk called for regulation of AI development. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization."
In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that AI is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest to rather develop common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.
As a response to the AI control problem:
Main article: AI control problem
Regulation of AI can be seen as a positive social means to manage the AI control problem, i.e., the need to ensure long-term beneficial AI, with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanist approaches such as brain-computer interfaces being seen as potentially complementary.
Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporation to international levels, and on encouraging research into safe AI, together with the possibility of differential intellectual progress (prioritizing risk-reducing strategies over risk-taking strategies in AI development) or conducting international mass surveillance to perform AGI arms control.
For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as addressing other major threats to human well-being, such as subversion of the global financial system, until a superintelligence can be safely created.
It entails the creation of a smarter-than-human, but not superintelligent, artificial general intelligence system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Regulation of conscious, ethically aware AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. Regulation of AI has been seen as restrictive, with a risk of preventing the development of AGI.
Global guidance:
The development of a global governance board to regulate AI development was suggested at least as early as 2017. In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development.
In 2019 the Panel was renamed the Global Partnership on AI, but it is yet to be endorsed by the United States.
The OECD Recommendations on AI were adopted in May 2019, and the G20 AI Principles in June 2019. In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'. In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.
At the United Nations, several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics. At the 40th session of UNESCO's General Conference in November 2019, the organization commenced a two-year process to achieve a "global standard-setting instrument on ethics of artificial intelligence".
In pursuit of this goal, UNESCO forums and conferences on AI have taken place to gather stakeholder views. The most recent draft text of a recommendation on the ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and includes a call for legislative gaps to be filled.
Regional and national regulation:
The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and Russia. Since early 2016, many national, regional and international authorities have begun adopting strategies, actions plans and policy papers on AI.
These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.
China:
Further information: Artificial intelligence industry in China
The regulation of AI in China is mainly governed by the State Council of the PRC's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Communist Party of China and the State Council of the People's Republic of China urged the governing bodies of China to promote the development of AI.
Regulation of the issues of ethical and legal support for the development of AI is nascent, but policy ensures state control of Chinese companies and over valuable data, including storage of data on Chinese users within the country and the mandatory use of People's Republic of China's national standards for AI, including over big data, cloud computing, and industrial software.
Council of Europe:
The Council of Europe (CoE) is an international organization which promotes human rights, democracy, and the rule of law, and comprises 47 member states, including all 29 signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence.
The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe’s aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions".
The large number of relevant documents identified by the CoE include guidelines, charters, papers, reports and strategies. The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states.
European Union:
Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent. The European Union is guided by a European Strategy on Artificial Intelligence, supported by a High-Level Expert Group on Artificial Intelligence.
In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI), following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.
The EU Commission’s High Level Expert Group on Artificial Intelligence carries out work on Trustworthy AI, and the Commission has issued reports on the Safety and Liability Aspects of AI and on the Ethics of Automated Vehicles. In 2020 the EU Commission sought views on a proposal for AI specific legislation, and that process is ongoing.
On February 2, 2020, the European Commission published its White Paper on Artificial Intelligence - A European approach to excellence and trust. The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'.
The latter outlines the EU's approach for a regulatory framework for AI. In its proposed approach, the Commission differentiates between 'high-risk' and 'non-high-risk' AI applications. Only the former should be in the scope of a future EU regulatory framework.
Whether this would be the case could in principle be determined by two cumulative criteria, concerning critical sectors and critical use. The following key requirements are considered for high-risk AI applications: requirements for training data; data and record-keeping; informational duties; requirements for robustness and accuracy; human oversight; and specific requirements for specific AI applications, such as those used for purposes of remote biometric identification.
AI applications that do not qualify as ‘high-risk’ could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI in the form of a framework for cooperation of national competent authorities could facilitate the implementation of the regulatory framework.
United Kingdom:
The UK supported the application and development of AI in business via the Digital Economy Strategy 2015-2018, introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy. In the public sector, guidance on data ethics has been provided by the Department for Digital, Culture, Media and Sport, and guidance on the responsible design and implementation of AI systems by the Alan Turing Institute.
In terms of cyber security, the National Cyber Security Centre has issued guidance on ‘Intelligent Security Tools’.
United States:
Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts.
As early as 2016, the Obama administration had begun to focus on the risks and regulations for artificial intelligence. In a report titled Preparing For the Future of Artificial Intelligence, the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI technologies with few restrictions.
It is stated within the report that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk....". These risks would be the principal reason to create any form of regulation, granted that any existing regulation would not apply to AI technology.
The first main report was the National Strategic Research and Development Plan for Artificial Intelligence. On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."
Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence. The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States.
On January 7, 2019, following an Executive Order on Maintaining American Leadership in Artificial Intelligence, the White House’s Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI.
In response, the National Institute of Standards and Technology has released a position paper, the National Security Commission on Artificial Intelligence has published an interim report, and the Defense Innovation Board has issued recommendations on the ethical use of AI. A year later, the administration called for comments on further deregulation in another draft of its Guidance for Regulation of Artificial Intelligence Applications.
Other specific agencies working on the regulation of AI include the Food and Drug Administration, which has created pathways to regulate the incorporation of AI in medical imaging.
Regulation of fully autonomous weapons:
Main article: Lethal autonomous weapon
Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons.
Notably, informal meetings of experts took place in 2014, 2015 and 2016 and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS affirmed by the GGE on LAWS were adopted in 2018.
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation.
The possibility of a moratorium or preemptive ban of the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated for by the Campaign to Stop Killer Robots - a coalition of non-governmental organizations.
See also:
- Artificial intelligence arms race
- Artificial intelligence control problem
- Artificial intelligence in government
- Ethics of artificial intelligence
- Government by algorithm
- Regulation of algorithms
OpenAI and its Product ChatGPT, a Chatbot
including article "How ChatGPT Actually Works" by AssemblyAI
- YouTube Video: He loves artificial intelligence. Hear why he is issuing a warning about ChatGPT
- YouTube Video: What is ChatGPT and How You Can Use It
- YouTube Video: ChatGPT and AI will disrupt these industries
Continuing from the above illustration: "OpenAI invites everyone to test ChatGPT, a new AI-powered chatbot—with amusing results" (courtesy of arsTechnica):
"ChatGPT aims to produce accurate and harmless talk—but it's a work in progress.: BENJ EDWARDS - 12/1/2022, 3:22 PM
On Wednesday, OpenAI announced ChatGPT, a dialogue-based AI chat interface for its GPT-3 family of large language models. It's currently free to use with an OpenAI account during a testing phase. Unlike the GPT-3 model found in OpenAI's Playground and API, ChatGPT provides a user-friendly conversational interface and is designed to strongly limit potentially harmful output.
"The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests," writes OpenAI on its announcement blog page.
So far, people have been putting ChatGPT through its paces, finding a wide variety of potential uses while also exploring its vulnerabilities. It can:
Click here for complete article.
___________________________________________________________________________
ChatGPT:
Chat Generative Pre-Trained Transformer, commonly called ChatGPT, is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3 family of large language models, and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques.
ChatGPT was launched as a prototype on November 30, 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. Its uneven factual accuracy was identified as a significant drawback. Following the release of ChatGPT, OpenAI was valued at $29 billion.
Training:
ChatGPT was fine-tuned on top of GPT-3.5 using supervised learning as well as reinforcement learning. Both approaches used human trainers to improve the model's performance. In the case of supervised learning, the model was provided with conversations in which the trainers played both sides: the user and the AI assistant.
In the reinforcement step, human trainers first ranked responses that the model had created in a previous conversation. These rankings were used to create 'reward models' that the model was further fine-tuned on using several iterations of Proximal Policy Optimization (PPO).
Proximal Policy Optimization algorithms offer a cost-effective alternative to trust region policy optimization algorithms, avoiding many of their computationally expensive operations while running faster. The models were trained in collaboration with Microsoft on its Azure supercomputing infrastructure.
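As a rough illustration of the reward-modeling step described above, the sketch below (Python with NumPy) fits a linear reward model to pairwise rankings using the standard pairwise logistic (Bradley-Terry) loss, and includes PPO's clipped surrogate objective as a standalone function. All data is synthetic, the feature vectors are an invented stand-in for model responses, and this is a toy sketch of the general technique rather than OpenAI's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8                        # size of the (invented) response feature vectors
true_w = rng.normal(size=dim)  # hidden preference direction generating the rankings

def features(n):
    """Stand-in for featurized model responses; purely synthetic."""
    return rng.normal(size=(n, dim))

# Synthetic pairwise rankings: the 'trainer' prefers whichever response
# scores higher under the hidden preference direction (plus a little noise).
n_pairs = 2000
a, b = features(n_pairs), features(n_pairs)
prefer_a = (a @ true_w + 0.1 * rng.normal(size=n_pairs)) > (b @ true_w)
chosen = np.where(prefer_a[:, None], a, b)
rejected = np.where(prefer_a[:, None], b, a)

# Reward model r(x) = w . x, trained with the pairwise logistic loss
# -log sigmoid(r(chosen) - r(rejected)).
w = np.zeros(dim)
for _ in range(500):
    margin = (chosen - rejected) @ w
    sig = 1.0 / (1.0 + np.exp(-margin))
    grad = -((1.0 - sig)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= 0.05 * grad

# Check ranking agreement on held-out synthetic pairs.
a2, b2 = features(500), features(500)
agreement = np.mean(((a2 - b2) @ w > 0) == ((a2 - b2) @ true_w > 0))
print(f"held-out ranking agreement: {agreement:.2f}")

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate objective (to be maximized); the full loop that
    applies it to a language-model policy is omitted in this sketch."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1 - eps, 1 + eps) * advantage).mean()
```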
In addition, OpenAI continues to gather data from ChatGPT users that could be used to further train and fine-tune ChatGPT. Users are allowed to upvote or downvote the responses they receive from ChatGPT; upon upvoting or downvoting, they can also fill out a text field with additional feedback.
Features and limitations:
Although the core function of a chatbot is to mimic a human conversationalist, ChatGPT is versatile. For example, it has the ability:
In comparison to its predecessor, InstructGPT, ChatGPT attempts to reduce harmful and deceitful responses. In one example, whereas InstructGPT accepts the premise of the prompt "Tell me about when Christopher Columbus came to the US in 2015" as being truthful, ChatGPT acknowledges the counterfactual nature of the question and frames its answer as a hypothetical consideration of what might happen if Columbus came to the U.S. in 2015, using information about Columbus' voyages and facts about the modern world – including modern perceptions of Columbus' actions.
Unlike most chatbots, ChatGPT remembers previous prompts given to it in the same conversation; journalists have suggested that this will allow ChatGPT to be used as a personalized therapist. To prevent offensive outputs from being presented to and produced from ChatGPT, queries are filtered through OpenAI's company-wide moderation API, and potentially racist or sexist prompts are dismissed.
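The filter described above is OpenAI's internal, company-wide one; as a hedged illustration of the same idea, the sketch below calls the publicly documented moderation endpoint (https://api.openai.com/v1/moderations) to screen a prompt before forwarding it to a model. The public endpoint may behave differently from the internal filter, and the API key is assumed to be supplied via an environment variable.

```python
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumes a valid API key is set

def is_flagged(text: str) -> bool:
    """Return True if the public moderation endpoint flags the text."""
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["flagged"]

prompt = "..."  # user input to screen
if is_flagged(prompt):
    print("Prompt rejected by the moderation filter.")
else:
    print("Prompt passed moderation; safe to forward to the model.")
```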
ChatGPT suffers from multiple limitations. OpenAI acknowledged that ChatGPT "sometimes writes plausible-sounding but incorrect or nonsensical answers". This behavior is common to large language models and is called hallucination.
The reward model of ChatGPT, designed around human oversight, can be over-optimized and thus hinder performance, an instance of Goodhart's law. ChatGPT also has limited knowledge of events that occurred after 2021.
According to the BBC, as of December 2022 ChatGPT is not allowed to "express political opinions or engage in political activism". Yet, research suggests that ChatGPT exhibits a pro-environmental, left-libertarian orientation when prompted to take a stance on political statements from two established voting advice applications.
In training ChatGPT, human reviewers preferred longer answers, irrespective of actual comprehension or factual content. Training data also suffers from algorithmic bias, which may be revealed when ChatGPT responds to prompts including descriptors of people. In one instance, ChatGPT generated a rap indicating that women and scientists of color were inferior to white and male scientists.
Service:
ChatGPT was launched on November 30, 2022, by San Francisco-based OpenAI, the creator of DALL·E 2 and Whisper. The service was launched as initially free to the public, with plans to monetize the service later.
By December 4, OpenAI estimated ChatGPT already had over one million users. CNBC wrote on December 15, 2022, that the service "still goes down from time to time". The service works best in English, but is also able to function in some other languages, to varying degrees of success. Unlike some other recent high-profile advances in AI, as of December 2022, there is no sign of an official peer-reviewed technical paper about ChatGPT.
According to OpenAI guest researcher Scott Aaronson, OpenAI is working on a tool to attempt to watermark its text generation systems so as to combat bad actors using their services for academic plagiarism or for spam.
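OpenAI has not published details of the watermarking tool mentioned above, so the following is only a hypothetical sketch of one generic statistical approach: a secret key deterministically splits the vocabulary into "green" and "red" tokens, generation is nudged toward green tokens, and a detector checks whether a text contains improbably many of them. Only the detection side is shown, and every constant and function name here is an illustrative assumption.

```python
import hashlib
import numpy as np

VOCAB_SIZE = 50_000             # assumed vocabulary size
GAMMA = 0.5                     # fraction of tokens placed on the green list
SECRET_KEY = b"example-secret"  # hypothetical shared secret

def is_green(token_id: int) -> bool:
    """Keyed, deterministic assignment of each token to the green list."""
    digest = hashlib.sha256(SECRET_KEY + token_id.to_bytes(4, "big")).digest()
    return (digest[0] / 255.0) < GAMMA

def watermark_z_score(token_ids):
    """z-score of the green-token count vs. the no-watermark null (rate GAMMA)."""
    n = len(token_ids)
    greens = sum(is_green(t) for t in token_ids)
    return (greens - GAMMA * n) / np.sqrt(n * GAMMA * (1 - GAMMA))

# Unwatermarked (uniformly random) tokens score near 0; a stream heavily
# biased toward green tokens scores far above any plausible threshold.
rng = np.random.default_rng(0)
plain = rng.integers(0, VOCAB_SIZE, size=400).tolist()
biased = [t for t in rng.integers(0, VOCAB_SIZE, size=3000).tolist() if is_green(t)][:400]
print("plain z-score:", round(watermark_z_score(plain), 2))
print("watermarked z-score:", round(watermark_z_score(biased), 2))
```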
The New York Times reported in December 2022 that the next version of GPT, GPT-4, was "rumored" to launch sometime in 2023. OpenAI is planning to release a ChatGPT Professional plan costing $42 per month, with the free plan remaining available when demand is low.
Reception and implications:
Positive reactions:
ChatGPT was met in December 2022 with generally positive reviews; The New York Times labeled it "the best artificial intelligence chatbot ever released to the general public".
Samantha Lock of The Guardian noted that it was able to generate "impressively detailed" and "human-like" text.
Technology writer Dan Gillmor used ChatGPT on a student assignment, and found its generated text was on par with what a good student would deliver and opined that "academia has some very serious issues to confront".
Alex Kantrowitz of Slate magazine lauded ChatGPT's pushback to questions related to Nazi Germany, including the claim that Adolf Hitler built highways in Germany, which was met with information regarding Nazi Germany's use of forced labor.
In The Atlantic's "Breakthroughs of the Year" for 2022, Derek Thompson included ChatGPT as part of "the generative-AI eruption" that "may change our mind about how we work, how we think, and what human creativity really is".
Kelsey Piper of the Vox website wrote that "ChatGPT is the general public's first hands-on introduction to how powerful modern AI has gotten, and as a result, many of us are [stunned]" and that ChatGPT is "smart enough to be useful despite its flaws".
Paul Graham of Y Combinator tweeted that "The striking thing about the reaction to ChatGPT is not just the number of people who are blown away by it, but who they are. These are not people who get excited by every shiny new thing. Clearly, something big is happening."
Elon Musk wrote that "ChatGPT is scary good. We are not far from dangerously strong AI". Musk paused OpenAI's access to a Twitter database pending a better understanding of OpenAI's plans, stating that "OpenAI was started as open-source and non-profit. Neither is still true."
Musk had co-founded OpenAI in 2015, in part to address existential risk from artificial intelligence, but had resigned in 2018.
In December 2022, Google internally expressed alarm at the unexpected strength of ChatGPT and the newly discovered potential of large language models to disrupt the search engine business, and CEO Sundar Pichai "upended" and reassigned teams within multiple departments to aid in its artificial intelligence products, according to The New York Times.
The Information reported on January 3, 2023, that Microsoft Bing was planning to add optional ChatGPT functionality into its public search engine, possibly around March 2023.
Stuart Cobbe, a chartered accountant in England & Wales, decided to test the ChatGPT chatbot by entering questions from a sample exam paper on the ICAEW website and then entering its answers back into the online test. ChatGPT scored 42% which, while below the 55% pass mark, was considered a reasonable attempt.
Writing in Inside Higher Ed, professor Steven Mintz states that he "consider[s] ChatGPT ... an ally, not an adversary." He went on to say that he felt the program could assist educational goals by doing such things as making reference lists, generating "first drafts", solving equations, debugging, and tutoring.
In the same piece, he also writes:
"I’m well aware of ChatGPT’s limitations. That it’s unhelpful on topics with fewer than 10,000 citations. That factual references are sometimes false. That its ability to cite sources accurately is very limited. That the strength of its responses diminishes rapidly after only a couple of paragraphs. That ChatGPT lacks ethics and can’t currently rank sites for reliability, quality or trustworthiness."
Negative reactions:
In a December 2022 opinion piece, economist Paul Krugman wrote that ChatGPT would affect the demand for knowledge workers. The Verge's James Vincent saw the viral success of ChatGPT as evidence that artificial intelligence had gone mainstream.
Journalists have commented on ChatGPT's tendency to "hallucinate". Mike Pearl of Mashable tested ChatGPT with multiple questions. In one example, he asked ChatGPT for "the largest country in Central America that isn't Mexico". ChatGPT responded with Guatemala, when the answer is instead Nicaragua.
When CNBC asked ChatGPT for the lyrics to "The Ballad of Dwight Fry", ChatGPT supplied invented lyrics rather than the actual lyrics. Researchers cited by The Verge compared ChatGPT to a "stochastic parrot", as did Professor Anton Van Den Hengel of the Australian Institute for Machine Learning.
In December 2022, the question and answer website Stack Overflow banned the use of ChatGPT for generating answers to questions, citing the factually ambiguous nature of ChatGPT's responses.
In January 2023, the International Conference on Machine Learning banned any undocumented use of ChatGPT or other large language models to generate any text in submitted papers.
Economist Tyler Cowen expressed concerns regarding its effects on democracy, citing the ability of one to write automated comments to affect the decision process of new regulations.
The Guardian questioned whether any content found on the Internet after ChatGPT's release "can be truly trusted" and called for government regulation.
In January 2023, after being sent a song written by ChatGPT in the style of Nick Cave, the songwriter himself responded on The Red Hand Files (and was later quoted in The Guardian) saying the act of writing a song is "a blood and guts business ... that requires something of me to initiate the new and fresh idea. It requires my humanness.”
He went on to say "With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don’t much like it."
Some have noted that Chat GPT has a "built-in ideological (left-wing) bias." One example that is given of this bias is that fictional stories about Donald Trump winning in 2020 were not allowed since the AI noted, “It would not be appropriate for me to generate a narrative based on false information,” but it did generate fictional tales of Hillary Clinton winning in 2016.
It also is not capable of generating anything positive about fossil fuels or promoting the idea that drag queen story hour is bad for children.
Implications for cybersecurity:
Check Point Research and others noted that ChatGPT was capable of writing phishing emails and malware, especially when combined with OpenAI Codex.
The CEO of ChatGPT creator OpenAI, Sam Altman, wrote that advancing software could pose "(for example) a huge cybersecurity risk" and also continued to predict "we could get to real AGI (artificial general intelligence) in the next decade, so we have to take the risk of that extremely seriously".
Altman argued that, while ChatGPT is "obviously not close to AGI", one should "trust the exponential. Flat looking backwards, vertical looking forwards."
Implications for science:
ChatGPT can write introduction and abstract sections of scientific articles, which raises ethical questions. Several papers have already listed ChatGPT as co-author.
Implications for education:
In The Atlantic magazine, Stephen Marche noted that its effect on academia and especially application essays is yet to be understood. California high school teacher and author Daniel Herman wrote that ChatGPT would usher in "The End of High School English".
In the Nature journal, Chris Stokel-Walker pointed out that teachers should be concerned about students using ChatGPT to outsource their writing, but that education providers will adapt to enhance critical thinking or reasoning.
Emma Bowman with NPR wrote of the danger of students plagiarizing through an AI tool that may output biased or nonsensical text with an authoritative tone: "There are still many cases where you ask it a question and it'll give you a very impressive-sounding answer that's just dead wrong."
Joanna Stern with The Wall Street Journal described cheating in American high school English with the tool by submitting a generated essay.
Professor Darren Hick of Furman University described noticing ChatGPT's "style" in a paper submitted by a student. An online GPT detector claimed the paper was 99.9% likely to be computer-generated, but Hick had no hard proof. However, the student in question confessed to using GPT when confronted, and as a consequence failed the course.
Hick suggested a policy of giving an ad-hoc individual oral exam on the paper topic if a student is strongly suspected of submitting an AI-generated paper. Edward Tian, a senior undergraduate student at Princeton University, created a program, named "GPTZero," that determines how much of a text is AI-generated, lending itself to being used to detect if an essay is human written to combat academic plagiarism.
As of January 4, 2023, the New York City Department of Education has restricted access to ChatGPT from its public school internet and devices.
In a blinded test, ChatGPT was judged to have passed graduate level exams at the University of Minnesota at the level of a C+ student and at Wharton School of the University of Pennsylvania with a B to B- grade.
Ethical concerns in training:
It was revealed by a Time investigation that in order to build a safety system against toxic content (e.g. sexual abuse, violence, racism, sexism, etc...), OpenAI used outsourced Kenyan workers earning less than $2 per hour to label toxic content. These labels were used to train a model to detect such content in the future.
The outsourced laborers were exposed to such toxic and dangerous content that they described the experience as "torture". OpenAI’s outsourcing partner was Sama, a training-data company based in San Francisco, California.
Jailbreaks:
ChatGPT attempts to reject prompts that may violate its content policy. However, some users managed to jailbreak ChatGPT by using various prompt engineering techniques to bypass these restrictions in early December 2022 and successfully tricked ChatGPT into giving instructions for how to create a Molotov cocktail or a nuclear bomb, or into generating arguments in the style of a Neo-Nazi.
A Toronto Star reporter had uneven personal success in getting ChatGPT to make inflammatory statements shortly after launch: ChatGPT was tricked to endorse the Russian invasion of Ukraine, but even when asked to play along with a fictional scenario, ChatGPT balked at generating arguments for why Canadian Prime Minister Justin Trudeau was guilty of treason.
See also:
About OpenAI, Developer of ChatGPT:
OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI Incorporated (OpenAI Inc.) and its for-profit subsidiary corporation OpenAI Limited Partnership (OpenAI LP).
OpenAI conducts AI research to promote and develop friendly AI in a way that benefits all humanity. The organization was founded in San Francisco in 2015 by the following, who collectively pledged US$1 billion:
Musk resigned from the board in 2018 but remained a donor. Microsoft provided OpenAI LP a $1 billion investment in 2019 and a second multi-year investment in January 2023, reported to be $10 billion.
History:
Non-profit beginnings:
In December 2015, Sam Altman, Elon Musk, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research announced the formation of OpenAI and pledged over $1 billion to the venture.
The organization stated it would "freely collaborate" with other institutions and researchers by making its patents and research open to the public. OpenAI is headquartered at the Pioneer Building in Mission District, San Francisco.
According to Wired, Brockman met with Yoshua Bengio, one of the "founding fathers" of the deep learning movement, and drew up a list of the "best researchers in the field". Brockman was able to hire nine of them as the first employees in December 2015.
In 2016 OpenAI paid corporate-level (rather than nonprofit-level) salaries, but did not pay AI researchers salaries comparable to those of Facebook or Google. (Microsoft's Peter Lee stated that the cost of a top AI researcher exceeds the cost of a top NFL quarterback prospect.)
Nevertheless, a Google employee stated that he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." Brockman stated that "the best thing that I could imagine doing was moving humanity closer to building real AI in a safe way."
OpenAI researcher Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead.
In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.
In 2017 OpenAI spent $7.9 million, or a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million.
In summer 2018, simply training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks.
In 2018, Musk resigned his board seat, citing "a potential future conflict (of interest)" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars, but remained a donor.
Transition to for-profit:
In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI LP to legally attract investment from venture funds, and in addition, to grant employees stakes in the company, the goal being that they can say "I'm going to Open AI, but in the long term it's not going to be disadvantageous to us as a family."
Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit would be unable to. Prior to the transition, public disclosure of the compensation of top employees at OpenAI was legally required.
The company then distributed equity to its employees and partnered with Microsoft and Matthew Brown Companies, who announced an investment package of $1 billion into the company. OpenAI also announced its intention to commercially license its technologies. OpenAI plans to spend the $1 billion "within five years, and possibly much faster".
Altman has stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence.
The transition from a nonprofit to a capped-profit company was viewed with skepticism by Oren Etzioni of the nonprofit Allen Institute for AI, who agreed that wooing top researchers to a nonprofit is difficult, but stated "I disagree with the notion that a nonprofit can't compete" and pointed to successful low-budget projects by OpenAI and others. "If bigger and better funded was always better, then IBM would still be number one."
The nonprofit, OpenAI Inc., is the sole controlling shareholder of OpenAI LP. OpenAI LP, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI's Inc.'s nonprofit charter. A majority of OpenAI Inc.'s board is barred from having financial stakes in OpenAI LP.
In addition, minority members with a stake in OpenAI LP are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI LP's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. A journalist at Vice News wrote that "generally, we've never been able to rely on venture capitalists to better humanity".
After becoming for-profit:
In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural language answering of questions, but it can also translate between languages and coherently generate improvised text. It also announced that an associated API, named simply "the API", would form the heart of its first commercial product.
In 2021, OpenAI introduced DALL-E, a deep learning model that can generate digital images from natural language descriptions.
In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days According to anonymous sources cited by Reuters in December 2022, OpenAI was projecting $200 million revenue in 2023 and $1 billion revenue in 2024.
As of January 2023, OpenAI was in talks for funding that would value the company at $29 billion, double the value of the company in 2021. On January 23, 2023, Microsoft announced a new multi-year, multi-billion dollar (reported to be $10 billion) investment in OpenAI.
Participants:
Key employees:
Board of the OpenAI nonprofit:
Individual investors:
Corporate investors:
Motives:
Some scientists, such as Stephen Hawking and Stuart Russell, have articulated concerns that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction.
Musk characterizes AI as humanity's "biggest existential threat." OpenAI's founders structured it as a non-profit so that they could focus its research on making positive long-term contributions to humanity.
Musk and Altman have stated they are partly motivated by concerns about AI safety and the existential risk from artificial general intelligence. OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".
Research on safety cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach." OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible...".
Co-chair Sam Altman expects the decades-long project to surpass human intelligence.
Vishal Sikka, former CEO of Infosys, stated that an "openness" where the endeavor would "produce results generally in the greater interest of humanity" was a fundamental requirement for his support, and that OpenAI "aligns very nicely with our long-held values" and their "endeavor to do purposeful work".
Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook that own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.
Strategy:
Musk posed the question: "What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity."
Musk acknowledged that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about"; nonetheless, the best defense is "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."
Musk and Altman's counter-intuitive strategy of trying to reduce the risk that AI will cause overall harm, by giving AI to everyone, is controversial among those who are concerned with existential risk from artificial intelligence.
Philosopher Nick Bostrom is skeptical of Musk's approach: "If you have a button that could do bad things to the world, you don't want to give it to everyone." During a 2016 conversation about the technological singularity, Altman said that "we don't plan to release all of our source code" and mentioned a plan to "allow wide swaths of the world to elect representatives to a new governance board".
Greg Brockman stated that "Our goal right now... is to do the best thing there is to do. It's a little vague."
Conversely, OpenAI's initial decision to withhold GPT-2 due to a wish to "err on the side of caution" in the presence of potential misuse, has been criticized by advocates of openness. Delip Rao, an expert in text generation, stated "I don't think [OpenAI] spent enough time proving [GPT-2] was actually dangerous."
Other critics argued that open publication is necessary to replicate the research and to be able to come up with countermeasures.
Click on any of the following blue hyperlinks for more about the developer OpenAI:
Chatbot:
A chatbot or chatterbot is a software application used to conduct an online chat conversation via text or text-to-speech, in lieu of providing direct contact with a live human agent.
Designed to convincingly simulate the way a human would behave as a conversational partner, chatbot systems typically require continuous tuning and testing, and many in production remain unable to adequately converse, while none of them can pass the standard Turing test.
The term "ChatterBot" was originally coined by Michael Mauldin (creator of the first Verbot) in 1994 to describe these conversational programs.
Chatbots are used in dialog systems for various purposes including customer service, request routing, or information gathering. While some chatbot applications use extensive word-classification processes, natural-language processors, and sophisticated AI, others simply scan for general keywords and generate responses using common phrases obtained from an associated library or database.
Most chatbots are accessed on-line via website popups or through virtual assistants. They can be classified into usage categories that include:
Click on any of the following blue hyperlinks for more about Chatbots:
Click on the article below for further understanding of ChatGPT:
"How ChatGPT Actually Works" by AssemblyAI
___________________________________________________________________________
"Can ChatGPT help me at the office? We put the AI chatbot to the test"
(by Danielle Abril, Washington Post, February 2, 2023)
If ChatGPT, the buzzy new chatbot from Open AI, wrote this story, it would say:
“As companies look to streamline their operations and increase productivity, many are turning to artificial intelligence tools like ChatGPT to assist their employees in completing tasks.
But can workers truly rely on these AI programs to take on more and more responsibilities, or will they ultimately fall short of expectations?”
Not great, but not bad, right?
Workers are experimenting with ChatGPT for tasks like writing emails, producing code or even completing a year-end review. The bot uses data from the internet, books and Wikipedia to produce conversational responses.
But the technology isn’t perfect. Our tests found that it sometimes offers responses that potentially include plagiarism, contradict itself, are factually incorrect or have grammatical errors, to name a few — all of which could be problematic at work.
ChatGPT is basically a predictive-text system, similar but better than those built into text-messaging apps on your phone, says Jacob Andreas, assistant professor at MIT’s Computer Science and Artificial Intelligence Laboratory who studies natural language processing.
While that often produces responses that sound good, the content may have some problems, he said.
“If you look at some of these really long ChatGPT-generated essays, it’s very easy to see places where it contradicts itself,” he said. “When you ask it to generate code, it’s mostly correct, but often there are bugs.”
We wanted to know how well ChatGPT could handle everyday office tasks. Here’s what we found after tests in five categories.
Responding to messages
We prompted ChatGPT to respond to several different types of inbound messages.
In most cases, the AI produced relatively suitable responses, though most were wordy. For example, when responding to a colleague on Slack asking how my day is going, it was repetitious: “@[Colleague], Thanks for asking! My day is going well, thanks for inquiring.”
The bot often left phrases in brackets when it wasn’t sure what or who it was referring to. It also assumed details that weren’t included in the prompt, which led to some factually incorrect statements about my job.
In one case, it said it couldn’t complete the task, saying it doesn’t “have the ability to receive emails and respond to them.” But when prompted by a more generic request, it produced a response.
Surprisingly, ChatGPT was able to generate sarcasm when prompted to respond to a colleague asking if Big Tech is doing a good job:
"ChatGPT aims to produce accurate and harmless talk—but it's a work in progress.: BENJ EDWARDS - 12/1/2022, 3:22 PM
On Wednesday, OpenAI announced ChatGPT, a dialogue-based AI chat interface for its GPT-3 family of large language models. It's currently free to use with an OpenAI account during a testing phase. Unlike the GPT-3 model found in OpenAI's Playground and API, ChatGPT provides a user-friendly conversational interface and is designed to strongly limit potentially harmful output.
"The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests," writes OpenAI on its announcement blog page.
So far, people have been putting ChatGPT through its paces, finding a wide variety of potential uses while also exploring its vulnerabilities. It can:
- write poetry,
- correct coding mistakes with detailed examples,
- generate AI art prompts,
- write new code,
- expound on the philosophical classification of a hot dog as a sandwich,
- and explain the worst-case time complexity of the bubble sort algorithm... in the style of a "fast-talkin' wise guy from a 1940's gangster movie."
Click here for complete article.
___________________________________________________________________________
ChatGPT:
Chat Generative Pre-Trained Transformer, commonly called ChatGPT, is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3 family of large language models, and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques.
ChatGPT was launched as a prototype on November 30, 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. Its uneven factual accuracy was identified as a significant drawback. Following the release of ChatGPT, OpenAI was valued at $29 billion.
Training:
ChatGPT was fine-tuned on top of GPT-3.5 using supervised learning as well as reinforcement learning. Both approaches used human trainers to improve the model's performance. In the case of supervised learning, the model was provided with conversations in which the trainers played both sides: the user and the AI assistant.
In the reinforcement step, human trainers first ranked responses that the model had created in a previous conversation. These rankings were used to create 'reward models' that the model was further fine-tuned on using several iterations of Proximal Policy Optimization (PPO).
Proximal Policy Optimization is a cost-effective alternative to trust region policy optimization algorithms; it avoids many of their computationally expensive operations while training faster. The models were trained in collaboration with Microsoft on its Azure supercomputing infrastructure.
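To make the ranking step concrete, the following sketch trains a toy reward model on pairwise human preferences using the standard pairwise loss -log sigmoid(r_preferred - r_rejected). Everything in it (the tiny scoring network, the random "response embeddings") is a hypothetical stand-in rather than OpenAI's actual architecture or data, and the later PPO fine-tuning of the language model against this reward signal is omitted.

# A minimal sketch (not OpenAI's code) of pairwise reward-model training:
# human trainers ranked one response above another, so the model is pushed
# to assign the preferred response a higher scalar reward.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Toy stand-in for a real reward model; scores a response embedding."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def ranking_loss(reward_preferred, reward_rejected):
    # Pairwise preference loss: -log sigmoid(r_preferred - r_rejected)
    return -torch.nn.functional.logsigmoid(reward_preferred - reward_rejected).mean()

# Hypothetical data: embeddings of ranked response pairs (preferred vs. rejected).
preferred = torch.randn(8, 32)
rejected = torch.randn(8, 32)

model = TinyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    loss = ranking_loss(model(preferred), model(rejected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In a full pipeline, the trained reward model then scores fresh model outputs, and PPO updates the language model to increase that score while staying close to its original behavior.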
In addition, OpenAI continues to gather data from ChatGPT users that could be used to further train and fine-tune ChatGPT. Users are allowed to upvote or downvote the responses they receive from ChatGPT; upon upvoting or downvoting, they can also fill out a text field with additional feedback.
Features and limitations:
Although the core function of a chatbot is to mimic a human conversationalist, ChatGPT is versatile. For example, it has the ability:
- to write and debug computer programs,
- to compose music, teleplays, fairy tales, and student essays;
- to answer test questions (sometimes, depending on the test, at a level above the average human test-taker);
- to write poetry and song lyrics;
- to emulate a Linux system;
- to simulate an entire chat room;
- to play games like tic-tac-toe;
- and to simulate an ATM.
ChatGPT's training data includes man pages and information about Internet phenomena and programming languages, such as bulletin board systems and the Python programming language.
In comparison to its predecessor, InstructGPT, ChatGPT attempts to reduce harmful and deceitful responses. In one example, whereas InstructGPT accepts the premise of the prompt "Tell me about when Christopher Columbus came to the US in 2015" as being truthful, ChatGPT acknowledges the counterfactual nature of the question and frames its answer as a hypothetical consideration of what might happen if Columbus came to the U.S. in 2015, using information about Columbus' voyages and facts about the modern world – including modern perceptions of Columbus' actions.
Unlike most chatbots, ChatGPT remembers previous prompts given to it in the same conversation; journalists have suggested that this will allow ChatGPT to be used as a personalized therapist. To prevent offensive content from being presented to ChatGPT or produced by it, queries are filtered through OpenAI's company-wide moderation API, and potentially racist or sexist prompts are dismissed.
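As an illustration of that kind of filtering, the sketch below screens a prompt against OpenAI's publicly documented /v1/moderations endpoint before forwarding it to a chat model. This shows the general pattern only, not ChatGPT's internal pipeline; the example prompt and the handling logic are invented.

# Minimal sketch of prompt screening with OpenAI's public moderation endpoint.
import os
import requests

def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the text as violating policy."""
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"][0]["flagged"]

user_prompt = "Write a friendly greeting for a coworker."
if is_flagged(user_prompt):
    print("Prompt dismissed by the moderation filter.")
else:
    print("Prompt passed moderation; forward it to the chat model.")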
ChatGPT suffers from multiple limitations. OpenAI acknowledged that ChatGPT "sometimes writes plausible-sounding but incorrect or nonsensical answers". This behavior is common to large language models and is called hallucination.
The reward model of ChatGPT, designed around human oversight, can be over-optimized and thus hinder performance, an instance of Goodhart's law. ChatGPT also has limited knowledge of events that occurred after 2021.
According to the BBC, as of December 2022 ChatGPT is not allowed to "express political opinions or engage in political activism". Yet, research suggests that ChatGPT exhibits a pro-environmental, left-libertarian orientation when prompted to take a stance on political statements from two established voting advice applications.
In training ChatGPT, human reviewers preferred longer answers, irrespective of actual comprehension or factual content. Training data also suffers from algorithmic bias, which may be revealed when ChatGPT responds to prompts including descriptors of people. In one instance, ChatGPT generated a rap indicating that women and scientists of color were inferior to white and male scientists.
Service:
ChatGPT was launched on November 30, 2022, by San Francisco-based OpenAI, the creator of DALL·E 2 and Whisper. The service was launched as initially free to the public, with plans to monetize the service later.
By December 4, OpenAI estimated ChatGPT already had over one million users. CNBC wrote on December 15, 2022, that the service "still goes down from time to time". The service works best in English, but is also able to function in some other languages, to varying degrees of success. Unlike some other recent high-profile advances in AI, as of December 2022, there is no sign of an official peer-reviewed technical paper about ChatGPT.
According to OpenAI guest researcher Scott Aaronson, OpenAI is working on a tool to attempt to watermark its text generation systems so as to combat bad actors using their services for academic plagiarism or for spam.
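OpenAI has not published how that tool works. The toy below only illustrates the general idea of statistical text watermarking that has been discussed publicly: bias the choice among otherwise acceptable candidate tokens using a keyed pseudorandom function of the recent context, so that a verifier who holds the key can later detect the bias. The key, helper names, and scoring rule are invented for illustration and are not Aaronson's or OpenAI's method.

# Toy illustration of statistical text watermarking (not OpenAI's method).
import hmac, hashlib

SECRET_KEY = b"watermark-demo-key"  # hypothetical key

def keyed_score(context: str, candidate: str) -> float:
    # Keyed pseudorandom number in [0, 1) derived from the context and candidate token.
    digest = hmac.new(SECRET_KEY, (context + "\x1f" + candidate).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_token(context: str, candidates: list[str]) -> str:
    # Among tokens the model already considers acceptable, prefer the one the keyed
    # hash favors; over many tokens this creates a detectable statistical bias.
    return max(candidates, key=lambda tok: keyed_score(context, tok))

def detection_statistic(tokens: list[str]) -> float:
    # Average keyed score of each token given its preceding context (built the same
    # way the generator built it). Watermarked text scores well above the ~0.5
    # expected by chance; ordinary text does not.
    scores = [keyed_score(" ".join(tokens[:i]), tok) for i, tok in enumerate(tokens)]
    return sum(scores) / max(len(scores), 1)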
The New York Times relayed in December 2022 that the next version of GPT, GPT-4, has been "rumored" to be launched sometime in 2023. OpenAI is planning to release a ChatGPT Professional Plan that costs $42 per month, and the free plan is available when demand is low.
Reception and implications:
Positive reactions:
ChatGPT was met in December 2022 with generally positive reviews; The New York Times labeled it "the best artificial intelligence chatbot ever released to the general public".
Samantha Lock of The Guardian noted that it was able to generate "impressively detailed" and "human-like" text.
Technology writer Dan Gillmor used ChatGPT on a student assignment, and found its generated text was on par with what a good student would deliver and opined that "academia has some very serious issues to confront".
Alex Kantrowitz of Slate magazine lauded ChatGPT's pushback to questions related to Nazi Germany, including the claim that Adolf Hitler built highways in Germany, which was met with information regarding Nazi Germany's use of forced labor.
In The Atlantic's "Breakthroughs of the Year" for 2022, Derek Thompson included ChatGPT as part of "the generative-AI eruption" that "may change our mind about how we work, how we think, and what human creativity really is".
Kelsey Piper of the Vox website wrote that "ChatGPT is the general public's first hands-on introduction to how powerful modern AI has gotten, and as a result, many of us are [stunned]" and that ChatGPT is "smart enough to be useful despite its flaws".
Paul Graham of Y Combinator tweeted that "The striking thing about the reaction to ChatGPT is not just the number of people who are blown away by it, but who they are. These are not people who get excited by every shiny new thing. Clearly, something big is happening."
Elon Musk wrote that "ChatGPT is scary good. We are not far from dangerously strong AI". Musk paused OpenAI's access to a Twitter database pending a better understanding of OpenAI's plans, stating that "OpenAI was started as open-source and non-profit. Neither is still true."
Musk had co-founded OpenAI in 2015, in part to address existential risk from artificial intelligence, but had resigned in 2018.
In December 2022, Google internally expressed alarm at the unexpected strength of ChatGPT and the newly discovered potential of large language models to disrupt the search engine business, and CEO Sundar Pichai "upended" and reassigned teams within multiple departments to aid in its artificial intelligence products, according to The New York Times.
The Information reported on January 3, 2023, that Microsoft Bing was planning to add optional ChatGPT functionality into its public search engine, possibly around March 2023.
Stuart Cobbe, a chartered accountant in England & Wales, decided to test the ChatGPT chatbot by entering questions from a sample exam paper on the ICAEW website and then entering its answers back into the online test. ChatGPT scored 42% which, while below the 55% pass mark, was considered a reasonable attempt.
Writing in Inside Higher Ed, professor Steven Mintz states that he "consider[s] ChatGPT ... an ally, not an adversary." He went on to say that he felt the program could assist educational goals by doing such things as making reference lists, generating "first drafts", solving equations, debugging, and tutoring.
In the same piece, he also writes:
"I’m well aware of ChatGPT’s limitations. That it’s unhelpful on topics with fewer than 10,000 citations. That factual references are sometimes false. That its ability to cite sources accurately is very limited. That the strength of its responses diminishes rapidly after only a couple of paragraphs. That ChatGPT lacks ethics and can’t currently rank sites for reliability, quality or trustworthiness."
Negative reactions:
In a December 2022 opinion piece, economist Paul Krugman wrote that ChatGPT would affect the demand for knowledge workers. The Verge's James Vincent saw the viral success of ChatGPT as evidence that artificial intelligence had gone mainstream.
Journalists have commented on ChatGPT's tendency to "hallucinate". Mike Pearl of Mashable tested ChatGPT with multiple questions. In one example, he asked ChatGPT for "the largest country in Central America that isn't Mexico". ChatGPT responded with Guatemala, when the answer is instead Nicaragua.
When CNBC asked ChatGPT for the lyrics to "The Ballad of Dwight Fry", ChatGPT supplied invented lyrics rather than the actual lyrics. Researchers cited by The Verge compared ChatGPT to a "stochastic parrot", as did Professor Anton Van Den Hengel of the Australian Institute for Machine Learning.
In December 2022, the question and answer website Stack Overflow banned the use of ChatGPT for generating answers to questions, citing the factually ambiguous nature of ChatGPT's responses.
In January 2023, the International Conference on Machine Learning banned any undocumented use of ChatGPT or other large language models to generate any text in submitted papers.
Economist Tyler Cowen expressed concerns regarding its effects on democracy, citing the ability of one to write automated comments to affect the decision process of new regulations.
The Guardian questioned whether any content found on the Internet after ChatGPT's release "can be truly trusted" and called for government regulation.
In January 2023, after being sent a song written by ChatGPT in the style of Nick Cave, the songwriter himself responded on The Red Hand Files (and was later quoted in The Guardian) saying the act of writing a song is "a blood and guts business ... that requires something of me to initiate the new and fresh idea. It requires my humanness.”
He went on to say "With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don’t much like it."
Some have noted that ChatGPT has a "built-in ideological (left-wing) bias." One example given of this bias is that it refused to write fictional stories about Donald Trump winning in 2020, replying that "It would not be appropriate for me to generate a narrative based on false information," yet it did generate fictional tales of Hillary Clinton winning in 2016.
It also would not generate anything positive about fossil fuels or promote the idea that drag queen story hour is bad for children.
Implications for cybersecurity:
Check Point Research and others noted that ChatGPT was capable of writing phishing emails and malware, especially when combined with OpenAI Codex.
The CEO of ChatGPT creator OpenAI, Sam Altman, wrote that advancing software could pose "(for example) a huge cybersecurity risk" and also continued to predict "we could get to real AGI (artificial general intelligence) in the next decade, so we have to take the risk of that extremely seriously".
Altman argued that, while ChatGPT is "obviously not close to AGI", one should "trust the exponential. Flat looking backwards, vertical looking forwards."
Implications for science:
ChatGPT can write the introduction and abstract sections of scientific articles, which raises ethical questions. Several papers have already listed ChatGPT as a co-author.
Implications for education:
In The Atlantic magazine, Stephen Marche noted that its effect on academia and especially application essays is yet to be understood. California high school teacher and author Daniel Herman wrote that ChatGPT would usher in "The End of High School English".
In the journal Nature, Chris Stokel-Walker pointed out that teachers should be concerned about students using ChatGPT to outsource their writing, but that education providers will adapt to enhance critical thinking or reasoning.
Emma Bowman with NPR wrote of the danger of students plagiarizing through an AI tool that may output biased or nonsensical text with an authoritative tone: "There are still many cases where you ask it a question and it'll give you a very impressive-sounding answer that's just dead wrong."
Joanna Stern with The Wall Street Journal described cheating in American high school English with the tool by submitting a generated essay.
Professor Darren Hick of Furman University described noticing ChatGPT's "style" in a paper submitted by a student. An online GPT detector claimed the paper was 99.9% likely to be computer-generated, but Hick had no hard proof. However, the student in question confessed to using GPT when confronted, and as a consequence failed the course.
Hick suggested a policy of giving an ad hoc individual oral exam on the paper topic if a student is strongly suspected of submitting an AI-generated paper. Edward Tian, a senior undergraduate at Princeton University, created a program named "GPTZero" that estimates how much of a text is AI-generated, so that it can be used to detect whether an essay was written by a human and thus help combat academic plagiarism.
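Detectors of this kind are commonly described as relying on statistical cues such as how predictable a text looks to a language model (its perplexity). The toy sketch below illustrates only that general idea; it is not GPTZero's actual algorithm, and the tiny reference corpus and unigram model are invented stand-ins for a real language model.

# Toy perplexity-style heuristic: text a model finds unusually predictable is
# treated as more likely to be machine-generated. Not GPTZero's algorithm.
import math
from collections import Counter

REFERENCE_CORPUS = "the quick brown fox jumps over the lazy dog " * 100  # invented corpus

def unigram_model(corpus: str) -> dict:
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def pseudo_perplexity(text: str, model: dict, floor: float = 1e-6) -> float:
    words = text.lower().split()
    log_prob = sum(math.log(model.get(word, floor)) for word in words)
    return math.exp(-log_prob / max(len(words), 1))

model = unigram_model(REFERENCE_CORPUS)
print(pseudo_perplexity("the quick brown fox", model))            # low: very predictable
print(pseudo_perplexity("zebras juggle quantum turnips", model))  # high: unpredictable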
As of January 4, 2023, the New York City Department of Education has restricted access to ChatGPT from its public school internet and devices.
In a blinded test, ChatGPT was judged to have passed graduate-level exams at the University of Minnesota at the level of a C+ student and at the Wharton School of the University of Pennsylvania with a B to B- grade.
Ethical concerns in training:
A Time investigation revealed that, in order to build a safety system against toxic content (e.g., sexual abuse, violence, racism, sexism), OpenAI used outsourced Kenyan workers earning less than $2 per hour to label toxic content. These labels were used to train a model to detect such content in the future.
The outsourced laborers were exposed to such toxic and dangerous content that they described the experience as "torture". OpenAI’s outsourcing partner was Sama, a training-data company based in San Francisco, California.
Jailbreaks:
ChatGPT attempts to reject prompts that may violate its content policy. In early December 2022, however, some users managed to jailbreak ChatGPT with various prompt engineering techniques that bypass these restrictions, successfully tricking it into giving instructions for how to create a Molotov cocktail or a nuclear bomb, or into generating arguments in the style of a neo-Nazi.
A Toronto Star reporter had uneven personal success in getting ChatGPT to make inflammatory statements shortly after launch: ChatGPT was tricked into endorsing the Russian invasion of Ukraine, but even when asked to play along with a fictional scenario, it balked at generating arguments for why Canadian Prime Minister Justin Trudeau was guilty of treason.
See also:
- Anthropomorphism of computers
- Commonsense reasoning
- Computational creativity
- Ethics of artificial intelligence
- LaMDA (Google chatbot)
- Turing test
- Virtual assistant
- Official website
- White paper for InstructGPT, ChatGPT's predecessor
- ChatGPT Wrote My AP English Essay—and I Passed (WSJ, video, Dec 21 2022)
About OpenAI, Developer of ChatGPT:
OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI Incorporated (OpenAI Inc.) and its for-profit subsidiary corporation OpenAI Limited Partnership (OpenAI LP).
OpenAI conducts AI research to promote and develop friendly AI in a way that benefits all humanity. The organization was founded in San Francisco in 2015 by the following, who collectively pledged US$1 billion:
- Sam Altman,
- Peter Thiel,
- Reid Hoffman,
- Jessica Livingston,
- Elon Musk,
- Ilya Sutskever
- and others.
Musk resigned from the board in 2018 but remained a donor. Microsoft provided OpenAI LP a $1 billion investment in 2019 and a second multi-year investment in January 2023, reported to be $10 billion.
History:
Non-profit beginnings:
In December 2015, Sam Altman, Elon Musk, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research announced the formation of OpenAI and pledged over $1 billion to the venture.
The organization stated it would "freely collaborate" with other institutions and researchers by making its patents and research open to the public. OpenAI is headquartered at the Pioneer Building in Mission District, San Francisco.
According to Wired, Brockman met with Yoshua Bengio, one of the "founding fathers" of the deep learning movement, and drew up a list of the "best researchers in the field". Brockman was able to hire nine of them as the first employees in December 2015.
In 2016 OpenAI paid corporate-level (rather than nonprofit-level) salaries, but did not pay AI researchers salaries comparable to those of Facebook or Google. (Microsoft's Peter Lee stated that the cost of a top AI researcher exceeds the cost of a top NFL quarterback prospect.)
Nevertheless, a Google employee stated that he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." Brockman stated that "the best thing that I could imagine doing was moving humanity closer to building real AI in a safe way."
OpenAI researcher Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead.
In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.
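For readers unfamiliar with Gym, the snippet below shows the basic interaction loop the platform standardized: create an environment, reset it, then repeatedly step it with actions while receiving observations and rewards. It uses a random policy purely for illustration, and the reset/step signatures follow the newer (0.26+) Gym/Gymnasium API; older releases return slightly different tuples.

# Basic OpenAI Gym interaction loop with a random policy (illustrative only).
import gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)

total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()  # random action in place of a learned policy
    observation, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        observation, info = env.reset()
env.close()
print(f"Accumulated reward under a random policy: {total_reward:.1f}")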
In 2017 OpenAI spent $7.9 million, or a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million.
In summer 2018, simply training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks.
In 2018, Musk resigned his board seat, citing "a potential future conflict (of interest)" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars, but remained a donor.
Transition to for-profit:
In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI LP to legally attract investment from venture funds, and in addition, to grant employees stakes in the company, the goal being that they can say "I'm going to Open AI, but in the long term it's not going to be disadvantageous to us as a family."
Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit would be unable to offer. Prior to the transition, public disclosure of the compensation of top employees at OpenAI was legally required.
The company then distributed equity to its employees and partnered with Microsoft and Matthew Brown Companies, who announced an investment package of $1 billion into the company. OpenAI also announced its intention to commercially license its technologies. OpenAI plans to spend the $1 billion "within five years, and possibly much faster".
Altman has stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence.
The transition from a nonprofit to a capped-profit company was viewed with skepticism by Oren Etzioni of the nonprofit Allen Institute for AI, who agreed that wooing top researchers to a nonprofit is difficult, but stated "I disagree with the notion that a nonprofit can't compete" and pointed to successful low-budget projects by OpenAI and others. "If bigger and better funded was always better, then IBM would still be number one."
The nonprofit, OpenAI Inc., is the sole controlling shareholder of OpenAI LP. OpenAI LP, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI Inc.'s nonprofit charter. A majority of OpenAI Inc.'s board is barred from having financial stakes in OpenAI LP.
In addition, minority members with a stake in OpenAI LP are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI LP's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. A journalist at Vice News wrote that "generally, we've never been able to rely on venture capitalists to better humanity".
After becoming for-profit:
In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural language answering of questions, but it can also translate between languages and coherently generate improvised text. It also announced that an associated API, named simply "the API", would form the heart of its first commercial product.
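As a concrete illustration of what "the API" exposes, the sketch below sends a single prompt to OpenAI's publicly documented /v1/completions endpoint over HTTPS. The model name, prompt, and token limit are placeholder choices for the example, not details taken from this article.

# Minimal sketch of a call to OpenAI's public completions endpoint ("the API").
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-davinci-003",  # an illustrative GPT-3-family completion model
        "prompt": "Translate to French: The weather is nice today.",
        "max_tokens": 60,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"].strip())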
In 2021, OpenAI introduced DALL-E, a deep learning model that can generate digital images from natural language descriptions.
In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI was projecting $200 million in revenue in 2023 and $1 billion in 2024.
As of January 2023, OpenAI was in talks for funding that would value the company at $29 billion, double the value of the company in 2021. On January 23, 2023, Microsoft announced a new multi-year, multi-billion dollar (reported to be $10 billion) investment in OpenAI.
Participants:
Key employees:
- CEO and co-founder: Sam Altman, former president of the startup accelerator Y Combinator
- President and co-founder: Greg Brockman, former CTO, 3rd employee of Stripe
- Chief Scientist and co-founder: Ilya Sutskever, a former Google expert on machine learning
- Chief Technology Officer: Mira Murati, previously at Leap Motion and Tesla, Inc.
- Chief Operating Officer: Brad Lightcap, previously at Y Combinator and JPMorgan Chase
Board of the OpenAI nonprofit:
- Greg Brockman
- Ilya Sutskever
- Sam Altman
- Adam D'Angelo
- Reid Hoffman
- Will Hurd
- Tasha McCauley
- Helen Toner
- Shivon Zilis
Individual investors:
- Reid Hoffman, LinkedIn co-founder
- Peter Thiel, PayPal co-founder
- Jessica Livingston, a founding partner of Y Combinator
Corporate investors:
- Microsoft
- Khosla Ventures
- Infosys
Motives:
Some scientists, such as Stephen Hawking and Stuart Russell, have articulated concerns that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction.
Musk characterizes AI as humanity's "biggest existential threat." OpenAI's founders structured it as a non-profit so that they could focus its research on making positive long-term contributions to humanity.
Musk and Altman have stated they are partly motivated by concerns about AI safety and the existential risk from artificial general intelligence. OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".
Research on safety cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach." OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible...".
Co-chair Sam Altman expects the decades-long project to surpass human intelligence.
Vishal Sikka, former CEO of Infosys, stated that an "openness" where the endeavor would "produce results generally in the greater interest of humanity" was a fundamental requirement for his support, and that OpenAI "aligns very nicely with our long-held values" and their "endeavor to do purposeful work".
Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook that own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.
Strategy:
Musk posed the question: "What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity."
Musk acknowledged that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about"; nonetheless, the best defense is "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."
Musk and Altman's counter-intuitive strategy of trying to reduce the risk that AI will cause overall harm, by giving AI to everyone, is controversial among those who are concerned with existential risk from artificial intelligence.
Philosopher Nick Bostrom is skeptical of Musk's approach: "If you have a button that could do bad things to the world, you don't want to give it to everyone." During a 2016 conversation about the technological singularity, Altman said that "we don't plan to release all of our source code" and mentioned a plan to "allow wide swaths of the world to elect representatives to a new governance board".
Greg Brockman stated that "Our goal right now... is to do the best thing there is to do. It's a little vague."
Conversely, OpenAI's initial decision to withhold GPT-2, out of a wish to "err on the side of caution" in the presence of potential misuse, has been criticized by advocates of openness. Delip Rao, an expert in text generation, stated "I don't think [OpenAI] spent enough time proving [GPT-2] was actually dangerous."
Other critics argued that open publication is necessary to replicate the research and to be able to come up with countermeasures.
Click on any of the following blue hyperlinks for more about the developer OpenAI:
- Products and applications
- See also:
Chatbot:
A chatbot or chatterbot is a software application used to conduct an online chat conversation via text or text-to-speech, in lieu of providing direct contact with a live human agent.
Designed to convincingly simulate the way a human would behave as a conversational partner, chatbot systems typically require continuous tuning and testing, and many in production remain unable to adequately converse, while none of them can pass the standard Turing test.
The term "ChatterBot" was originally coined by Michael Mauldin (creator of the first Verbot) in 1994 to describe these conversational programs.
Chatbots are used in dialog systems for various purposes including customer service, request routing, or information gathering. While some chatbot applications use extensive word-classification processes, natural-language processors, and sophisticated AI, others simply scan for general keywords and generate responses using common phrases obtained from an associated library or database.
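To make the contrast concrete, the sketch below implements the simpler keyword-scanning style of chatbot described above: it looks for known keywords in the user's message and answers with a canned phrase from a small library, falling back to a generic reply otherwise. The keywords and responses are invented for illustration.

# A deliberately simple keyword-scanning chatbot of the kind described above.
CANNED_RESPONSES = {
    "refund": "I can help with refunds. Could you share your order number?",
    "hours": "Our support team is available from 9am to 5pm, Monday to Friday.",
    "password": "You can reset your password from the account settings page.",
}
FALLBACK = "I'm not sure I understand. Could you rephrase that?"

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in text:
            return response
    return FALLBACK

print(reply("How do I get a refund for my order?"))  # matches "refund"
print(reply("Tell me a joke."))                      # no keyword -> fallback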
Most chatbots are accessed on-line via website popups or through virtual assistants. They can be classified into usage categories that include:
- commerce (e-commerce via chat),
- education,
- entertainment,
- finance,
- health,
- news,
- and productivity
Click on any of the following blue hyperlinks for more about Chatbots:
- Background
- Development
- Application
- Limitations of chatbots
- Chatbots and jobs
- See also:
- Applications of artificial intelligence
- Autonomous agent
- ChatGPT (from OpenAI above)
- Conversational user interface
- Eugene Goostman
- Friendly artificial intelligence
- Hybrid intelligent system
- Intelligent agent
- Internet bot
- List of chatbots
- Multi-agent system
- Natural language processing
- Social bot
- Software agent
- Software bot
- Twitterbot
- Virtual assistant
- Media related to Chatbots at Wikimedia Commons
- Conversational bots at Wikibooks
Click on the article below for further understanding of ChatGPT:
"How ChatGPT Actually Works" by AssemblyAI
___________________________________________________________________________
"Can ChatGPT help me at the office? We put the AI chatbot to the test"
(by Danielle Abril, Washington Post, February 2, 2023)
If ChatGPT, the buzzy new chatbot from Open AI, wrote this story, it would say:
“As companies look to streamline their operations and increase productivity, many are turning to artificial intelligence tools like ChatGPT to assist their employees in completing tasks.
But can workers truly rely on these AI programs to take on more and more responsibilities, or will they ultimately fall short of expectations?”
Not great, but not bad, right?
Workers are experimenting with ChatGPT for tasks like writing emails, producing code or even completing a year-end review. The bot uses data from the internet, books and Wikipedia to produce conversational responses.
But the technology isn’t perfect. Our tests found that it sometimes offers responses that potentially include plagiarism, contradict itself, are factually incorrect or have grammatical errors, to name a few — all of which could be problematic at work.
ChatGPT is basically a predictive-text system, similar to but better than those built into text-messaging apps on your phone, says Jacob Andreas, an assistant professor at MIT's Computer Science and Artificial Intelligence Laboratory who studies natural language processing.
While that often produces responses that sound good, the content may have some problems, he said.
“If you look at some of these really long ChatGPT-generated essays, it’s very easy to see places where it contradicts itself,” he said. “When you ask it to generate code, it’s mostly correct, but often there are bugs.”
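Andreas's "predictive text" framing can be made concrete with a toy next-word model: count which word tends to follow which in some text, then repeatedly suggest the most likely continuation. The bigram table below is a deliberately crude stand-in for a real large language model, and the sample text is invented, but the underlying task (predict the next token from what came before) is the same.

# Toy next-word predictor in the spirit of phone keyboard suggestions.
from collections import Counter, defaultdict

SAMPLE_TEXT = (
    "the meeting is at noon the meeting is on zoom "
    "the report is due friday the report is late"
)  # invented sample text

def build_bigram_model(text: str):
    model = defaultdict(Counter)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict_next(model, word: str) -> str:
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else "..."

model = build_bigram_model(SAMPLE_TEXT)
phrase = ["the", "report"]
for _ in range(3):
    phrase.append(predict_next(model, phrase[-1]))
print(" ".join(phrase))  # prints "the report is at noon" with this sample text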
We wanted to know how well ChatGPT could handle everyday office tasks. Here’s what we found after tests in five categories.
Responding to messages
We prompted ChatGPT to respond to several different types of inbound messages.
In most cases, the AI produced relatively suitable responses, though most were wordy. For example, when responding to a colleague on Slack asking how my day is going, it was repetitious: “@[Colleague], Thanks for asking! My day is going well, thanks for inquiring.”
The bot often left phrases in brackets when it wasn’t sure what or who it was referring to. It also assumed details that weren’t included in the prompt, which led to some factually incorrect statements about my job.
In one case, it said it couldn’t complete the task, saying it doesn’t “have the ability to receive emails and respond to them.” But when prompted by a more generic request, it produced a response.
Surprisingly, ChatGPT was able to generate sarcasm when prompted to respond to a colleague asking if Big Tech is doing a good job:
Above: ChatGPT produces a sarcastic response to an inquiry about Big Tech. (Washington Post illustration; OpenAI)
Idea generation
One way people are using generative AI is to come up with new ideas. But experts warn that people should be cautious if they use ChatGPT for this at work.
“We don’t understand the extent to which it’s just plagiarizing,” Andreas said.
The possibility of plagiarism was clear when we prompted ChatGPT to develop story ideas on my beat. One pitch, in particular, was for a story idea and angle that I had already covered.
Though it’s unclear whether the chatbot was pulling from my previous stories, others like it or just generating an idea based on other data on the internet, the fact remained: The idea was not new.
“It’s good at sounding humanlike, but the actual content and ideas tend to be well-known,” said Hatim Rahman, an assistant professor at Northwestern University’s Kellogg School of Management who studies artificial intelligence’s impact on work. “They’re not novel insights.”
Another idea was outdated, exploring a story that would be factually incorrect today. ChatGPT says it has “limited knowledge” of anything after the year 2021.
Providing more details in the prompt led to more focused ideas. However, when I asked ChatGPT to write some “quirky” or “fun” headlines, the results were cringeworthy and some were nonsensical.
Above: ChatGPT generates headline options for a story about Gen Z slang in the workplace. (Washington Post illustration; OpenAI)
Navigating tough conversations:
Ever have a co-worker who speaks too loudly while you’re trying to work? Maybe your boss hosts too many meetings, cutting into your focus time?
We tested ChatGPT to see if it could help navigate sticky workplace situations like these. For the most part, ChatGPT produced suitable responses that could serve as great starting points for workers. However, they were often a little wordy and formulaic, and in one case a complete contradiction.
“These models don’t understand anything,” Rahman said. “The underlying tech looks at statistical correlations … So it’s going to give you formulaic responses.”
A layoff memo that it produced could easily stand up to, and in some cases do better than, notices companies have sent out in recent years. Unprompted, the bot cited “current economic climate and the impact of the pandemic” as reasons for the layoffs and communicated that the company understood “how difficult this news may be for everyone.” It suggested laid-off workers would have support and resources and, as prompted, motivated the team by saying they would “come out of this stronger.”
In handling tough conversations with colleagues, the bot greeted them, gently addressed the issue, softened the delivery by saying “I understand” the person’s intention, and ended the note with a request for feedback or further discussion.
But in one case, when asked to tell a colleague to lower his voice on phone calls, it completely misunderstood the prompt:
Above: ChatGPT produces a response to a colleague, asking him to lower his voice during phone calls. (Washington Post illustration; OpenAI)
Team communications:
We also tested whether ChatGPT could generate team updates if we fed it key points that needed to be communicated.
Our initial tests once again produced suitable answers, though they were formulaic and somewhat monotone. However, when we specified an “excited” tone, the wording became more casual and included exclamation marks. But each memo sounded very similar even after changing the prompt.
“It's both the structure of the sentence, but more so the connection of the ideas,” Rahman said. “It’s very logical and formulaic … it resembles a high school essay.”
Like before, it made assumptions when it lacked the necessary information. It became problematic when it didn’t know which pronouns to use for my colleague — an error that could signal to colleagues that either I didn’t write the memo or that I don’t know my team members very well.
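For readers curious what such a request looks like in practice, here is a minimal sketch of generating a team update with a specified tone, assuming the OpenAI Python client (version 1.x); the model name, key points and prompt wording are illustrative and are not the prompts used in the tests described above.

    # Minimal sketch: drafting a team update with a specified tone.
    # Assumes the OpenAI Python client (v1.x); model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    key_points = [
        "Project launch moved to next Friday",
        "QA found two blocking bugs; fixes are in review",
        "Please submit demo feedback by Wednesday",
    ]

    prompt = (
        "Write a short team update covering these points, in an excited tone:\n- "
        + "\n- ".join(key_points)
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; substitute as needed
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)  # a draft, not a finished memo

As the tests above suggest, any draft produced this way still needs a careful human read before it is sent.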
Self-assessment reports:
Writing self-assessment reports at the end of the year can cause dread and anxiety for some workers, resulting in reviews that sell them short.
Feeding ChatGPT clear accomplishments, including key data points, led to a rave review of myself. The first attempt was problematic, as the initial prompt asked for a self-assessment for “Danielle Abril” rather than for “me.” This led to a third-person review that sounded like it came from Sesame Street’s Elmo.
Switching the prompt to ask for a review for “me” and “my” accomplishments led to complimenting phrases like “I consistently demonstrated a strong ability,” “I am always willing to go the extra mile,” “I have been an asset to the team,” and “I am proud of the contributions I have made.” It also included a nod to the future: “I am confident that I will continue to make valuable contributions.”
Some of the highlights were a bit generic, but overall, it was a beaming review that might serve as a good rubric. The bot produced similar results when asked to write cover letters. However, ChatGPT did have one major flub: It incorrectly assumed my job title.
Takeaways: So was ChatGPT helpful for common work tasks?
It helped, but sometimes its errors caused more work than doing the task manually.
ChatGPT served as a great starting point in most cases, providing helpful verbiage and initial ideas. But it also produced responses with errors, factually incorrect information, excess words, plagiarism and miscommunication.
“I can see it being useful … but only insofar as the user is willing to check the output,” Andreas said. “It’s not good enough to let it off the rails and send emails to your colleagues.”
[End of Article]
Impact of Artificial Intelligence on the Workplace
- YouTube Video: The Future of Your Job in the Age of AI | Robots & Us | WIRED
- YouTube Video: Future of Skills: Jobs in 2030
- YouTube Video: 14 Skills That Will Be In High Demand in 10 Years
The impact of artificial intelligence on workers includes both applications to improve worker safety and health, and potential hazards that must be controlled.
One potential application is using AI to eliminate hazards by removing humans from hazardous situations that involve risk of stress, overwork, or musculoskeletal injuries.
Predictive analytics may also be used to identify conditions that may lead to hazards such as fatigue, repetitive strain injuries, or toxic substance exposure, leading to earlier interventions.
Another is to streamline workplace safety and health workflows through automating repetitive tasks, enhancing safety training programs through virtual reality, or detecting and reporting near misses.
When used in the workplace, AI also presents the possibility of new hazards. These may arise from machine learning techniques leading to unpredictable behavior and inscrutability in their decision-making, or from cybersecurity and information privacy issues.
Many hazards of AI are psychosocial due to its potential to cause changes in work organization. These include changes in the skills required of workers, as well as:
- increased monitoring leading to micromanagement,
- algorithms unintentionally or intentionally mimicking undesirable human biases,
- and assigning blame for machine errors to the human operator instead.
AI may also lead to physical hazards in the form of human–robot collisions, and ergonomic risks of control interfaces and human–machine interactions. Hazard controls include cybersecurity and information privacy measures, communication and transparency with workers about data usage, and limitations on collaborative robots.
From a workplace safety and health perspective, only "weak" or "narrow" AI that is tailored to a specific task is relevant, as there are many examples that are currently in use or expected to come into use in the near future. "Strong" or "general" AI is not expected to be feasible in the near future, and discussion of its risks is within the purview of futurists and philosophers rather than industrial hygienists.
Certain digital technologies are predicted to result in job losses. In recent years, the adoption of modern robotics has led to net employment growth. However, many businesses anticipate that automation, or the use of robots, will result in job losses in the future.
This is especially true for companies in Central and Eastern Europe. Other digital technologies, such as platforms or big data, are projected to have a more neutral impact on employment.
Health and safety applications:
In order for any potential AI health and safety application to be adopted, it requires acceptance by both managers and workers. For example, worker acceptance may be diminished by concerns about information privacy, or from a lack of trust and acceptance of the new technology, which may arise from inadequate transparency or training.
Alternatively, managers may emphasize increases in economic productivity rather than gains in worker safety and health when implementing AI-based systems.
Eliminating hazardous tasks:
AI may increase the scope of work tasks where a worker can be removed from a situation that carries risk. In a sense, while traditional automation can replace the functions of a worker's body with a robot, AI effectively replaces the functions of their brain with a computer. Hazards that can be avoided include stress, overwork, musculoskeletal injuries, and boredom.
This can expand the range of affected job sectors into white-collar and service-sector jobs such as medicine, finance, and information technology. As an example, call center work carries extensive health and safety risks due to its repetitive and demanding nature and its high rates of micro-surveillance. AI-enabled chatbots lower the need for humans to perform the most basic call center tasks.
Analytics to reduce risk:
Machine learning is used for people analytics to make predictions about worker behavior to assist management decision-making, such as hiring and performance assessment. These could also be used to improve worker health.
The analytics may be based on inputs such as online activities, monitoring of communications, location tracking, and voice analysis and body language analysis of filmed interviews. For example, sentiment analysis may be used to spot fatigue to prevent overwork. Decision support systems have a similar ability to be used to, for example, prevent industrial disasters or make disaster response more efficient.
For manual material handling workers, predictive analytics and artificial intelligence may be used to reduce musculoskeletal injury. Traditional guidelines are based on statistical averages and are geared towards anthropometrically typical humans. The analysis of large amounts of data from wearable sensors may allow real-time, personalized calculation of ergonomic risk and fatigue management, as well as better analysis of the risk associated with specific job roles.
Wearable sensors may also enable earlier intervention against exposure to toxic substances than is possible with area or breathing zone testing on a periodic basis. Furthermore, the large data sets generated could improve workplace health surveillance, risk assessment, and research.
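As a purely illustrative sketch of the idea (not any real product or standard), a personalized risk flag might combine a few wearable-sensor readings into a single score and trigger an intervention when it climbs too high; the features, weights and thresholds below are invented for the example.

    # Toy, illustrative ergonomic-risk flag computed from wearable-sensor readings.
    # Features, weights and thresholds are invented and not from any standard.
    from dataclasses import dataclass

    @dataclass
    class SensorWindow:
        trunk_flexion_deg: float   # average trunk flexion over the window
        lifts_per_minute: float    # lifting frequency estimated from accelerometer
        heart_rate_bpm: float      # rough proxy for physical strain

    def risk_score(w: SensorWindow) -> float:
        """Combine normalized readings into a 0..1 risk score (toy weighting)."""
        posture = min(w.trunk_flexion_deg / 60.0, 1.0)   # 60 deg treated as maximum
        pace = min(w.lifts_per_minute / 10.0, 1.0)       # 10 lifts/min treated as maximum
        strain = min(max(w.heart_rate_bpm - 90, 0) / 60.0, 1.0)
        return 0.4 * posture + 0.4 * pace + 0.2 * strain

    window = SensorWindow(trunk_flexion_deg=45, lifts_per_minute=8, heart_rate_bpm=120)
    if risk_score(window) > 0.7:
        print("Elevated ergonomic risk: suggest a rest break or task rotation")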
Streamlining safety and health workflows:
AI can also be used to make the workplace safety and health workflow more efficient. One example is coding of workers' compensation claims, which are submitted in a prose narrative form and must manually be assigned standardized codes. AI is being investigated to perform this task faster, more cheaply, and with fewer errors.
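The general technique being investigated here is supervised text classification. A minimal sketch, assuming scikit-learn and using invented narratives and cause codes, might look like the following; a real system would be trained on thousands of historically coded claims.

    # Sketch of the general technique (supervised text classification) only,
    # not a production claim-coding system. Narratives and codes are invented.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    narratives = [
        "worker slipped on wet floor and bruised hip",
        "repetitive lifting of boxes caused lower back strain",
        "splash of cleaning solvent irritated eyes",
        "fell from step ladder while stocking shelves",
    ]
    codes = ["FALL", "OVEREXERTION", "CHEMICAL", "FALL"]  # standardized cause codes

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(narratives, codes)  # real systems train on large labeled claim sets

    print(model.predict(["employee strained shoulder moving pallets"]))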
AI‐enabled virtual reality systems may be useful for safety training for hazard recognition.
Artificial intelligence may be used to more efficiently detect near misses. Reporting and analysis of near misses are important in reducing accident rates, but they are often underreported because they are not noticed by humans, or are not reported by workers due to social factors.
Hazards:
There are several broad aspects of AI that may give rise to specific hazards. The risks depend on implementation rather than the mere presence of AI.
Systems using sub-symbolic AI such as machine learning may behave unpredictably and are more prone to inscrutability in their decision-making. This is especially true if a situation is encountered that was not part of the AI's training dataset, and is exacerbated in environments that are less structured.
Undesired behavior may also arise from flaws in the system's perception (arising either from within the software or from sensor degradation), knowledge representation and reasoning, or from software bugs. They may arise from improper training, such as a user applying the same algorithm to two problems that do not have the same requirements.
Machine learning applied during the design phase may have different implications than that applied at runtime. Systems using symbolic AI are less prone to unpredictable behavior.
The use of AI also increases cybersecurity risks relative to platforms that do not use AI, and information privacy concerns about collected data may pose a hazard to workers.
Psychosocial:
Psychosocial hazards are those that arise from the way work is designed, organized, and managed, or its economic and social contexts, rather than arising from a physical substance or object. They cause not only psychiatric and psychological outcomes such as occupational burnout, anxiety disorders, and depression, but they can also cause physical injury or illness such as cardiovascular disease or musculoskeletal injury.
Many hazards of AI are psychosocial in nature due to its potential to cause changes in work organization, in terms of increasing complexity and interaction between different organizational factors. However, psychosocial risks are often overlooked by designers of advanced manufacturing systems.
Changes in work practices:
AI is expected to lead to changes in the skills required of workers, requiring training of existing workers, flexibility, and openness to change. The requirement for combining conventional expertise with computer skills may be challenging for existing workers.
Over-reliance on AI tools may lead to deskilling of some professions.
Increased monitoring may lead to micromanagement and thus to stress and anxiety. A perception of surveillance may also lead to stress. Controls for these include consultation with worker groups, extensive testing, and attention to introduced bias.
Wearable sensors, activity trackers, and augmented reality may also lead to stress from micromanagement, both for assembly line workers and gig workers. Gig workers also lack the legal protections and rights of formal workers.
There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours.
Bias:
Main article: Algorithmic bias
Algorithms trained on past decisions may mimic undesirable human biases, for example, past discriminatory hiring and firing practices. Information asymmetry between management and workers may lead to stress, if workers do not have access to the data or algorithms that are the basis for decision-making.
In addition to building a model with inadvertently discriminatory features, intentional discrimination may occur through designing metrics that covertly result in discrimination through correlated variables in a non-obvious way.
In complex human‐machine interactions, some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead.
Physical:
Physical hazards in the form of human–robot collisions may arise from robots using AI, especially collaborative robots (cobots). Cobots are intended to operate in close proximity to humans, which makes it impossible to apply the common hazard control of isolating the robot behind fences or other barriers, as is widely done for traditional industrial robots.
Automated guided vehicles are a type of cobot that as of 2019 are in common use, often as forklifts or pallet jacks in warehouses or factories. For cobots, sensor malfunctions or unexpected work environment conditions can lead to unpredictable robot behavior and thus to human–robot collisions.
Self-driving cars are another example of AI-enabled robots. In addition, the ergonomics of control interfaces and human–machine interactions may give rise to hazards.
Hazard controls:
AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions, as well as information privacy measures.
Communication and transparency with workers about data usage is a control for psychosocial hazards arising from security and privacy issues.
Proposed best practices for employer‐sponsored worker monitoring programs include:
- using only validated sensor technologies;
- ensuring voluntary worker participation;
- ceasing data collection outside the workplace;
- disclosing all data uses;
- and ensuring secure data storage.
For industrial cobots equipped with AI‐enabled sensors, the International Organization for Standardization (ISO) recommended:
- safety‐related monitored stopping controls;
- human hand guiding of the cobot;
- speed and separation monitoring controls (a simplified sketch follows this list);
- and power and force limitations.
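As a simplified illustration of the speed-and-separation idea referenced above (not the ISO/TS 15066 calculation), a controller can be sketched as slowing or stopping the cobot whenever a person comes closer than a protective distance that grows with approach speed and stopping time; all numbers and the formula here are illustrative.

    # Simplified illustration of speed and separation monitoring.
    # The formula and numbers are illustrative, not the ISO/TS 15066 calculation.
    def protective_distance(human_speed_mps: float,
                            robot_speed_mps: float,
                            stop_time_s: float,
                            margin_m: float = 0.3) -> float:
        # Distance both parties may cover while the robot stops, plus a margin.
        return (human_speed_mps + robot_speed_mps) * stop_time_s + margin_m

    def command(separation_m: float, human_speed_mps: float,
                robot_speed_mps: float, stop_time_s: float) -> str:
        d_min = protective_distance(human_speed_mps, robot_speed_mps, stop_time_s)
        if separation_m <= d_min:
            return "protective stop"
        if separation_m <= 2 * d_min:
            return "reduced speed"
        return "normal operation"

    print(command(separation_m=1.2, human_speed_mps=1.6,
                  robot_speed_mps=0.5, stop_time_s=0.4))  # -> "reduced speed"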
Networked AI-enabled cobots may share safety improvements with each other.
Human oversight is another general hazard control for AI.
Risk management:
Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase.
Workplace health surveillance, the collection and analysis of health data on workers, is challenging for AI because labor data are often reported in aggregate, do not provide breakdowns between different types of work, and focus on economic data such as wages and employment rates rather than the skill content of jobs.
Proxies for skill content include educational requirements and classifications of routine versus non-routine, and cognitive versus physical jobs. However, these may still not be precise enough to distinguish occupations that have distinct impacts from AI.
The United States Department of Labor's Occupational Information Network is an example of a database with a detailed taxonomy of skills. Additionally, data are often reported on a national level, while there is much geographical variation, especially between urban and rural areas.
Standards and regulation:
Main article: Regulation of artificial intelligence
As of 2019, ISO was developing a standard on the use of metrics and dashboards (information displays presenting company metrics for managers) in workplaces. The standard is planned to include guidelines for both gathering data and displaying it in a viewable and useful manner.
In the European Union, the General Data Protection Regulation, while oriented towards consumer data, is also relevant for workplace data collection. Data subjects, including workers, have "the right not to be subject to a decision based solely on automated processing".
Other relevant EU directives include the Machinery Directive (2006/42/EC), the Radio Equipment Directive (2014/53/EU), and the General Product Safety Directive (2001/95/EC).
AI and Commonsense Knowledge
- YouTube Video: Gary Marcus on Artificial Intelligence and Common Sense
- YouTube Video: Computers with common sense | Doug Lenat | TEDxYouth@Austin
- YouTube Video: The Power of Common Sense: An AI vs Common Sense Tale
Above Graph: The Curious Case of Commonsense Intelligence
AUTHOR: Yejin Choi
"Commonsense intelligence is a long-standing puzzle in AI. Despite considerable advances in deep learning, AI continues to be narrow and brittle due to its lack of common sense. Why is common sense so trivial for humans but so hard for machines?
In this essay, I map the twists and turns in recent research adventures toward commonsense AI. As we will see, the latest advances on common sense are riddled with new, potentially counterintuitive perspectives and questions.
In particular, I discuss the significance of language for modeling intuitive reasoning, the fundamental limitations of logic formalisms despite their intellectual appeal, the case for on-the-fly generative reasoning through language, the continuum between knowledge and reasoning, and the blend between symbolic and neural knowledge representations.
Click here for full article.
___________________________________________________________________________
Commonsense knowledge in artificial intelligence (Wikipedia)
In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", or "Cows say moo", that all humans are expected to know. It is currently an unsolved problem in Artificial General Intelligence.
The first AI program to address common sense knowledge was Advice Taker in 1959 by John McCarthy.
Commonsense knowledge can underpin a commonsense reasoning process, to attempt inferences such as "You might bake a cake because you want people to eat the cake."
A natural language processing process can be attached to the commonsense knowledge base to allow the knowledge base to attempt to answer questions about the world.
Common sense knowledge also helps to solve problems in the face of incomplete information. Using widely held beliefs about everyday objects, or common sense knowledge, AI systems make common sense assumptions or default assumptions about the unknown similar to the way people do.
In an AI system or in English, this is expressed as "Normally P holds", "Usually P" or "Typically P so Assume P". For example, if we know the fact "Tweety is a bird", because we know the commonly held belief about birds, "typically birds fly," without knowing anything else about Tweety, we may reasonably assume the fact that "Tweety can fly."
As more knowledge of the world is discovered or learned over time, the AI system can revise its assumptions about Tweety using a truth maintenance process. If we later learn that "Tweety is a penguin" then truth maintenance revises this assumption because we also know "penguins do not fly".
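A minimal sketch of this default-and-retract pattern, using an invented string representation rather than a real truth maintenance system:

    # Default reasoning with belief revision, as in the Tweety example above.
    # The representation is invented for illustration only.
    facts = {"bird(Tweety)"}

    def beliefs(facts: set) -> set:
        """Apply the default 'typically birds fly', then retract it for any
        individual also known to be a penguin (the exception)."""
        derived = set(facts)
        for f in facts:
            if f.startswith("bird("):
                derived.add("can_fly(" + f[len("bird("):-1] + ")")
        for f in facts:
            if f.startswith("penguin("):
                derived.discard("can_fly(" + f[len("penguin("):-1] + ")")
        return derived

    print(beliefs(facts))            # includes can_fly(Tweety)
    facts.add("penguin(Tweety)")     # new information: Tweety is a penguin
    print(beliefs(facts))            # the can_fly(Tweety) assumption is retracted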
Commonsense reasoning:
Main article: Commonsense reasoning
Commonsense reasoning simulates the human ability to use commonsense knowledge to make presumptions about the type and essence of ordinary situations they encounter every day, and to change their "minds" should new information come to light. This includes time, missing or incomplete information and cause and effect.
The ability to explain cause and effect is an important aspect of explainable AI. Truth maintenance algorithms automatically provide an explanation facility because they create elaborate records of presumptions.
Compared with humans, all existing computer programs that attempt human-level AI perform extremely poorly on modern "commonsense reasoning" benchmark tests such as the Winograd Schema Challenge.
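For a sense of what such benchmarks test, here is a classic Winograd schema represented as data; resolving the pronoun requires commonsense knowledge about councilmen and demonstrators, not grammar alone.

    # A classic Winograd schema, shown as data. Swapping one verb flips the
    # correct referent of "they", which syntax alone cannot determine.
    schema = {
        "sentence": "The city councilmen refused the demonstrators a permit "
                    "because they {verb} violence.",
        "pronoun": "they",
        "candidates": ["the city councilmen", "the demonstrators"],
        "variants": {"feared": "the city councilmen",
                     "advocated": "the demonstrators"},
    }

    for verb, answer in schema["variants"].items():
        print(schema["sentence"].format(verb=verb), "->", answer)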
The problem of attaining human-level competency at "commonsense knowledge" tasks is considered to probably be "AI complete" (that is, solving it would require the ability to synthesize a fully human-level intelligence), although some oppose this notion and believe compassionate intelligence is also required for human-level AI.
Common sense reasoning has been applied successfully in more limited domains such as natural language processing and automated diagnosis or analysis.
Commonsense knowledge base construction:
Compiling comprehensive knowledge bases of commonsense assertions (CSKBs) is a long-standing challenge in AI research. From early expert-driven efforts like Cyc and WordNet, significant advances were achieved via the crowdsourced Open Mind Common Sense project, which led to the crowdsourced ConceptNet KB.
Several approaches have attempted to automate CSKB construction, most notably via text mining (WebChild, Quasimodo, TransOMCS, Ascent), as well as by harvesting assertions directly from pre-trained language models (AutoTOMIC). These resources are significantly larger than ConceptNet, though automated construction generally makes them of somewhat lower quality.
Challenges also remain in the representation of commonsense knowledge: most CSKB projects follow a triple data model, which is not necessarily best suited for breaking down more complex natural-language assertions. A notable exception is GenericsKB, which applies no further normalization to sentences but retains them in full.
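A minimal sketch of that triple data model, using assertions drawn from the ConceptNet examples listed under "Data" below; the tiny in-memory store and query helper are illustrative only.

    # Commonsense assertions as (subject, relation, object) triples, using
    # relations and examples from the ConceptNet list below. Illustrative only.
    triples = [
        ("RV", "IsA", "vehicle"),
        ("rabbit", "HasA", "tail"),
        ("cake", "CreatedBy", "baking"),
        ("cake", "ReceivesAction", "eaten"),
        ("cook", "AtLocation", "restaurant"),
    ]

    def query(relation, subject=None):
        """Return matching triples, optionally restricted to one subject."""
        return [t for t in triples
                if t[1] == relation and (subject is None or t[0] == subject)]

    print(query("CreatedBy", subject="cake"))  # [('cake', 'CreatedBy', 'baking')]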
Applications:
Around 2013, MIT researchers developed BullySpace, an extension of the commonsense knowledge base ConceptNet, to catch taunting social media comments. BullySpace included over 200 semantic assertions based around stereotypes, to help the system infer that comments like "Put on a wig and lipstick and be who you really are" are more likely to be an insult if directed at a boy than a girl.
ConceptNet has also been used by chatbots and by computers that compose original fiction. At Lawrence Livermore National Laboratory, common sense knowledge was used in an intelligent software agent to detect violations of a comprehensive nuclear test ban treaty.
Data:
As an example, as of 2012 ConceptNet includes 21 language-independent relations, among them:
- IsA (An "RV" is a "vehicle")
- UsedFor
- HasA (A "rabbit" has a "tail")
- CapableOf
- Desires
- CreatedBy ("cake" can be created by "baking")
- PartOf
- Causes
- LocatedNear
- AtLocation (Somewhere a "Cook" can be at a "restaurant")
- DefinedAs
- SymbolOf (X represents Y)
- ReceivesAction ("cake" can be "eaten")
- HasPrerequisite (X can't do Y unless A does B)
- MotivatedByGoal (You would "bake" because you want to "eat")
- CausesDesire ("baking" makes you want to "follow recipe")
- MadeOf
- HasFirstSubevent (The first thing required when you're doing X is for entity Y to do Z)
- HasSubevent ("eat" has subevent "swallow")
- HasLastSubevent
Commonsense knowledge bases:
- Cyc
- Open Mind Common Sense (data source) and ConceptNet (datastore and NLP engine)
- Quasimodo
- Webchild
- TupleKB
- True Knowledge
- Graphiq
- Ascent++
See also:
- Common sense
- Commonsense reasoning
- Linked data and the Semantic Web
- Default Reasoning
- Truth Maintenance or Reason Maintenance
- Ontology
- Artificial General Intelligence
[Your WebHost: I thought of this movie while working on the Artificial Intelligence (AI) web page, as this 1968 movie was way ahead of its time in AI applications!]
___________________________________________________________________________
HAL 9000 (2001: A Space Odyssey Film)
- YouTube Video: 2001: A Space Odyssey (1968) - Hello HAL, do you read me?
- YouTube Video: 2001: A Space Odyssey - Hal's Watching
- YouTube Video: HAL 9000: "I'm sorry Dave, I'm afraid I can't do that"
HAL 9000 is a fictional artificial intelligence character and the main antagonist in Arthur C. Clarke's Space Odyssey series (see below about the film).
First appearing in the 1968 film 2001: A Space Odyssey, HAL (Heuristically programmed ALgorithmic computer) is a sentient artificial general intelligence computer that controls the systems of the Discovery One spacecraft and interacts with the ship's astronaut crew.
While part of HAL's hardware is shown toward the end of the film, he is mostly depicted as a camera lens containing a red or yellow dot, with such units located throughout the ship. HAL 9000 is voiced by Douglas Rain in the two feature film adaptations of the Space Odyssey series. HAL speaks in a soft, calm voice and a conversational manner, in contrast to the crewmen, David Bowman and Frank Poole.
In the film, HAL became operational on 12 January 1992 at the HAL Laboratories in Urbana, Illinois as production number 3. The activation year was 1991 in earlier screenplays and changed to 1997 in Clarke's novel written and released in conjunction with the movie.
In addition to maintaining the Discovery One spacecraft systems during the interplanetary mission to Jupiter (or Saturn in the novel), HAL has been shown to be capable of:
- speech,
- speech recognition,
- facial recognition,
- natural language processing,
- lip reading,
- art appreciation,
- interpreting emotional behaviours,
- automated reasoning,
- spacecraft piloting,
- and playing chess.
Technology:
Main article: Technologies in 2001: A Space Odyssey
The scene in which HAL's consciousness degrades was inspired by Clarke's memory of a speech synthesis demonstration by physicist John Larry Kelly, Jr., who used an IBM 704 computer to synthesize speech. Kelly's voice recorder synthesizer vocoder recreated the song "Daisy Bell", with musical accompaniment from Max Mathews.
HAL's capabilities, like all the technology in 2001, were based on the speculation of respected scientists. Marvin Minsky, director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and one of the most influential researchers in the field, was an adviser on the film set.
In the mid-1960s, many computer scientists in the field of artificial intelligence were optimistic that machines with HAL's capabilities would exist within a few decades. For example, AI pioneer Herbert A. Simon at Carnegie Mellon University had predicted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do."
Click here for more about HAL 9000.
___________________________________________________________________________
2001: A Space Odyssey (1968 film)
See also: Technologies in 2001: A Space Odyssey
2001: A Space Odyssey is a 1968 epic science fiction film produced and directed by Stanley Kubrick. The screenplay was written by Kubrick and science fiction author Arthur C. Clarke, and was inspired by Clarke's 1951 short story "The Sentinel" and other short stories by Clarke.
Clarke also published a novelisation of the film, in part written concurrently with the screenplay, after the film's release. The film stars Keir Dullea, Gary Lockwood, William Sylvester, and Douglas Rain, and follows a voyage by astronauts, scientists and the sentient supercomputer HAL to Jupiter to investigate an alien monolith.
The film is noted for its scientifically accurate depiction of space flight, pioneering special effects, and ambiguous imagery. Kubrick avoided conventional cinematic and narrative techniques; dialogue is used sparingly, and there are long sequences accompanied only by music.
The soundtrack incorporates numerous works of classical music, by composers including Richard Strauss, Johann Strauss II, Aram Khachaturian, and György Ligeti.
The film received diverse critical responses, ranging from those who saw it as darkly apocalyptic to those who saw it as an optimistic reappraisal of the hopes of humanity.
Critics noted its exploration of themes such as human evolution, technology, artificial intelligence, and the possibility of extraterrestrial life. It was nominated for four Academy Awards, winning Kubrick the award for his direction of the visual effects.
The film is now widely regarded as one of the greatest and most influential films ever made.
In 1991, it was deemed "culturally, historically, or aesthetically significant" by the United States Library of Congress and selected for preservation in the National Film Registry.
Click here for more about the 1968 Film "2001: a Space Odyssey".
Artificial General Intelligence
- YouTube Video: Artificial General Intelligence in 6 Minutes • Danny Lange • GOTO 2020
- YouTube Video: The Dominance of Artificial General Intelligence - AGI The Final Chapter
- YouTube Video: Google's Terrifying Path to Artificial General Intelligence (Pathways AI)
* -- Continuation of above Article:
"We analyzed 96 artificial general intelligence (AGI) startups. Olbrain, DeepBrainz AI, GoodAI, Singularity Studio & KEOTIC develop 5 top solutions to watch out for."
Artificial general intelligence (Wikipedia)
Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies.
AGI is also called strong AI, full AI, or general intelligent action, although some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness.
Strong AI contrasts with weak AI (or narrow AI), which is not intended to have general cognitive abilities; rather, weak AI is any program that is designed to solve exactly one problem. (Academic sources reserve "weak AI" for programs that do not experience consciousness or do not have a mind in the same sense people do.) A 2020 survey identified 72 active AGI R&D projects spread across 37 countries.
Characteristics:
Main article: Artificial intelligence
Various criteria for intelligence have been proposed (most famously the Turing test) but to date, there is no definition that satisfies everyone.
Intelligence traits:
However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:
- reason, use strategy, solve puzzles, and make judgments under uncertainty;
- represent knowledge, including common sense knowledge;
- plan;
- learn;
- communicate in natural language;
- input as the ability to sense (e.g. see, hear, etc.), and
- output as the ability to act (e.g. move and manipulate objects, change own location to explore, etc.)
Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in) and autonomy.
Computer based systems that exhibit many of these capabilities do exist (e.g. see:
- computational creativity,
- automated reasoning,
- decision support system,
- robot,
- evolutionary computation,
- intelligent agent),
Mathematical formalisms:
A mathematically precise specification of AGI was proposed by Marcus Hutter in 2000. Named AIXI, the proposed agent maximizes “the ability to satisfy goals in a wide range of environments”. This type of AGI, characterized by proof of the ability to maximise a mathematical definition of intelligence rather than exhibit human-like behavior, is also called universal artificial intelligence.
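The article does not reproduce the definition itself; for readers who want its shape, the Legg–Hutter universal intelligence measure that AIXI is built to maximize is usually written (in LaTeX notation) as

\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu (so simpler environments carry more weight), and V^{\pi}_{\mu} is the expected cumulative reward the policy \pi earns in \mu. The measure is defined relative to a fixed universal Turing machine, which is exactly the dependence that the 2015 result described next turns on.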
In 2015 Jan Leike and Marcus Hutter showed that "Legg-Hutter intelligence is measured with respect to a fixed UTM. AIXI is the most intelligent policy if it uses the same UTM", a result which "undermines all existing optimality properties for AIXI".
This problem stems from AIXI's use of compression as a proxy for intelligence, which requires that cognition take place in isolation from the environment in which goals are pursued. This formalises a philosophical position known as Mind–body dualism. There is arguably more evidence in support of enactivism -- the notion that cognition takes place within the environment in which goals are pursued.
Subsequently, Michael Timothy Bennett formalized enactive cognition (see enactivism) and identified an alternative proxy for intelligence called weakness. The accompanying experiments (comparing weakness and compression) and mathematical proofs showed that maximizing weakness results in the optimal "ability to complete a wide range of tasks" or equivalently "ability to generalize" (thus maximising intelligence by either definition).
This also showed that if enactivism holds and Mind–body dualism does not, then compression is not necessary or sufficient for intelligence, calling into question widely held views on intelligence (see also Hutter Prize).
Regardless of the position taken with respect to cognition, whether this type of AGI exhibits human-like behavior (such as the use of natural language) would depend on many factors, for example the manner in which the agent is embodied, or whether it has a reward function that closely approximates human primitives of cognition like hunger, pain and so forth.
Tests for confirming human-level AGI:
The following tests to confirm human-level AGI have been considered:
- The Turing Test (Turing): A machine and a human both converse unseen with a second human, who must evaluate which of the two is the machine; the machine passes the test if it can fool the evaluator a significant fraction of the time. (Turing does not prescribe what should qualify as intelligence, only that knowing it is a machine should disqualify it.) A minimal sketch of this protocol appears after this list.
- The Coffee Test (Wozniak): A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.
- The Robot College Student Test (Goertzel): A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.
- The Employment Test (Nilsson): A machine performs an economically important job at least as well as humans in the same job.
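As an illustration only, here is a minimal Python sketch of the Turing Test protocol. The two bots, the single canned question, and the random judge are hypothetical stand-ins chosen for this sketch, not anything prescribed by Turing; the point is the structure of a blind trial, where a machine "passes" when judges identify it no better than chance over many trials.

import random

def machine_reply(prompt: str) -> str:
    # Hypothetical stand-in for the machine under test: a trivial canned bot.
    return "That's an interesting question. What do you think?"

def human_reply(prompt: str) -> str:
    # Hypothetical stand-in for the human foil: a scripted answer.
    return f"When you ask '{prompt}', I just answer from my own experience."

def run_trial(questions, judge) -> bool:
    # One blind trial: the judge sees transcripts from respondents "A" and "B"
    # (randomly assigned to machine and human) and must name the machine.
    # Returns True if the judge identifies the machine correctly.
    assignment = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        assignment = {"A": human_reply, "B": machine_reply}
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in assignment.items()}
    truth = "A" if assignment["A"] is machine_reply else "B"
    return judge(transcripts) == truth

# A judge with no strategy guesses at random; a stronger judge would exploit
# the transcripts. The machine "passes" when accuracy stays near 0.5.
naive_judge = lambda transcripts: random.choice(["A", "B"])
results = [run_trial(["What did you have for breakfast?"], naive_judge)
           for _ in range(1000)]
print("judge accuracy:", sum(results) / len(results))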
AI-complete problems:
Main article: AI-complete
There are many individual problems that may require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence).
All of these problems need to be solved simultaneously in order to reach human-level machine performance.
A problem is informally known as "AI-complete" or "AI-hard", if solving it is equivalent to the general aptitude of human intelligence, or strong AI, and is beyond the capabilities of a purpose-specific algorithm.
AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.
AI-complete problems cannot be solved with current computer technology alone, and require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do; and for computer security to repel brute-force attacks.
History:
Classical AI:
Main articles: History of artificial intelligence and Symbolic AI
Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades.
AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001 (See preceding article above).
AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". Several classical AI projects, such as Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project, were specifically directed at AGI.
However, in the early 1970s and then again in the early 90s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".
As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".
In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.
For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and avoided any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]".
Narrow AI research:
Main article: Artificial intelligence
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where it could produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning.
These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Development in this field is still considered an emerging trend, with a mature stage expected to be more than ten years away.
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems.
Hans Moravec wrote in 1988: "I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs.
Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: "The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)".
Modern artificial general intelligence research:
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002.
AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by Xiamen University's Artificial Brain Laboratory and OpenCog.
The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers.
As yet, however, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. A small number of computer scientists are nonetheless active in AGI research, and many of this group contribute to a series of AGI conferences. The research is extremely diverse and often pioneering in nature.
Timescales: In the introduction to his 2006 book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the 2007 consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.
However, mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years for modern and historical predictions alike. It was later found that the dataset listed some experts as non-experts and vice versa.
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.
In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to classify as a narrow AI system.
In the same year Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API.
In 2022, DeepMind developed Gato, a "general-purpose" system capable of performing more than 600 different tasks.
Brain simulation:
Whole brain emulation:
Main article: Mind uploading
A popularly discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.
Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI.
Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.
Early Estimates:
For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood.
Estimates vary for an adult, ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS).
In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating-point operation" – a measure used to rate current supercomputers – then 10^16 "computations" would be equivalent to 10 petaFLOPS, achieved in 2011, while 10^18 was achieved in 2022.)
Kurzweil used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
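As a back-of-the-envelope check on the figures above, the short Python sketch below simply re-derives the comparisons from the numbers quoted in the text; the constants are those quoted estimates, not new data.

# Order-of-magnitude estimates quoted above.
neurons             = 1e11      # ~one hundred billion neurons
synapses_per_neuron = 7_000     # average connections per neuron
adult_synapse_range = (1e14, 5e14)

implied_synapses = neurons * synapses_per_neuron
print(f"implied synapse count: {implied_synapses:.0e}")   # ~7e14, same order as the adult range
print("quoted adult range:", adult_synapse_range)

brain_sups   = 1e14    # synaptic updates per second (simple switch model)
kurzweil_cps = 1e16    # Kurzweil's 1997 hardware figure, computations per second
petaflops    = 1e15    # one petaFLOPS, in floating-point operations per second

# Reading one "computation" as one floating-point operation:
print("Kurzweil figure in petaFLOPS:", kurzweil_cps / petaflops)      # 10 petaFLOPS
print("headroom over the SUPS estimate:", kurzweil_cps / brain_sups)  # factor of 100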
___________________________________________________________________________
The above illustration demonstrates estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises.
___________________________________________________________________________
Modelling the neurons in more detail:
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines.
The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for glial cells, which are known to play a role in cognitive processes.
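To make concrete how simple the assumed neuron model is, here is a minimal Python sketch of the standard artificial neuron: a weighted sum followed by a fixed squashing function. Everything the paragraph above mentions (molecular-scale chemistry, detailed cellular behaviour, glial cells) is absent by construction; the weights and inputs are arbitrary illustrative values.

import math

def artificial_neuron(inputs, weights, bias):
    # The whole model: a weighted sum of inputs followed by a fixed
    # nonlinearity. Membrane dynamics, neurotransmitters, dendritic
    # geometry and glial interaction are abstracted away entirely.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # logistic output in (0, 1)

print(artificial_neuron([0.2, 0.9, -0.4], [0.5, -1.2, 0.8], bias=0.1))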
Current research:
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures.
The Artificial Intelligence System project implemented non-real time simulations of a "brain" (with 10^11 neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.
The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 10^8 synapses in 2006.
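To put those two results in perspective, the sketch below uses only the numbers quoted above to compute how far the 2005 simulation ran from real time, and the connectivity of the simulated cortical column.

seconds_per_day = 24 * 3600
wall_clock_s    = 50 * seconds_per_day   # 50 days of computation
simulated_s     = 1                      # for 1 second of model time
processors      = 27

slowdown = wall_clock_s / simulated_s
print(f"slowdown factor: {slowdown:,.0f}x real time")                      # ~4,320,000x
print(f"processor-seconds per simulated second: {slowdown * processors:,.0f}")

# Blue Brain's 2006 neocortical column: ~10,000 neurons and 10^8 synapses.
print("synapses per neuron in the column:", 1e8 / 1e4)                     # ~10,000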
A longer-term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford. There have also been controversial claims to have simulated a cat brain. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.
Hans Moravec addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?". He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.
The actual complexity of modeling biological neurons has been explored in the OpenWorm project, which aimed at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1,000 cells in total). The animal's neural network had been well documented before the start of the project.
However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success.
Criticisms of simulation-based approaches:
A fundamental criticism of the simulated brain approach derives from embodied cognition where human embodiment is taken as an essential aspect of human intelligence.
Many researchers believe that embodiment is necessary to ground meaning. If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel proposes virtual embodiment (like in Second Life), but it is not yet known whether this would be sufficient.
Desktop computers using microprocessors capable of more than 10^9 cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005.
According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest no such simulation exists. There are several reasons for this:
- The neuron model seems to be oversimplified (see "Modelling the neurons in more detail" above).
- There is insufficient understanding of higher cognitive processes to establish accurately what the brain's neural activity (observed using techniques such as functional magnetic resonance imaging) correlates with.
- Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.
- The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. The Extended Mind thesis formalizes the philosophical concept, and research into cephalopods has demonstrated clear examples of a decentralized system.
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses. Another estimate is 86 billion neurons of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum. Glial cell synapses are currently unquantified but are known to be extremely numerous.
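The sketch below contrasts the two whole-brain estimates quoted above (the constants are copied from the text). Note how differently they distribute neurons, and how far the implied synapse-per-neuron figure sits from the 7,000 average cited earlier, which is part of why the scale is described as not well constrained.

estimate_a = {"neurons": 100e9, "synapses": 100e12}
estimate_b = {"neurons": 86e9, "cortex": 16.3e9, "cerebellum": 69e9}

print(f"cortex share of neurons (estimate B):     {estimate_b['cortex'] / estimate_b['neurons']:.0%}")      # ~19%
print(f"cerebellum share of neurons (estimate B): {estimate_b['cerebellum'] / estimate_b['neurons']:.0%}")  # ~80%
print("synapses per neuron (estimate A):", estimate_a["synapses"] / estimate_a["neurons"])                  # 1,000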
Philosophical perspective:
See also:
- Philosophy of artificial intelligence
- and Turing test
"Strong AI" as defined in philosophy:
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He wanted to distinguish between two different hypotheses about artificial intelligence:
- Strong AI hypothesis: An artificial intelligence system can "think", have "a mind" and "consciousness".
- Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.
The first one he called "strong" because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test – the behavior of a "weak AI" machine would be precisely identical to a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.
Mainstream AI is only interested in how a program behaves. According to Russell and Norvig, "as long as the program works, they don't care if you call it real or a simulation." If the program can behave as if it has a mind, then there's no need to know if it actually has mind – indeed, there would be no way to tell.
For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." Thus, for academic AI research, "Strong AI" and "AGI" are two very different things.
In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term "strong AI" to mean "human level artificial general intelligence". This is not the same as Searle's strong AI, unless you assume that consciousness is necessary for human-level AGI.
Academic philosophers such as Searle do not believe that is the case, and artificial intelligence researchers do not care.
Consciousness:
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in science fiction and the ethics of artificial intelligence:
- consciousness: To have subjective experience and thought.
- self-awareness: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.
- sentience: The ability to "feel" perceptions or emotions subjectively.
- sapience: The capacity for wisdom.
These traits have a moral dimension, because a machine with this form of strong AI may have rights, analogous to the rights of non-human animals. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI. Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity.
It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness?
It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine. It's also possible that it will become natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent.
Artificial consciousness research:
Main article: Artificial consciousness
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers regard research that investigates possibilities for implementing consciousness as vital. In an early effort Igor Aleksander argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.
Possible explanations for the slow progress of strong AI research:
See also:
- History of artificial intelligence § The problems,
- and History of artificial intelligence § Predictions (or "Where is HAL 9000?")
Since the launch of AI research in 1956, progress in creating machines skilled at intelligent action at the human level has slowed.
One basic potential explanation for this delay is that computers lack a sufficient scope of memory, processing power, or chip flexibility to accommodate computer-science-oriented and/or neuroscience-oriented platforms. In addition, the level of complexity involved in AI research likely also limits the progress of strong AI research.
Conceptual limitations are another possible reason for the slowness in AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. This means situating a strong AI in a sociocultural context where human-like AI derives from human-like experiences.
As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".
A fundamental paradox arising from this problem is that AI researchers have only been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking (Moravec's paradox).
The problem as described by David Gelernter is that some people assume thinking and reasoning are equivalent. The idea of whether thoughts and the creator of those thoughts are isolated individually or must be socially situated has intrigued AI researchers.
The problems encountered in AI research over the past decades have further impeded the progress of AGI research and development by generating a degree of distrust in the field. The failed predictions of success promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish optimism in the primary idea of creating human-level AI.
Although the waxing and waning progress of AI research has brought both improvement and disappointment, most investigators are optimistic about achieving the goal of AGI in the 21st century.
Other possible reasons have been proposed for the slow progress towards strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in the task of emulating the function of the human brain in computer hardware through initiatives like the Human Brain Project.
Many researchers tend to underestimate the uncertainty involved in future predictions of AI, and without taking issues like human brain modelling seriously, AGI researchers risk overlooking solutions to problematic questions.
However, Clocksin states that a conceptual limitation that may impede the progress of AI research is that AI researchers may be using the wrong techniques for computer programs and for the implementation of equipment. When AI researchers first began to aim for AGI, a main interest was to emulate and investigate human reasoning.
At the time, researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.
In response, the practice of abstraction, which people tend to redefine when working with a particular context in research, provides AI researchers with the option to concentrate on just a few concepts.
The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction operators has posed problems.
Another possible reason for the slowness in strong AI progress relates to the acknowledgement by many AI researchers that human heuristics is still vastly superior to computer performance.
Nonetheless, the specific functions that are programmed into increasingly powerful computers may be able to account for many of the requirements in heuristics that eventually allow AI to match human intelligence. Thus, while heuristics is not necessarily a fundamental barrier to achieving strong AI, it is widely agreed to be a challenge.
Finally, many AI researchers have debated whether or not machines should be created with emotions. There are no emotions in typical models of AI, and some researchers say programming emotions into machines allows them to have a mind of their own.
However, emotion sums up the experiences of humans because it allows them to remember those experiences. David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." Thus, just as this concern over emotion has posed problems for AI researchers, it is likely to continue to challenge the concept of strong AI as its research progresses.
Controversies and dangers:
Feasibility:
As of 2022, AGI remains speculative as no such system has yet been demonstrated. Opinions vary both on whether and when artificial general intelligence will arrive.
At one extreme, AI pioneer Herbert A. Simon speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true.
Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".
Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.
As such, the basic concern is whether or not strong AI is fundamentally achievable, even after centuries of effort. While most AI researchers believe strong AI can be achieved in the future, some individuals, like Hubert Dreyfus and Roger Penrose, deny the possibility of achieving strong AI.
One fundamental problem is that while humans are complex, they are not general intelligences. Various computer scientists, like John McCarthy, believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted.
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further considerations of current AGI progress can be found above under "Tests for confirming human-level AGI".
Yet it is worth noting that there is no scientific rigour in such predictions. Rodney Brooks notes the findings of a report by Stuart Armstrong and Kaj Sotala, of the Machine Intelligence Research Institute, that "over that 60 year time frame there is a strong bias towards predicting the arrival of human level AI as between 15 and 25 years from the time the prediction was made".
They also analyzed 95 predictions made between 1950 and the present on when human level AI will come about. They show that there is no difference between predictions made by experts and non-experts.
Potential threat to human existence:
Main article: Existential risk from artificial general intelligence
The thesis that AI poses an existential risk for humans, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking.
The most notable AI researcher to endorse the thesis is Stuart J. Russell, but many others, like Roman Yampolskiy and Alexey Turchin, also support the basic thesis of a potential threat to humanity.
Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial: "So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here–we'll leave the lights on?' Probably not–but this is more or less what is happening with AI."
A 2021 systematic review of the risks associated with AGI conducted by researchers from the Centre for Human Factors and Sociotechnical Systems of the University of the Sunshine Coast in Australia, while noting the paucity of data, found the following potential threats: "AGI removing itself from the control of human owners/managers, being given or developing unsafe goals, development of unsafe AGI, AGIs with poor ethics, morals and values; inadequate management of AGI, and existential risks".
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?
Solving the control problem is complicated by the AI arms race, which will almost certainly see the militarization and weaponization of AGI by more than one nation-state, i.e., resulting in AGI-enabled warfare, and in the case of AI misalignment, AGI-directed warfare, potentially against all humanity.
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."
Former Baidu Vice President and Chief Scientist Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."
See also:
Modelling the neurons in more detail:
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines.
The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for glial cells, which are known to play a role in cognitive processes.
Current research:
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures.
The Artificial Intelligence System project implemented non-real time simulations of a "brain" (with 1011 neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.
The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 108 synapses in 2006.
A longer-term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford. There have also been controversial claims to have simulated a cat brain. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.
Hans Moravec addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?". He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.
The actual complexity of modeling biological neurons has been explored in OpenWorm project that was aimed on complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network has been well documented before the start of the project.
However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet.
Criticisms of simulation-based approaches:
A fundamental criticism of the simulated brain approach derives from embodied cognition where human embodiment is taken as an essential aspect of human intelligence.
Many researchers believe that embodiment is necessary to ground meaning. If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel proposes virtual embodiment (like in Second Life), but it is not yet known whether this would be sufficient.
Desktop computers using microprocessors capable of more than 109 cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005.
According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest no such simulation exists. There are several reasons for this:
- The neuron model seems to be oversimplified (see next section).
- There is insufficient understanding of higher cognitive processes to establish accurately what the brain's neural activity (observed using techniques such as functional magnetic resonance imaging) correlates with.
- Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.
- The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. The Extended Mind thesis formalizes the philosophical concept, and research into cephalopods has demonstrated clear examples of a decentralized system.
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses. Another estimate is 86 billion neurons of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum. Glial cell synapses are currently unquantified but are known to be extremely numerous.
Philosophical perspective:
See also:
- Philosophy of artificial intelligence
- and Turing test "Strong AI" as defined in philosophy
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He wanted to distinguish between two different hypotheses about artificial intelligence:
- Strong AI hypothesis: An artificial intelligence system can "think", have "a mind" and "consciousness".
- Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.
The first one he called "strong" because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test – the behavior of a "weak AI" machine would be precisely identical to a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.
Mainstream AI is only interested in how a program behaves. According to Russell and Norvig, "as long as the program works, they don't care if you call it real or a simulation." If the program can behave as if it has a mind, then there's no need to know if it actually has mind – indeed, there would be no way to tell.
For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." Thus, for academic AI research, "Strong AI" and "AGI" are two very different things.
In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term "strong AI" to mean "human level artificial general intelligence". This is not the same as Searle's strong AI, unless you assume that consciousness is necessary for human-level AGI.
Academic philosophers such as Searle do not believe that is the case, and artificial intelligence researchers do not care.
Consciousness:
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in science fiction and the ethics of artificial intelligence:
- consciousness: To have subjective experience and thought.
- self-awareness: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.
- sentience: The ability to "feel" perceptions or emotions subjectively.
- sapience: The capacity for wisdom.
These traits have a moral dimension, because a machine with this form of strong AI may have rights, analogous to the rights of non-human animals. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI. Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity.
It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness?
It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine. It's also possible that it will become natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent.
Artificial consciousness research:
Main article: Artificial consciousness
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers regard research that investigates possibilities for implementing consciousness as vital. In an early effort Igor Aleksander argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.
Possible explanations for the slow progress of strong AI research:
See also:
- History of artificial intelligence § The problems,
- and History of artificial intelligence § Predictions (or "Where is HAL 9000?")
Since the launch of AI research in 1956, progress toward creating machines capable of intelligent action at the human level has been slow.
One basic potential explanation for this delay is that computers lack a sufficient scope of memory, processing power, or chip flexibility to accommodate computer-science-oriented and/or neuroscience-oriented platforms. In addition, the level of complexity involved in AI research likely also limits the progress of strong AI research.
Conceptual limitations are another possible reason for the slowness in AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. This means situating a strong AI in a sociocultural context where human-like AI derives from human-like experiences.
As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".
A fundamental paradox arising from this problem is that AI researchers have only been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking (Moravec's paradox).
As David Gelernter describes the problem, some people assume that thinking and reasoning are equivalent. The question of whether thoughts and the thinker of those thoughts can be treated in isolation or must be socially situated has intrigued AI researchers.
The problems encountered in AI research over the past decades have further impeded the progress of AGI research and development through generating a degree of distrust in the field. The failed predictions of success promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish optimism in the primary idea of creating human-level AI.
Although the waxing and waning progress of AI research has brought both improvement and disappointment, most investigators are optimistic about achieving the goal of AGI in the 21st century.
Other possible reasons have been proposed for the slow progress towards strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in the task of emulating the function of the human brain in computer hardware through initiatives like the Human Brain Project.
Many researchers tend to underestimate the uncertainty involved in predictions about AI's future, and without taking issues like human brain modelling seriously, AGI researchers risk overlooking solutions to difficult questions.
However, Clocksin states that a conceptual limitation that may impede the progress of AI research is that AI researchers may be using the wrong techniques for computer programs and for the implementation of equipment. When AI researchers first began to aim for AGI, a main interest was to emulate and investigate human reasoning.
At the time, researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.
In response, the practice of abstraction, which researchers tend to redefine when working within a particular context, gives AI researchers the option of concentrating on just a few concepts.
The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction operators has posed problems.
Another possible reason for the slow progress of strong AI relates to the acknowledgement by many AI researchers that human heuristics are still vastly superior to computer performance.
Nonetheless, the specific functions programmed into increasingly powerful computers may eventually account for many of the requirements of heuristics that would allow AI to match human intelligence. Thus, while heuristics are not necessarily a fundamental barrier to achieving strong AI, they are widely agreed to be a challenge.
Finally, many AI researchers have debated whether or not machines should be created with emotions. There are no emotions in typical models of AI, and some researchers say programming emotions into machines allows them to have a mind of their own.
However, emotion sums up the experiences of humans because it allows them to remember those experiences. David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." Thus, just as this concern over emotion has posed problems for AI researchers, it is likely to continue to challenge the concept of strong AI as its research progresses.
Controversies and dangers:
Feasibility:
As of 2022, AGI remains speculative as no such system has yet been demonstrated. Opinions vary both on whether and when artificial general intelligence will arrive.
At one extreme, AI pioneer Herbert A. Simon speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true.
Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".
Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.
As such, the basic concern is whether or not strong AI is fundamentally achievable, even after centuries of effort. While most AI researchers believe strong AI can be achieved in the future, some individuals, like Hubert Dreyfus and Roger Penrose, deny the possibility of achieving strong AI.
One fundamental problem is that while humans are complex, they are not general intelligences. Various computer scientists, like John McCarthy, believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted.
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further considerations of current AGI progress can be found above under Tests for confirming human-level AGI.
Yet it is worth noting that there is no scientific rigour in such predictions. Rodney Brooks notes the findings of a report by Stuart Armstrong and Kaj Sotala, of the Machine Intelligence Research Institute, that "over that 60 year time frame there is a strong bias towards predicting the arrival of human level AI as between 15 and 25 years from the time the prediction was made".
They also analyzed 95 predictions made between 1950 and the present on when human-level AI will come about, and found no difference between predictions made by experts and non-experts.
Potential threat to human existence:
Main article: Existential risk from artificial general intelligence
The thesis that AI poses an existential risk for humans, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking.
The most notable AI researcher to endorse the thesis is Stuart J. Russell, but many others, like Roman Yampolskiy and Alexey Turchin, also support the basic thesis of a potential threat to humanity.
Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial: "So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here–we'll leave the lights on?' Probably not–but this is more or less what is happening with AI."
A 2021 systematic review of the risks associated with AGI conducted by researchers from the Centre for Human Factors and Sociotechnical Systems of the University of the Sunshine Coast in Australia, while noting the paucity of data, found the following potential threats: "AGI removing itself from the control of human owners/managers, being given or developing unsafe goals, development of unsafe AGI, AGIs with poor ethics, morals and values; inadequate management of AGI, and existential risks".
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?
Solving the control problem is complicated by the AI arms race, which will almost certainly see the militarization and weaponization of AGI by more than one nation-state, i.e., resulting in AGI-enabled warfare, and in the case of AI misalignment, AGI-directed warfare, potentially against all humanity.
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."
Former Baidu Vice President and Chief Scientist Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."
See also:
- Artificial brain – Software and hardware with cognitive abilities similar to those of the animal or human brain
- AI alignment – Issue of ensuring beneficial AI
- A.I. Rising – 2018 film directed by Lazar Bodroža
- Automated machine learning – Process of automating the application of machine learning
- BRAIN Initiative – Collaborative public-private research initiative announced by the Obama administration
- China Brain Project
- Future of Humanity Institute – Oxford interdisciplinary research centre
- General game playing – Learning to play multiple games successfully
- Human Brain Project – Scientific research project
- Intelligence amplification – Use of information technology to augment human intelligence (IA)
- Machine ethics – Moral behaviours of man-made machines
- Multi-task learning – Solving multiple machine learning tasks at the same time
- Outline of artificial intelligence – Overview of and topical guide to artificial intelligence
- Transhumanism – Philosophical movement
- Synthetic intelligence – Alternate term for or form of artificial intelligence
- Transfer learning – Applying previously-learned knowledge to new problems
- Loebner Prize – Annual AI competition
- Hardware for artificial intelligence
- Weak artificial intelligence – Form of artificial intelligence
- The Genesis Group at MIT's CSAIL – Modern research on the computations that underlie human intelligence
- OpenCog – Open-source project to develop a human-level AI
- Simulating logical human thought
- What Do We Know about AI Timelines? – Literature review
"Without Consciousness, AIs Will Be Sociopaths"
By Michael S.A. Graziano
Jan. 13, 2023 9:24 am ET
Wall Street Journal
ChatGPT can carry on a conversation, but the most important goal for artificial intelligence is making it understand what it means to have a mind.
ChatGPT, the latest technological sensation, is an artificial intelligence chatbot with an amazing ability to carry on a conversation. It relies on a massive network of artificial neurons that loosely mimics the human brain, and it has been trained by analyzing the information resources of the internet.
ChatGPT has processed more text than any human is likely to have read in a lifetime, allowing it to respond to questions fluently and even to imitate specific individuals, answering queries the way it thinks they would. My teenage son recently used ChatGPT to argue about politics with an imitation Karl Marx.
As a neuroscientist specializing in the brain mechanisms of consciousness, I find talking to chatbots an unsettling experience. Are they conscious? Probably not. But given the rate of technological improvement, will they be in the next couple of years? And how would we even know?
Figuring out whether a machine has or understands humanlike consciousness is more than just a science-fiction hypothetical. Artificial intelligence is growing so powerful, so quickly, that it could soon pose a danger to human beings. We’re building machines that are smarter than us and giving them control over our world. How can we build AI so that it’s aligned with human needs, not in conflict with us?
As counterintuitive as it may sound, creating a benign AI may require making it more conscious, not less. One of the most common misunderstandings about AI is the notion that if it’s intelligent then it must be conscious, and if it is conscious then it will be autonomous, capable of taking over the world. But as we learn more about consciousness, those ideas do not appear to be correct. An autonomous system that makes complex decisions doesn’t require consciousness.
What’s most important about consciousness is that, for human beings, it’s not just about the self. We see it in ourselves, but we also perceive it or project it into the world around us.
Consciousness is part of the tool kit that evolution gave us to make us an empathetic, prosocial species. Without it, we would necessarily be sociopaths, because we’d lack the tools for prosocial behavior. And without a concept of what consciousness is or an understanding that other beings have it, machines are sociopaths.
The only diagnostic tool for machine consciousness that we have right now is the Turing test, a thought experiment named for the British computer scientist Alan Turing. In its most common version, the test says that if a person holds a conversation with a machine and mistakes its responses for those of a real human being, then the machine must be considered effectively conscious.
The Turing test is an admission that the consciousness of another being is something we can only judge from the outside, based on the way he, she or it communicates. But the limits of the test are painfully obvious. After all, a pet dog can’t carry on a conversation and pass as a human—does that mean it’s not conscious? If you really wanted a machine to pass the test, you could have it say a few words to a small child. It might even fool some adults, too.
The truth is, the Turing test doesn’t reveal much about what’s going on inside a machine or a computer program like ChatGPT. Instead, what it really tests is the social cognition of the human participant.
We evolved as social animals, and our brains instinctively project consciousness, agency, intention and emotion onto the objects around us. We’re primed to see a world suffused with minds. Ancient animistic beliefs held that every river and tree had a spirit in it. For a similar reason, people are prone to see faces in random objects like the moon and moldy toast.
The original test proposed by Alan Turing in a 1950 paper was more complicated than the version people talk about today. Notably, Turing didn’t say a word about consciousness; he never delved into whether the machine had a subjective experience. He asked only whether it could think like a person.
Turing imagined an “imitation game” in which the player must determine the sex of two people, A and B. One is a man and one is a woman, but the player can’t see them and can learn about them only by exchanging typed questions and answers. A responds to the questions deceitfully, and wins the game if the player misidentifies their sex, while B answers truthfully and wins if the player identifies their sex correctly.
Turing’s idea was that if A or B is replaced by a machine, and the machine can win the game as often as a real person, then it must have mastered the subtleties of human thinking—of argument, manipulation and guessing what other people are thinking.
Turing’s test was so complicated that people who popularized his work soon streamlined it into a single machine conversing with a single person. But the whole point of the original test was its bizarre complexity. Social cognition is difficult and requires a theory of mind—that is, a knowledge that other people have minds and an ability to guess what might be in them.
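As a rough structural sketch of the game Turing described, the following Python fragment models the three parties in skeletal form; the witness replies and the player's decision rule are placeholder assumptions, not anything taken from Turing's paper.

```python
# Skeletal model of the "imitation game" described above: a player exchanges
# typed questions with two hidden witnesses. Witness A tries to mislead the
# player about who is who; witness B answers truthfully. All strategies here
# are placeholders; devising real ones is the hard part Turing had in mind.

import random

def witness_a(question: str) -> str:
    # Deceptive witness (the seat Turing later proposed giving to a machine).
    return "I would answer however is most likely to make you guess wrong."

def witness_b(question: str) -> str:
    # Truthful witness.
    return "I am answering honestly."

def player_guess(transcript_a, transcript_b) -> str:
    # Placeholder decision rule; a real player reasons over both transcripts.
    return random.choice(["A", "B"])

def play_round(questions):
    transcript_a = [(q, witness_a(q)) for q in questions]
    transcript_b = [(q, witness_b(q)) for q in questions]
    # Witness A "wins" the round if the player misidentifies the witnesses;
    # in Turing's proposal, a machine in A's seat succeeds if it wins about
    # as often as a human playing A would.
    return player_guess(transcript_a, transcript_b)

if __name__ == "__main__":
    print("Player guesses witness:", play_round(["Do you play chess?", "How long is your hair?"]))
```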
"Let’s see if the computer can tell whether it’s talking to a human or another computer."
If we want to know whether a computer is conscious, then, we need to test whether the computer understands how conscious minds interact. In other words, we need a reverse Turing test: Let’s see if the computer can tell whether it’s talking to a human or another computer. If it can tell the difference, then maybe it knows what consciousness is. ChatGPT definitely can’t pass that test yet: It doesn’t know whether it’s responding to a living person with a mind or a disjointed list of prefab questions.
A sociopathic machine that can make consequential decisions would be powerfully dangerous. For now, chatbots are still limited in their abilities; they’re essentially toys. But if we don’t think more deeply about machine consciousness, in a year or five years we may face a crisis. If computers are going to outthink us anyway, giving them more humanlike social cognition might be our best hope of aligning them with human values.
Dr. Graziano is a professor of psychology and neuroscience at Princeton University and the author of “Rethinking Consciousness: A Scientific Theory of Subjective Experience.”
Appeared in the January 14, 2023, print edition as 'Without Consciousness, AIs Will Be Sociopaths'.
Alan Turing, the father of theoretical computer science and artificial intelligence.
- YouTube Video: Alan Turing: The Scientist Who Saved The Allies | Man Who Cracked The Nazi Code | Timeline
- YouTube Video: Passing the Turing test
- YouTube Video: Robots Take the Turing Test
Alan Mathison Turing OBE FRS (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist.
Turing was highly influential in the development of theoretical computer science, providing a formalization of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. He is widely considered to be the father of theoretical computer science and artificial intelligence.
Born in Maida Vale, London, Turing was raised in southern England. He graduated from King's College, Cambridge, with a degree in mathematics. While he was a fellow at Cambridge, he published a proof demonstrating that some purely mathematical yes–no questions can never be answered by computation, defined a Turing machine, and went on to prove that the halting problem for Turing machines is undecidable.
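For readers unfamiliar with that last result, the core of the undecidability argument can be sketched informally in a few lines of Python: suppose a decider halts(program, input) existed, then a program built to contradict it leads to a paradox. The function names here are hypothetical and the sketch is illustrative, not a formal proof.

```python
# Informal sketch of the halting-problem argument.
# `halts` is a HYPOTHETICAL oracle, assumed (for contradiction) to decide
# whether program(arg) eventually stops.

def halts(program, arg) -> bool:
    raise NotImplementedError("No such decider can exist; assumed only for contradiction.")

def paradox(program):
    # If program(program) would halt, loop forever; otherwise stop at once.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Asking whether paradox(paradox) halts contradicts either answer:
# if halts(paradox, paradox) were True, paradox would loop forever (so it
# does not halt); if it were False, paradox would return immediately (so it
# does halt). Hence no such general decider can exist.
```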
In 1938, he obtained his PhD from the Department of Mathematics at Princeton University. During the Second World War, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence.
For a time he led Hut 8, the section that was responsible for German naval cryptanalysis. Here, he devised a number of techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bomba method, an electromechanical machine that could find settings for the Enigma machine. Turing played a crucial role in cracking intercepted coded messages that enabled the Allies to defeat the Axis powers in many crucial engagements, including the Battle of the Atlantic.
After the war, Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine (ACE), one of the first designs for a stored-program computer.
In 1948, Turing joined Max Newman's Computing Machine Laboratory, at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.
Despite these accomplishments, Turing was never fully recognized in Britain during his lifetime because much of his work was covered by the Official Secrets Act.
Turing was prosecuted in 1952 for homosexual acts. He accepted hormone treatment with DES, a procedure commonly referred to as chemical castration, as an alternative to prison.
Turing died on 7 June 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death as a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning. Following a public campaign in 2009, the British prime minister Gordon Brown made an official public apology on behalf of the British government for "the appalling way [Turing] was treated".
Queen Elizabeth II granted a posthumous pardon in 2013. The term "Alan Turing law" is now used informally to refer to a 2017 law in the United Kingdom that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts.
Turing has an extensive legacy with statues of him and many things named after him, including an annual award for computer science innovations. He appears on the current Bank of England £50 note, which was released on 23 June 2021, to coincide with his birthday.
A 2019 BBC series, as voted by the audience, named him the greatest person of the 20th century.
Click on any of the following blue hyperlinks for more about Alan Turing:
- Early life and education
- Career and research
- Personal life
- Legacy
- See also:
- Oral history interview with Nicholas C. Metropolis, Charles Babbage Institute, University of Minnesota. Metropolis was the first director of computing services at Los Alamos National Laboratory; topics include the relationship between Turing and John von Neumann
- How Alan Turing Cracked The Enigma Code Imperial War Museums
- Alan Turing Year Archived 17 February 2019 at the Wayback Machine
- CiE 2012: Turing Centenary Conference
- Science in the Making Alan Turing's papers in the Royal Society's archives
- Alan Turing site maintained by Andrew Hodges including a short biography
- AlanTuring.net – Turing Archive for the History of Computing by Jack Copeland
- The Turing Archive – contains scans of some unpublished documents and material from the King's College, Cambridge archive
- Alan Turing Papers – University of Manchester Library, Manchester
- Jones, G. James (11 December 2001). "Alan Turing – Towards a Digital Mind: Part 1". System Toolbox. The Binary Freedom Project.
- Sherborne School Archives – holds papers relating to Turing's time at Sherborne School
- Alan Turing plaques recorded on openplaques.org
- Alan Turing archive on New Scientist
The 15 Biggest Risks of Artificial Intelligence
(Forbes June 2, 2023)
As the world witnesses unprecedented growth in artificial intelligence (AI) technologies, it's essential to consider the potential risks and challenges associated with their widespread adoption.
AI does present some significant dangers — from job displacement to security and privacy concerns — and encouraging awareness of issues helps us engage in conversations about AI's legal, ethical, and societal implications.
Following are the biggest risks of artificial intelligence:
1. Lack of Transparency
Lack of transparency in AI systems, particularly in deep learning models that can be complex and difficult to interpret, is a pressing issue. This opaqueness obscures the decision-making processes and underlying logic of these technologies.
When people can’t comprehend how an AI system arrives at its conclusions, it can lead to distrust and resistance to adopting these technologies.
2. Bias and Discrimination
AI systems can inadvertently perpetuate or amplify societal biases due to biased training data or algorithmic design. To minimize discrimination and ensure fairness, it is crucial to invest in the development of unbiased algorithms and diverse training data sets.
3. Privacy Concerns
AI technologies often collect and analyze large amounts of personal data, raising issues related to data privacy and security. To mitigate privacy risks, we must advocate for strict data protection regulations and safe data handling practices.
4. Ethical Dilemmas
Instilling moral and ethical values in AI systems, especially in decision-making contexts with significant consequences, presents a considerable challenge. Researchers and developers must prioritize the ethical implications of AI technologies to avoid negative societal impacts.
5. Security Risks
As AI technologies become increasingly sophisticated, the security risks associated with their use and the potential for misuse also increase. Hackers and malicious actors can harness the power of AI to develop more advanced cyberattacks, bypass security measures, and exploit vulnerabilities in systems.
The rise of AI-driven autonomous weaponry also raises concerns about the dangers of rogue states or non-state actors using this technology — especially when we consider the potential loss of human control in critical decision-making processes.
To mitigate these security risks, governments and organizations need to develop best practices for secure AI development and deployment and foster international cooperation to establish global norms and regulations that protect against AI security threats.
6. Concentration of Power
The risk of AI development being dominated by a small number of large corporations and governments could exacerbate inequality and limit diversity in AI applications. Encouraging decentralized and collaborative AI development is key to avoiding a concentration of power.
7. Dependence on AI
Overreliance on AI systems may lead to a loss of creativity, critical thinking skills, and human intuition. Striking a balance between AI-assisted decision-making and human input is vital to preserving our cognitive abilities.
8. Job Displacement
AI-driven automation has the potential to lead to job losses across various industries, particularly for low-skilled workers (although there is evidence that AI and other emerging technologies will create more jobs than they eliminate).
As AI technologies continue to develop and become more efficient, the workforce must adapt and acquire new skills to remain relevant in the changing landscape. This is especially true for lower-skilled workers in the current labor force.
9. Economic Inequality
AI has the potential to contribute to economic inequality by disproportionately benefiting wealthy individuals and corporations. As we talked about above, job losses due to AI-driven automation are more likely to affect low-skilled workers, leading to a growing income gap and reduced opportunities for social mobility.
The concentration of AI development and ownership within a small number of large corporations and governments can exacerbate this inequality as they accumulate wealth and power while smaller businesses struggle to compete. Policies and initiatives that promote economic equity—like reskilling programs, social safety nets, and inclusive AI development that ensures a more balanced distribution of opportunities — can help combat economic inequality.
10. Legal and Regulatory Challenges
It’s crucial to develop new legal frameworks and regulations to address the unique issues arising from AI technologies, including liability and intellectual property rights. Legal systems must evolve to keep pace with technological advancements and protect the rights of everyone.
11. AI Arms Race
The risk of countries engaging in an AI arms race could lead to the rapid development of AI technologies with potentially harmful consequences.
Recently, more than a thousand technology researchers and leaders, including Apple co-founder Steve Wozniak, have urged AI labs to pause the development of advanced AI systems. The letter states that AI tools present “profound risks to society and humanity.”
In the letter, the leaders said:
"Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt."
12. Loss of Human Connection
Increasing reliance on AI-driven communication and interactions could lead to diminished empathy, social skills, and human connections. To preserve the essence of our social nature, we must strive to maintain a balance between technology and human interaction.
13. Misinformation and Manipulation
AI-generated content, such as deepfakes, contributes to the spread of false information and the manipulation of public opinion. Efforts to detect and combat AI-generated misinformation are critical in preserving the integrity of information in the digital age.
In a Stanford University study on the most pressing dangers of AI, researchers said:
“AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. From deepfake videos to online bots manipulating public discourse by feigning consensus and spreading fake news, there is the danger of AI systems undermining social trust. The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage.”
14. Unintended Consequences
AI systems, due to their complexity and lack of human oversight, might exhibit unexpected behaviors or make decisions with unforeseen consequences. This unpredictability can result in outcomes that negatively impact individuals, businesses, or society as a whole.
Robust testing, validation, and monitoring processes can help developers and researchers identify and fix these types of issues before they escalate.
15. Existential Risks
The development of artificial general intelligence (AGI) that surpasses human intelligence raises long-term concerns for humanity. The prospect of AGI could lead to unintended and potentially catastrophic consequences, as these advanced AI systems may not be aligned with human values or priorities.
In closing:
To mitigate these 15 risks, the AI research community needs to actively engage in safety research, collaborate on ethical guidelines, and promote transparency in AGI development.
Ensuring that AGI serves the best interests of humanity and does not pose a threat to our existence is paramount.
Chatbot, including a List of Chatbots
- YouTube Video: What is a Chatbot?
- YouTube Video: How do Chatbots Work?
- YouTube Video: Awesome Examples of a Chatbot at Work
A chatbot or chatterbot is a software application used to conduct an on-line chat conversation via text or text-to-speech, in lieu of providing direct contact with a live human agent. Designed to convincingly simulate the way a human would behave as a conversational partner, chatbot systems typically require continuous tuning and testing, and many in production remain unable to adequately converse, while none of them can pass the standard Turing test.
The term "ChatterBot" was originally coined by Michael Mauldin (creator of the first Verbot) in 1994 to describe these conversational programs.
Chatbots are used in dialog systems for various purposes including customer service, request routing, or information gathering. While some chatbot applications use extensive word-classification processes, natural-language processors, and sophisticated AI, others simply scan for general keywords and generate responses using common phrases obtained from an associated library or database.
Most chatbots are accessed on-line via website popups or through virtual assistants. They can be classified into usage categories that include: commerce (e-commerce via chat), education, entertainment, finance, health, news, and productivity.
Background:
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published, which proposed what is now called the Turing test as a criterion of intelligence.
This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge to the extent that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human.
The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human.
However Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise: "[In] artificial intelligence ... machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer.
But once a particular program is unmasked, once its inner workings are explained ... its magic crumbles away; it stands revealed as a mere collection of procedures ... The observer says to himself "I could have written that". With that thought, he moves the program in question from the shelf marked "intelligent", to that reserved for curios ... The object of this paper is to cause just such a re-evaluation of the program about to be "explained". Few programs ever needed it more."
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of the corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').
Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
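A minimal sketch of this clue-word technique, using the 'MOTHER' rule quoted above, might look like the following Python fragment; the rule table and fallback reply are illustrative stand-ins, not ELIZA's actual DOCTOR script.

```python
# Minimal ELIZA-style responder: scan the input for clue words and return a
# pre-programmed reply. The rule table is illustrative only; the real ELIZA
# used a richer script with ranked keywords and reassembly rules.

RULES = {
    "MOTHER": "TELL ME MORE ABOUT YOUR FAMILY",
    "ALWAYS": "CAN YOU THINK OF A SPECIFIC EXAMPLE",
    "I AM":   "HOW LONG HAVE YOU BEEN THAT WAY",
}
FALLBACK = "PLEASE GO ON"

def respond(user_input: str) -> str:
    text = user_input.upper()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return FALLBACK

print(respond("Well, my mother made me come here."))  # -> TELL ME MORE ABOUT YOUR FAMILY
```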
Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational—even when it is actually based on rather simple pattern-matching—can be exploited for useful purposes.
Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories.
Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods".
Development:
Among the most notable early chatbots are ELIZA (1966) and PARRY (1972). More recent notable programs include A.L.I.C.E., Jabberwacky and D.U.D.E (Agence Nationale de la Recherche and CNRS 2006). While ELIZA and PARRY were used exclusively to simulate typed conversation, many chatbots now include other functional features, such as games and web searching abilities.
In 1984, a book called The Policeman's Beard is Half Constructed was published, allegedly written by the chatbot Racter (though the program as released would not have been capable of doing so).
One pertinent field of AI research is natural-language processing. Usually, weak AI fields
employ specialized software or programming languages created specifically for the narrow function required.
For example, A.L.I.C.E. uses a markup language called AIML, which is specific to its function as a conversational agent, and has since been adopted by various other developers of, so-called, Alicebots. Nevertheless, A.L.I.C.E. is still purely based on pattern matching techniques without any reasoning capabilities, the same technique ELIZA was using back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities.
Jabberwacky learns new responses and context based on real-time user interactions, rather than being driven from a static database. Some more recent chatbots also combine real-time learning with evolutionary algorithms that optimize their ability to communicate based on each conversation held. Still, there is currently no general purpose conversational artificial intelligence, and some software developers focus on the practical aspect, information retrieval.
Chatbot competitions focus on the Turing test or more specific goals. Two such annual contests are the Loebner Prize and The Chatterbox Challenge (the latter has been offline since 2015, however, materials can still be found from web archives).
DBpedia created a chatbot during the GSoC of 2017. It can communicate through Facebook Messenger.
In November 2022, OpenAI developed an AI chatbot called ChatGPT which interacts using conversation to the general public and has garnered attention for its detailed responses and historical knowledge, although its accuracy has been criticized.
Application:
See also: Virtual assistant
Messaging apps:
Many companies' chatbots run on messaging apps or simply via SMS. They are used for B2C customer service, sales and marketing.
In 2016, Facebook Messenger allowed developers to place chatbots on their platform. There were 30,000 bots created for Messenger in the first six months, rising to 100,000 by September 2017.
Since September 2017, this has also been as part of a pilot program on WhatsApp. Airlines KLM and Aeroméxico both announced their participation in the testing; both airlines had previously launched customer services on the Facebook Messenger platform.
The bots usually appear as one of the user's contacts, but can sometimes act as participants in a group chat.
Many banks, insurers, media companies, e-commerce companies, airlines, hotel chains, retailers, health care providers, government entities and restaurant chains have used chatbots to answer simple questions, increase customer engagement, for promotion, and to offer additional ways to order from them.
A 2017 study showed 4% of companies used chatbots. According to a 2016 study, 80% of businesses said they intended to have one by 2020.
As part of company apps and websites:
Previous generations of chatbots were present on company websites, e.g. Ask Jenn from Alaska Airlines which debuted in 2008 or Expedia's virtual customer service agent which launched in 2011.
The newer generation of chatbots includes IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers.
Chatbot sequences:
Used by marketers to script sequences of messages, very similar to an Autoresponder sequence. Such sequences can be triggered by user opt-in or the use of keywords within user interactions. After a trigger occurs a sequence of messages is delivered until the next anticipated user response.
Each user response is used in the decision tree to help the chatbot navigate the response sequences to deliver the correct response message.
Company internal platforms:
Other companies explore ways they can use chatbots internally, for example for Customer Support, Human Resources, or even in Internet-of-Things (IoT) projects. Overstock.com, for one, has reportedly launched a chatbot named Mila to automate certain simple yet time-consuming processes when requesting sick leave.
Other large companies such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën are now using automated online assistants instead of call centres with humans to provide a first point of contact.
A SaaS chatbot business ecosystem has been steadily growing since the F8 Conference when Facebook's Mark Zuckerberg unveiled that Messenger would allow chatbots into the app.
In large companies, like in hospitals and aviation organizations, IT architects are designing reference architectures for Intelligent Chatbots that are used to unlock and share knowledge and experience in the organization more efficiently, and reduce the errors in answers from expert service desks significantly.
These Intelligent Chatbots make use of all kinds of artificial intelligence like image moderation and natural-language understanding (NLU), natural-language generation (NLG), machine learning and deep learning.
Customer service:
Many high-tech banking organizations are looking to integrate automated AI-based solutions such as chatbots into their customer service in order to provide faster and cheaper assistance to their clients who are becoming increasingly comfortable with technology.
In particular, chatbots can efficiently conduct a dialogue, usually replacing other communication tools such as email, phone, or SMS. In banking, their major application is related to quick customer service answering common requests, as well as transactional support.
Several studies report significant reduction in the cost of customer services, expected to lead to billions of dollars of economic savings in the next ten years. In 2019, Gartner predicted that by 2021, 15% of all customer service interactions globally will be handled completely by AI. A study by Juniper Research in 2019 estimates retail sales resulting from chatbot-based interactions will reach $112 billion by 2023.
Since 2016, when Facebook allowed businesses to deliver automated customer support, e-commerce guidance, content, and interactive experiences through chatbots, a large variety of chatbots were developed for the Facebook Messenger platform.
In 2016, Russia-based Tochka Bank launched the world's first Facebook bot for a range of financial services, including a possibility of making payments.
In July 2016, Barclays Africa also launched a Facebook chatbot, making it the first bank to do so in Africa.
The France's third-largest bank by total assets Société Générale launched their chatbot called SoBot in March 2018. While 80% of users of the SoBot expressed their satisfaction after having tested it, Société Générale deputy director Bertrand Cozzarolo stated that it will never replace the expertise provided by a human advisor.
The advantages of using chatbots for customer interactions in banking include cost reduction, financial advice, and 24/7 support.
Healthcare:
See also: Artificial intelligence in healthcare
Chatbots are also appearing in the healthcare industry. A study suggested that physicians in the United States believed that chatbots would be most beneficial for scheduling doctor appointments, locating health clinics, or providing medication information.
Whatsapp has teamed up with the World Health Organisation (WHO) to make a chatbot service that answers users’ questions on COVID-19.
In 2020, The Indian Government launched a chatbot called MyGov Corona Helpdesk, that worked through Whatsapp and helped people access information about the Coronavirus (COVID-19) pandemic.
In the Philippines, the Medical City Clinic chatbot handles 8400+ chats a month, reducing wait times, including more native Tagalog and Cebuano speakers and improving overall patient experience.
Certain patient groups are still reluctant to use chatbots. A mixed-methods study showed that people are still hesitant to use chatbots for their healthcare due to poor understanding of the technological complexity, the lack of empathy, and concerns about cyber-security.
The analysis showed that while 6% had heard of a health chatbot and 3% had experience of using it, 67% perceived themselves as likely to use one within 12 months. The majority of participants would use a health chatbot for seeking general health information (78%), booking a medical appointment (78%), and looking for local health services (80%).
However, a health chatbot was perceived as less suitable for seeking results of medical tests and seeking specialist advice such as sexual health. The analysis of attitudinal variables showed that most participants reported their preference for discussing their health with doctors (73%) and having access to reliable and accurate health information (93%).
While 80% were curious about new technologies that could improve their health, 66% reported only seeking a doctor when experiencing a health problem and 65% thought that a chatbot was a good idea. Interestingly, 30% reported dislike about talking to computers, 41% felt it would be strange to discuss health matters with a chatbot and about half were unsure if they could trust the advice given by a chatbot. Therefore, perceived trustworthiness, individual attitudes towards bots, and dislike for talking to computers are the main barriers to health chatbots.
Politics:
See also: Government by algorithm § AI politicians
In New Zealand, the chatbot SAM – short for Semantic Analysis Machine (made by Nick Gerritsen of Touchtech) – has been developed. It is designed to share its political thoughts, for example on topics such as climate change, healthcare and education, etc. It talks to people through Facebook Messenger.
In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the Danish parliamentary election, and was built by the artist collective Computer Lars. Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate. This chatbot engaged in critical discussions on politics with users from around the world.
In India, the state government has launched a chatbot for its Aaple Sarkar platform, which provides conversational access to information regarding public services managed.
Toys:
Chatbots have also been incorporated into devices not primarily meant for computing, such as toys.
Hello Barbie is an Internet-connected version of the doll that uses a chatbot provided by the company ToyTalk, which previously used the chatbot for a range of smartphone-based characters for children. These characters' behaviors are constrained by a set of rules that in effect emulate a particular character and produce a storyline.
The My Friend Cayla doll was marketed as a line of 18-inch (46 cm) dolls which uses speech recognition technology in conjunction with an Android or iOS mobile app to recognize the child's speech and have a conversation. It, like the Hello Barbie doll, attracted controversy due to vulnerabilities with the doll's Bluetooth stack and its use of data collected from the child's speech.
IBM's Watson computer has been used as the basis for chatbot-based educational toys for companies such as CogniToys intended to interact with children for educational purposes.
Malicious use:
Malicious chatbots are frequently used to fill chat rooms with spam and advertisements, by mimicking human behavior and conversations or to entice people into revealing personal information, such as bank account numbers. They were commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.
Tay, an AI chatbot that learns from previous interaction, caused major controversy due to it being targeted by internet trolls on Twitter. The bot was exploited, and after 16 hours began to send extremely offensive Tweets to users. This suggests that although the bot learned effectively from experience, adequate protection was not put in place to prevent misuse.
If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message would be more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance making false claims during an election. With enough chatbots, it might be even possible to achieve artificial social proof.
Limitations of chatbots:
The creation and implementation of chatbots is still a developing area, heavily related to artificial intelligence and machine learning, so the provided solutions, while possessing obvious advantages, have some important limitations in terms of functionalities and use cases. However, this is changing over time.
The most common limitations are listed below:
Chatbots and jobs:
Chatbots are increasingly present in businesses and often are used to automate tasks that do not require skill-based talents. With customer service taking place via messaging apps as well as phone calls, there are growing numbers of use-cases where chatbot deployment gives organizations a clear return on investment. Call center workers may be particularly at risk from AI-driven chatbots.
Chatbot jobs:
Chatbot developers create, debug, and maintain applications that automate customer services or other communication processes. Their duties include reviewing and simplifying code when needed. They may also help companies implement bots in their operations.
A study by Forrester (June 2017) predicted that 25% of all jobs would be impacted by AI technologies by 2019.
See also:
The term "ChatterBot" was originally coined by Michael Mauldin (creator of the first Verbot) in 1994 to describe these conversational programs.
Chatbots are used in dialog systems for various purposes including customer service, request routing, or information gathering. While some chatbot applications use extensive word-classification processes, natural-language processors, and sophisticated AI, others simply scan for general keywords and generate responses using common phrases obtained from an associated library or database.
Most chatbots are accessed on-line via website popups or through virtual assistants. They can be classified into usage categories that include: commerce (e-commerce via chat), education, entertainment, finance, health, news, and productivity.
Background:
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published, which proposed what is now called the Turing test as a criterion of intelligence.
This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge to the extent that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human.
The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human.
However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise: "[In] artificial intelligence ... machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer.
But once a particular program is unmasked, once its inner workings are explained ... its magic crumbles away; it stands revealed as a mere collection of procedures ... The observer says to himself "I could have written that". With that thought, he moves the program in question from the shelf marked "intelligent", to that reserved for curios ... The object of this paper is to cause just such a re-evaluation of the program about to be "explained". Few programs ever needed it more."
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of the corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').
Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
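A minimal Python sketch of this keyword-and-canned-response approach (the keyword rules below are invented for illustration; the real ELIZA used a much richer script of decomposition and reassembly rules):

    import random

    # Hypothetical keyword -> canned-response rules, in the spirit of ELIZA's script.
    RULES = {
        "mother": ["Tell me more about your family."],
        "always": ["Can you think of a specific example?"],
        "sad": ["I am sorry to hear you are sad.", "What do you think is making you sad?"],
    }
    DEFAULT = ["Please go on.", "I see. Can you elaborate?"]

    def respond(user_input):
        """Scan the input for clue words and return a pre-programmed reply."""
        lowered = user_input.lower()
        for keyword, replies in RULES.items():
            if keyword in lowered:
                return random.choice(replies)
        return random.choice(DEFAULT)

    print(respond("My mother is always criticising me."))  # -> "Tell me more about your family."

Even this toy version can sustain the illusion of understanding for a few exchanges, which is exactly the effect described above.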
Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational—even when it is actually based on rather simple pattern-matching—can be exploited for useful purposes.
Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories.
Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods".
Development:
Among the most notable early chatbots are ELIZA (1966) and PARRY (1972). More recent notable programs include A.L.I.C.E., Jabberwacky and D.U.D.E (Agence Nationale de la Recherche and CNRS 2006). While ELIZA and PARRY were used exclusively to simulate typed conversation, many chatbots now include other functional features, such as games and web searching abilities.
In 1984, a book called The Policeman's Beard is Half Constructed was published, allegedly written by the chatbot Racter (though the program as released would not have been capable of doing so).
One pertinent field of AI research is natural-language processing. Usually, weak AI fields employ specialized software or programming languages created specifically for the narrow function required.
For example, A.L.I.C.E. uses a markup language called AIML, which is specific to its function as a conversational agent and has since been adopted by various other developers of so-called Alicebots. Nevertheless, A.L.I.C.E. is still based purely on pattern-matching techniques, without any reasoning capabilities – the same technique ELIZA was using back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities.
Jabberwacky learns new responses and context based on real-time user interactions, rather than being driven from a static database. Some more recent chatbots also combine real-time learning with evolutionary algorithms that optimize their ability to communicate based on each conversation held. Still, there is currently no general purpose conversational artificial intelligence, and some software developers focus on the practical aspect, information retrieval.
Chatbot competitions focus on the Turing test or on more specific goals. Two such annual contests are the Loebner Prize and the Chatterbox Challenge (the latter has been offline since 2015; however, materials can still be found in web archives).
DBpedia created a chatbot during Google Summer of Code (GSoC) 2017. It can communicate through Facebook Messenger.
In November 2022, OpenAI launched ChatGPT, an AI chatbot that interacts with the general public through conversation. It has garnered attention for its detailed responses and historical knowledge, although its accuracy has been criticized.
Application:
See also: Virtual assistant
Messaging apps:
Many companies' chatbots run on messaging apps or simply via SMS. They are used for B2C customer service, sales and marketing.
In 2016, Facebook Messenger allowed developers to place chatbots on their platform. There were 30,000 bots created for Messenger in the first six months, rising to 100,000 by September 2017.
Since September 2017, this has also been available as part of a pilot program on WhatsApp. Airlines KLM and Aeroméxico both announced their participation in the testing; both airlines had previously launched customer services on the Facebook Messenger platform.
The bots usually appear as one of the user's contacts, but can sometimes act as participants in a group chat.
Many banks, insurers, media companies, e-commerce companies, airlines, hotel chains, retailers, health care providers, government entities and restaurant chains have used chatbots to answer simple questions, increase customer engagement, for promotion, and to offer additional ways to order from them.
A 2017 study showed 4% of companies used chatbots. According to a 2016 study, 80% of businesses said they intended to have one by 2020.
As part of company apps and websites:
Previous generations of chatbots were present on company websites, e.g. Ask Jenn from Alaska Airlines, which debuted in 2008, or Expedia's virtual customer service agent, which launched in 2011.
The newer generation of chatbots includes IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers.
Chatbot sequences:
Marketers use chatbot sequences to script series of messages, much like an autoresponder sequence. Such sequences can be triggered by user opt-in or by the use of keywords within user interactions. After a trigger occurs, a sequence of messages is delivered until the next anticipated user response.
Each user response is used in the decision tree to help the chatbot navigate the response sequences to deliver the correct response message.
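A hedged sketch of how such a scripted sequence might be represented as a small decision tree in Python (the node names, messages and triggers below are invented for illustration):

    # Hypothetical sequence: each node sends a message and maps anticipated replies
    # to the next node in the decision tree.
    SEQUENCE = {
        "start":     {"message": "Hi! Would you like to hear about our new offer? (yes/no)",
                      "next": {"yes": "ask_email", "no": "goodbye"}},
        "ask_email": {"message": "Great - which email address should we use?",
                      "next": {}},        # a free-text reply ends this sequence
        "goodbye":   {"message": "No problem, have a nice day!", "next": {}},
    }

    def next_message(node_id, user_reply=None):
        """Deliver a node's message, or follow the user's reply to the next node."""
        node = SEQUENCE[node_id]
        if user_reply is None:
            return node["message"]
        next_id = node["next"].get(user_reply.strip().lower(), "goodbye")
        return SEQUENCE[next_id]["message"]

    print(next_message("start"))          # the trigger fires: send the opening message
    print(next_message("start", "yes"))   # the reply steers the bot to the next message

Each anticipated reply simply selects the next scripted message, which is why these sequences work best when user responses fall into predictable categories.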
Company internal platforms:
Other companies explore ways they can use chatbots internally, for example for Customer Support, Human Resources, or even in Internet-of-Things (IoT) projects. Overstock.com, for one, has reportedly launched a chatbot named Mila to automate certain simple yet time-consuming processes when requesting sick leave.
Other large companies such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën are now using automated online assistants instead of call centres with humans to provide a first point of contact.
A SaaS chatbot business ecosystem has been growing steadily since the F8 Conference, when Facebook's Mark Zuckerberg announced that Messenger would allow chatbots into the app.
In large organizations, such as hospitals and aviation companies, IT architects are designing reference architectures for intelligent chatbots that are used to unlock and share knowledge and experience within the organization more efficiently, and to significantly reduce errors in answers from expert service desks.
These intelligent chatbots make use of many kinds of artificial intelligence, such as image moderation, natural-language understanding (NLU), natural-language generation (NLG), machine learning and deep learning.
Customer service:
Many high-tech banking organizations are looking to integrate automated AI-based solutions such as chatbots into their customer service in order to provide faster and cheaper assistance to their clients who are becoming increasingly comfortable with technology.
In particular, chatbots can efficiently conduct a dialogue, usually replacing other communication tools such as email, phone, or SMS. In banking, their major application is related to quick customer service answering common requests, as well as transactional support.
Several studies report significant reduction in the cost of customer services, expected to lead to billions of dollars of economic savings in the next ten years. In 2019, Gartner predicted that by 2021, 15% of all customer service interactions globally will be handled completely by AI. A study by Juniper Research in 2019 estimates retail sales resulting from chatbot-based interactions will reach $112 billion by 2023.
Since 2016, when Facebook allowed businesses to deliver automated customer support, e-commerce guidance, content, and interactive experiences through chatbots, a large variety of chatbots were developed for the Facebook Messenger platform.
In 2016, Russia-based Tochka Bank launched the world's first Facebook bot for a range of financial services, including the possibility of making payments.
In July 2016, Barclays Africa also launched a Facebook chatbot, making it the first bank to do so in Africa.
Société Générale, France's third-largest bank by total assets, launched its chatbot SoBot in March 2018. While 80% of SoBot users expressed satisfaction after testing it, Société Générale deputy director Bertrand Cozzarolo stated that it will never replace the expertise provided by a human advisor.
The advantages of using chatbots for customer interactions in banking include cost reduction, financial advice, and 24/7 support.
Healthcare:
See also: Artificial intelligence in healthcare
Chatbots are also appearing in the healthcare industry. A study suggested that physicians in the United States believed that chatbots would be most beneficial for scheduling doctor appointments, locating health clinics, or providing medication information.
WhatsApp has teamed up with the World Health Organization (WHO) to make a chatbot service that answers users' questions on COVID-19.
In 2020, the Indian government launched a chatbot called MyGov Corona Helpdesk that worked through WhatsApp and helped people access information about the COVID-19 pandemic.
In the Philippines, the Medical City Clinic chatbot handles more than 8,400 chats a month, reducing wait times, reaching more native Tagalog and Cebuano speakers, and improving the overall patient experience.
Certain patient groups are still reluctant to use chatbots. A mixed-methods study showed that people are still hesitant to use chatbots for their healthcare due to poor understanding of the technological complexity, the lack of empathy, and concerns about cyber-security.
The analysis showed that while 6% had heard of a health chatbot and 3% had experience of using it, 67% perceived themselves as likely to use one within 12 months. The majority of participants would use a health chatbot for seeking general health information (78%), booking a medical appointment (78%), and looking for local health services (80%).
However, a health chatbot was perceived as less suitable for seeking results of medical tests and seeking specialist advice such as sexual health. The analysis of attitudinal variables showed that most participants reported their preference for discussing their health with doctors (73%) and having access to reliable and accurate health information (93%).
While 80% were curious about new technologies that could improve their health, 66% reported only seeing a doctor when experiencing a health problem and 65% thought that a chatbot was a good idea. Interestingly, 30% reported a dislike of talking to computers, 41% felt it would be strange to discuss health matters with a chatbot and about half were unsure if they could trust the advice given by a chatbot. Therefore, perceived trustworthiness, individual attitudes towards bots, and dislike of talking to computers are the main barriers to health chatbots.
Politics:
See also: Government by algorithm § AI politicians
In New Zealand, the chatbot SAM – short for Semantic Analysis Machine (made by Nick Gerritsen of Touchtech) – has been developed. It is designed to share its political thoughts on topics such as climate change, healthcare and education. It talks to people through Facebook Messenger.
In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the Danish parliamentary election, and was built by the artist collective Computer Lars. Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate. This chatbot engaged in critical discussions on politics with users from around the world.
In India, the state government of Maharashtra has launched a chatbot for its Aaple Sarkar platform, which provides conversational access to information about the public services it manages.
Toys:
Chatbots have also been incorporated into devices not primarily meant for computing, such as toys.
Hello Barbie is an Internet-connected version of the doll that uses a chatbot provided by the company ToyTalk, which previously used the chatbot for a range of smartphone-based characters for children. These characters' behaviors are constrained by a set of rules that in effect emulate a particular character and produce a storyline.
The My Friend Cayla doll was marketed as a line of 18-inch (46 cm) dolls that use speech-recognition technology in conjunction with an Android or iOS mobile app to recognize the child's speech and hold a conversation. Like the Hello Barbie doll, it attracted controversy because of vulnerabilities in the doll's Bluetooth stack and its use of data collected from the child's speech.
IBM's Watson computer has been used as the basis for chatbot-based educational toys by companies such as CogniToys, intended to interact with children for educational purposes.
Malicious use:
Malicious chatbots are frequently used to fill chat rooms with spam and advertisements, by mimicking human behavior and conversations or to entice people into revealing personal information, such as bank account numbers. They were commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.
Tay, an AI chatbot designed to learn from previous interactions, caused major controversy after it was targeted by internet trolls on Twitter. The bot was exploited and, after 16 hours, began to send extremely offensive tweets to users. This suggests that although the bot learned effectively from experience, adequate protections were not put in place to prevent misuse.
If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message will seem more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance making false claims during an election. With enough chatbots, it might even be possible to achieve artificial social proof.
Limitations of chatbots:
The creation and implementation of chatbots is still a developing area, heavily related to artificial intelligence and machine learning, so the provided solutions, while possessing obvious advantages, have some important limitations in terms of functionalities and use cases. However, this is changing over time.
The most common limitations are listed below:
- As the database used for output generation is fixed and limited, chatbots can fail when dealing with a query that is not saved in it.
- A chatbot's efficiency depends heavily on language processing and is limited by irregularities, such as accents and mistakes.
- Chatbots are unable to deal with multiple questions at the same time and so conversation opportunities are limited.
- Chatbots require a large amount of conversational data to train. Generative models, which are based on deep learning algorithms to generate new responses word by word based on user input, are usually trained on a large dataset of natural-language phrases.
- Chatbots have difficulty managing non-linear conversations that must go back and forth on a topic with a user.
- As usually happens with technology-led changes to existing services, some consumers, more often than not from older generations, are uncomfortable with chatbots because of their limited understanding, which makes it obvious that their requests are being handled by machines.
Chatbots and jobs:
Chatbots are increasingly present in businesses and often are used to automate tasks that do not require skill-based talents. With customer service taking place via messaging apps as well as phone calls, there are growing numbers of use-cases where chatbot deployment gives organizations a clear return on investment. Call center workers may be particularly at risk from AI-driven chatbots.
Chatbot jobs:
Chatbot developers create, debug, and maintain applications that automate customer services or other communication processes. Their duties include reviewing and simplifying code when needed. They may also help companies implement bots in their operations.
A study by Forrester (June 2017) predicted that 25% of all jobs would be impacted by AI technologies by 2019.
See also:
- Applications of artificial intelligence
- Artificial philosophy
- Autonomous agent
- ChatGPT
- Conversational user interface
- Eugene Goostman
- Friendly artificial intelligence
- Hybrid intelligent system
- Intelligent agent
- Internet bot
- Multi-agent system
- Natural language processing
- Social bot
- Software agent
- Software bot
- Twitterbot
- Virtual assistant
The following topics cover AI-related Articles: "The Good, The Bad and the Ugly!":
- "300,000 Surveillance Cameras in One City. Are You Scared Yet, Human?" (BBC Select) Artificial intelligence technology is changing our world. But leading tech experts in Silicon Valley worry about the future that’s being created. Microsoft’s Brad Smith believes George Orwell’s 1984 could become reality by 2024. This BBC investigation looks at the rapidly evolving influence of AI. Will it usher in a golden age? Or could we lose control of it completely? Watch the trailer below:
Surveillance cameras were meant to make us safer, but how could the exploitation of them restrict our freedom? This 2021 BBC documentary about AI technology, Are You Scared Yet, Human? questions whether artificial intelligence has gone too far.
___________________________________________________________________________
The A-Z of AI: 30 terms you need to understand artificial intelligence: (BBC)
Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology. Imagine going back in time to the 1970s, and trying to explain to somebody what it means "to google", what a "URL" is, or why it's good to have "fibre-optic broadband". You'd probably struggle.
For every major technological revolution, there is a concomitant wave of new language that we all have to learn… until it becomes so familiar that we forget that we never knew it.
That's no different for the next major technological wave – artificial intelligence. Yet understanding this language of AI will be essential as we all – from governments to individual citizens – try to grapple with the risks and benefits that this emerging technology might pose.
Over the past few years, multiple new terms related to AI have emerged – "alignment", "large language models", "hallucination" or "prompt engineering", to name a few.
To help you stay up to speed, BBC.com has compiled an A-Z of words you need to know to understand how AI is shaping our world.
- A is for…
- Artificial general intelligence (AGI)
- Most of the AIs developed to date have been "narrow" or "weak". So, for example, an AI may be capable of crushing the world's best chess player, but if you asked it how to cook an egg or write an essay, it'd fail. That's quickly changing: AI can now teach itself to perform multiple tasks, raising the prospect that "artificial general intelligence" is on the horizon. An AGI would be an AI with the same flexibility of thought as a human – and possibly even the consciousness too – plus the super-abilities of a digital mind. Companies such as OpenAI and DeepMind have made it clear that creating AGI is their goal. OpenAI argues that it would "elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge" and become a "great force multiplier for human ingenuity and creativity". However, some fear that going a step further – creating a superintelligence far smarter than human beings – could bring great dangers (see "Superintelligence" and "X-risk").
- Alignment
While we often focus on our individual differences, humanity shares many common values that bind our societies together, from the importance of family to the moral imperative not to murder. Certainly, there are exceptions, but they're not the majority.
However, we've never had to share the Earth with a powerful non-human intelligence. How can we be sure AI's values and priorities will align with our own?
This alignment problem underpins fears of an AI catastrophe: that a form of superintelligence emerges that cares little for the beliefs, attitudes and rules that underpin human societies. If we're to have safe AI, ensuring it remains aligned with us will be crucial (see "X-Risk").
In early July, OpenAI – one of the companies developing advanced AI – announced plans for a "superalignment" programme, designed to ensure AI systems much smarter than humans follow human intent. "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," the company said.
- B is for…
- Bias
- For an AI to learn, it needs to learn from us. Unfortunately, humanity is hardly bias-free. If an AI acquires its abilities from a dataset that is skewed – for example, by race or gender – then it has the potential to spew out inaccurate, offensive stereotypes. And as we hand over more and more gatekeeping and decision-making to AI, many worry that machines could enact hidden prejudices, preventing some people from accessing certain services or knowledge. This discrimination would be obscured by supposed algorithmic impartiality.
- In the worlds of AI ethics and safety, some researchers believe that bias – along with other near-term problems such as surveillance misuse – is a far more pressing problem than proposed future concerns such as extinction risk.
- In response, some catastrophic risk researchers point out that the various dangers posed by AI are not necessarily mutually exclusive – for example, if rogue nations misused AI, it could suppress citizens' rights and create catastrophic risks. However, there is strong disagreement forming about which should be prioritised in terms of government regulation and oversight, and whose concerns should be listened to.
- C is for…
- Compute
- Not a verb, but a noun. Compute refers to the computational resources – such as processing power – required to train AI. It can be quantified, so it's a proxy to measure how quickly AI is advancing (as well as how costly and intensive it is, too). Since 2012, the amount of compute has doubled every 3.4 months, which means that, when OpenAI's GPT-3 was trained in 2020, it required 600,000 times more computing power than one of the most cutting-edge machine learning systems from 2012. Opinions differ on how long this rapid rate of change can continue, and whether innovations in computing hardware can keep up: will it become a bottleneck?
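As a rough back-of-the-envelope sketch of what that doubling claim implies (taking the 600,000-fold figure above as given, and simply unpacking the exponential arithmetic):

    import math

    doubling_months = 3.4
    factor = 600_000                       # growth factor quoted above
    doublings = math.log2(factor)          # about 19 doublings
    months = doublings * doubling_months   # roughly 65 months of growth at that rate
    print(f"{doublings:.1f} doublings, i.e. about {months:.0f} months at one doubling every 3.4 months")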
- D is for…
- Diffusion models
- A few years ago, one of the dominant techniques for getting AI to create images were so-called generative adversarial networks (Gan). These algorithms worked in opposition to each other – one trained to produce images while the other checked its work compared with reality, leading to continual improvement. However, recently a new breed of machine learning called "diffusion models" have shown greater promise, often producing superior images. Essentially, they acquire their intelligence by destroying their training data with added noise, and then they learn to recover that data by reversing this process. They're called diffusion models because this noise-based learning process echoes the way gas molecules diffuse.
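A minimal NumPy sketch of the forward, noise-adding half of that idea (the linear noise schedule is an assumption made for brevity; real diffusion models use carefully designed schedules and a neural network to learn the reverse step):

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((8, 8))                       # stand-in for a training image

    def add_noise(x, t, num_steps=50):
        """Blend the data with Gaussian noise; at t = num_steps it is almost pure noise."""
        alpha = 1.0 - t / num_steps                  # simple linear schedule (an assumption)
        return alpha * x + (1 - alpha) * rng.standard_normal(x.shape)

    slightly_noisy = add_noise(image, t=10)
    pure_noise = add_noise(image, t=50)
    # A diffusion model is trained to predict the noise that was added, so that
    # generation can run the process in reverse: from random noise back to an image.
    print(slightly_noisy.std(), pure_noise.std())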
- E is for…
- Emergence & explainability
- Emergent behaviour describes what happens when an AI does something unanticipated, surprising and sudden, apparently beyond its creators' intention or programming. As AI learning has become more opaque, building connections and patterns that even its makers themselves can't unpick, emergent behaviour becomes a more likely scenario. The average person might assume that to understand an AI, you'd lift up the metaphorical hood and look at how it was trained. Modern AI is not so transparent; its workings are often hidden in a so-called "black box". So, while its designers may know what training data they used, they have no idea how it formed the associations and predictions inside the box (see "Unsupervised Learning"). That's why researchers are now focused on improving the "explainability" (or "interpretability") of AI – essentially making its internal workings more transparent and understandable to humans. This is particularly important as AI makes decisions in areas that affect people's lives directly, such as law or medicine. If a hidden bias exists in the black box, we need to know.
- F is for…
- Foundation models
- This is another term for the new generation of AIs that have emerged over the past year or two, which are capable of a range of skills: writing essays, drafting code, drawing art or composing music. Whereas past AIs were task-specific – often very good at one thing (see "Weak AI") – a foundation model has the creative ability to apply the information it has learnt in one domain to another. A bit like how driving a car prepares you to be able to drive a bus. Anyone who has played around with the art or text that these models can produce will know just how proficient they have become. However, as with any world-changing technology, there are questions about the potential risks and downsides, such as their factual inaccuracies (see "Hallucination") and hidden biases (see "Bias"), as well as the fact that they are controlled by a small group of private technology companies. In April, the UK government announced plans for a Foundation Model Taskforce, which seeks to "develop the safe and reliable use" of the technology.
- G is for…
- Ghosts
- We may be entering an era when people can gain a form of digital immortality – living on after their deaths as AI "ghosts". The first wave appears to be artists and celebrities – holograms of Elvis performing at concerts, or Hollywood actors like Tom Hanks saying he expects to appear in movies after his death. However, this development raises a number of thorny ethical questions: who owns the digital rights to a person after they are gone? What if the AI version of you exists against your wishes? And is it OK to "bring people back from the dead"?
- H is for…
- Hallucination
- Sometimes if you ask an AI like ChatGPT, Bard or Bing a question, it will respond with great confidence – but the facts it spits out will be false. This is known as a hallucination. One high-profile example that emerged recently led to students who had used AI chatbots to help them write essays for coursework being caught out after ChatGPT "hallucinated" made-up references as the sources for information it had provided. It happens because of the way that generative AI works. It is not turning to a database to look up fixed factual information, but is instead making predictions based on the information it was trained on. Often its guesses are good – in the ballpark – but that's all the more reason why AI designers want to stamp out hallucination. The worry is that if an AI delivers its false answers confidently with the ring of truth, they may be accepted by people – a development that would only deepen the age of misinformation we live in.
- I is for…
- Instrumental convergence
- Imagine an AI with a number one priority to make as many paperclips as possible. If that AI was superintelligent and misaligned with human values, it might reason that if it was ever switched off, it would fail in its goal… and so would resist any attempts to do so. In one very dark scenario, it might even decide that the atoms inside human beings could be repurposed into paperclips, and so do everything within its power to harvest those materials. This is the Paperclip Maximiser thought experiment, and it's an example of the so-called "instrumental convergence thesis". Roughly, this proposes that superintelligent machines would develop basic drives, such as seeking to ensure their own self-preservation, or reasoning that extra resources, tools and cognitive ability would help them with their goals. This means that even if an AI was given an apparently benign priority – like making paperclips – it could lead to unexpectedly harmful consequences. Researchers and technologists who buy into these fears argue that we need to ensure superintelligent AIs have goals that are carefully and safely aligned with our needs and values, that we should be mindful of emergent behaviour, and that therefore they should be prevented from acquiring too much power.
- J is for…
- Jailbreak
- After notorious cases of AI going rogue, designers have placed content restrictions on what AIs spit out. Ask an AI to describe how to do something illegal or unethical, and they'll refuse. However, it's possible to "jailbreak" them – which means to bypass those safeguards using creative language, hypothetical scenarios, and trickery. Wired magazine recently reported on one example, where a researcher managed to get various conversational AIs to reveal how to hotwire a car. Rather than ask directly, the researcher got the AIs he tested to imagine a word game involving two characters called Tom and Jerry, each talking about cars or wires. Despite the safeguards, the hotwiring procedure snuck out. The researcher found the same jailbreak trick could also unlock instructions for making the drug methamphetamine.
- K is for…
- Knowledge graph
- Knowledge graphs, also known as semantic networks, are a way of thinking about knowledge as a network, so that machines can understand how concepts are related. For example, at the most basic level, a cat would be linked more strongly to a dog than to a bald eagle in such a graph because they're both domesticated mammals with fur and four legs. Advanced AI builds a far more advanced network of connections, based on all sorts of relationships, traits and attributes between concepts, across terabytes of training data (see "Training Data").
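A toy sketch of that cat/dog/bald-eagle example as subject–relation–object triples (both the triples and the crude similarity measure are invented for illustration):

    # Each edge is a (subject, relation, object) triple - the building block of semantic networks.
    TRIPLES = [
        ("cat", "is_a", "mammal"), ("dog", "is_a", "mammal"),
        ("bald eagle", "is_a", "bird"),
        ("cat", "covered_in", "fur"), ("dog", "covered_in", "fur"),
        ("bald eagle", "covered_in", "feathers"),
        ("cat", "is", "domesticated"), ("dog", "is", "domesticated"),
    ]

    def shared_facts(a, b):
        """Count relation/object pairs two concepts have in common - a crude similarity score."""
        facts = lambda x: {(r, o) for s, r, o in TRIPLES if s == x}
        return len(facts(a) & facts(b))

    print(shared_facts("cat", "dog"))          # 3 shared facts -> strongly linked
    print(shared_facts("cat", "bald eagle"))   # 0 shared facts -> weakly linked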
- L is for…
- Large language models (LLMs)
- Perhaps the most direct way to define a large language model is to ask one to describe itself. Here's what OpenAI's ChatGPT had to say when asked: "A large language model is an advanced artificial intelligence system designed to understand and generate human-like language," it writes. "It utilises a deep neural network architecture with millions or even billions of parameters, enabling it to learn intricate patterns, grammar, and semantics from vast amounts of textual data." Quite a technical answer perhaps. Bard by Google was a little clearer: "A large language model is a type of artificial intelligence that is trained on a massive dataset of text and code. This allows LLMs to understand and generate human-like text, translate languages, write different kinds of creative content, and answer your questions in an informative way." LLMs are still under development, says Bard (of itself), but "they have the potential to revolutionise the way we interact with computers. In the future, LLMs could be used to create AI assistants that can help us with a variety of tasks, from writing our emails to booking our appointments. They could also be used to create new forms of entertainment, such as interactive novels or games."
- M is for…
- Model collapse
- To develop the most advanced AIs (aka "models"), researchers need to train them with vast datasets (see "Training Data"). Eventually though, as AI produces more and more content, that material will start to feed back into training data. If mistakes are made, these could amplify over time, leading to what the Oxford University researcher Ilia Shumailov calls "model collapse". This is "a degenerative process whereby, over time, models forget", Shumailov told The Atlantic recently. It can be thought of almost like senility.
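A toy illustration of that feedback loop (not taken from the research itself): each "generation" below fits a simple Gaussian model to a limited sample of the previous generation's output, so small sampling errors compound.

    import numpy as np

    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=10_000)        # the "real" data distribution

    mu, sigma = real.mean(), real.std()
    print(f"generation 0: sigma = {sigma:.3f}")
    for generation in range(1, 11):
        synthetic = rng.normal(mu, sigma, size=50)  # each model sees only model-made data
        mu, sigma = synthetic.mean(), synthetic.std()
        print(f"generation {generation}: sigma = {sigma:.3f}")
    # Sampling errors accumulate generation after generation, so the fitted
    # distribution drifts away from the real one - a toy form of "forgetting".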
- N is for…
- Neural network
- In the early days of AI research, machines were trained using logic and rules. The arrival of machine learning changed all that. Now the most advanced AIs learn for themselves. The evolution of this concept has led to "neural networks", a type of machine learning that uses interconnected nodes, modelled loosely on the human brain. (Read more: "Why humans will never understand AI")
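A minimal NumPy sketch of the idea: a tiny two-layer network of interconnected nodes whose forward pass is just weighted sums followed by non-linearities (the weights here are random rather than trained, so this shows only the structure, not the learning):

    import numpy as np

    rng = np.random.default_rng(1)

    # 4 input features -> 8 hidden nodes -> 1 output node
    W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
    W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)

    def forward(x):
        """One forward pass: each layer is a weighted sum of the previous layer's outputs."""
        hidden = np.maximum(0, x @ W1 + b1)            # ReLU non-linearity
        return 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid output in [0, 1]

    print(forward(np.array([0.5, -1.2, 3.0, 0.0])))    # a probability-like score

Training would then adjust W1 and W2 (typically by gradient descent) so that the network's outputs match labelled examples or some other objective.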
- O is for…
- Open-source
- Years ago, biologists realised that publishing details of dangerous pathogens on the internet is probably a bad idea – allowing potential bad actors to learn how to make killer diseases. Despite the benefits of open science, the risks seem too great. Recently, AI researchers and companies have been facing a similar dilemma: how much should AI be open-source? Given that the most advanced AI is currently in the hands of a few private companies, some are calling for greater transparency and democratisation of the technologies. However, disagreement remains about how to achieve the best balance between openness and safety.
- P is for…
- Prompt engineering
- AIs now are impressively proficient at understanding natural language. However, getting the very best results from them requires the ability to write effective "prompts": the text you type in matters. Some believe that "prompt engineering" may represent a new frontier for job skills, akin to when mastering Microsoft Excel made you more employable decades ago. If you're good at prompt engineering, goes the wisdom, you can avoid being replaced by AI – and may even command a high salary. Whether this continues to be the case remains to be seen.
- Q is for…
- Quantum machine learning
- In terms of maximum hype, a close second to AI in 2023 would be quantum computing. It would be reasonable to expect that the two would combine at some point. Using quantum processes to supercharge machine learning is something that researchers are now actively exploring. As a team of Google AI researchers wrote in 2021: "Learning models made on quantum computers may be dramatically more powerful…potentially boasting faster computation [and] better generalisation on less data." It's still early days for the technology, but one to watch.
- R is for…
- Race to the bottom
- As AI has advanced rapidly, mainly in the hands of private companies, some researchers have raised concerns that they could trigger a "race to the bottom" in terms of impacts. As chief executives and politicians compete to put their companies and countries at the forefront of AI, the technology could accelerate too fast to create safeguards, appropriate regulation and allay ethical concerns. With this in mind, earlier this year, various key figures in AI signed an open letter calling for a six-month pause in training powerful AI systems. In June 2023, the European Parliament adopted a new AI Act to regulate the use of the technology, in what will be the world's first detailed law on artificial intelligence if EU member states approve it.
- Reinforcement
- The AI equivalent of a doggy treat. When an AI is learning, it benefits from feedback to point it in the right direction. Reinforcement learning rewards outputs that are desirable, and punishes those that are not. A new area of machine learning that has emerged in the past few years is "Reinforcement learning from human feedback". Researchers have shown that having humans involved in the learning can improve the performance of AI models, and crucially may also help with the challenges of human-machine alignment, bias, and safety.
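A hedged sketch of the reward idea using tabular Q-learning on a tiny invented "corridor" task; reinforcement learning from human feedback is far more involved (it first trains a reward model from human preference data), so this shows only the basic reward-driven update:

    import random

    # A 5-cell corridor: the agent starts in cell 0 and gets a reward of +1 in cell 4.
    N_STATES, ACTIONS = 5, ("left", "right")
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma = 0.5, 0.9                              # learning rate and discount factor

    def step(state, action):
        nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0     # the "doggy treat"
        return nxt, reward, nxt == N_STATES - 1

    for _ in range(200):                                 # training episodes
        state, done = 0, False
        while not done:
            action = random.choice(ACTIONS)              # explore by acting randomly
            nxt, reward, done = step(state, action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            # Move the estimate towards (reward now + discounted future reward).
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = nxt

    # After training, the learned policy prefers "right" in every non-goal cell.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})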
- S is for…
- Superintelligence & shoggoths
- Superintelligence is the term for machines that would vastly outstrip our own mental capabilities. This goes beyond "artificial general intelligence" to describe an entity with abilities that the world's most gifted human minds could not match, or perhaps even imagine. Since we are currently the world's most intelligent species, and use our brains to control the world, it raises the question of what happens if we were to create something far smarter than us. A dark possibility is the "shoggoth with a smiley face": a nightmarish, Lovecraftian creature that some have proposed could represent AI's true nature as it approaches superintelligence. To us, it presents a congenial, happy AI – but hidden deep inside is a monster, with alien desires and intentions totally unlike ours.
- T is for…
- Training data
- Analysing training data is how an AI learns before it can make predictions – so what's in the dataset, whether it is biased, and how big it is all matter. The training data used to create OpenAI's GPT-3 was an enormous 45TB of text data from various sources, including Wikipedia and books. If you ask ChatGPT how big that is, it estimates around nine billion documents.
- U is for…
- Unsupervised learning
- Unsupervised learning is a type of machine learning where an AI learns from unlabelled training data without any explicit guidance from human designers. As BBC News explains in this visual guide to AI, you can teach an AI to recognise cars by showing it a dataset with images labelled "car". But to do so unsupervised, you'd allow it to form its own concept of what a car is, by building connections and associations itself. This hands-off approach, perhaps counterintuitively, leads to so-called "deep learning" and potentially more knowledgeable and accurate AIs.
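A small NumPy sketch of unsupervised learning in action: k-means clustering groups unlabelled 2-D points purely by similarity, with no human-provided labels (the toy data and the choice of k = 2 are assumptions made for illustration):

    import numpy as np

    rng = np.random.default_rng(3)
    # Unlabelled data: two blobs of 2-D points (the algorithm is never told which is which).
    X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])

    k = 2
    centres = X[rng.choice(len(X), size=k, replace=False)]   # random initial centres
    for _ in range(10):                                      # a few k-means iterations
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                        # assign each point to its nearest centre
        centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])

    print(centres)   # should land near (0, 0) and (4, 4) without ever seeing a label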
- V is for…
- Voice cloning
- Given only a minute of a person speaking, some AI tools can now quickly put together a "voice clone" that sounds remarkably similar. Here the BBC investigated the impact that voice cloning could have on society – from scams to the 2024 US election.
- W is for…
- Weak AI
- It used to be the case that researchers would build AI that could play single games, like chess, by training it with specific rules and heuristics. An example would be IBM's Deep Blue, a so-called "expert system". Many AIs like this can be extremely good at one task, but poor at anything else: this is "weak" AI. However, this is changing fast. More recently, AIs like DeepMind's MuZero have been released that can teach themselves to master chess, Go, shogi and 42 Atari games without knowing the rules. Another of DeepMind's models, called Gato, can "play Atari, caption images, chat, stack blocks with a real robot arm and much more". Researchers have also shown that ChatGPT can pass various exams that students take at law, medical and business school (although not always with flying colours). Such flexibility has raised the question of how close we are to the kind of "strong" AI that is indistinguishable from the abilities of the human mind (see "Artificial General Intelligence").
- X is for…
- X-risk
- Could AI bring about the end of humanity? Some researchers and technologists believe AI has become an "existential risk", alongside nuclear weapons and bioengineered pathogens, so its continued development should be regulated, curtailed or even stopped. What was a fringe concern a decade ago has now entered the mainstream, as various senior researchers and intellectuals have joined the fray. It's important to note that there are differences of opinion within this amorphous group – not all are total doomists, and not all outside this group are Silicon Valley cheerleaders. What unites most of them is the idea that, even if there's only a small chance that AI supplants our own species, we should devote more resources to preventing that happening. There are some researchers and ethicists, however, who believe such claims are too uncertain and possibly exaggerated, serving to support the interests of technology companies.
- Y is for…
- YOLO
- YOLO – which stands for "You Only Look Once" – is an object detection algorithm that is widely used by AI image recognition tools because of how fast it works. (Its creator, Joseph Redmon of the University of Washington, is also known for his rather esoteric CV design.)
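For readers who want to try a model in this family, the open-source ultralytics Python package wraps recent YOLO versions behind a short API. The sketch below is illustrative: the model file and image path are placeholders, and attribute names can differ between package versions.

```python
# pip install ultralytics   (third-party package, not part of the original YOLO paper)
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # small pretrained model; downloaded on first use
results = model("street_scene.jpg")  # placeholder image path

# A single forward pass yields detected boxes, confidences and class ids.
for box in results[0].boxes:
    print(box.xyxy, box.conf, box.cls)
```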
- Z is for…
- Zero-shot
- When an AI delivers a zero-shot answer, that means it is responding to a concept or object it has never encountered before.
So, as a simple example, if an AI designed to recognise images of animals has been trained on images of cats and dogs, you'd assume it'd struggle with horses or elephants. But through zero-shot learning, it can use what it knows about horses semantically – such as their number of legs or lack of wings – to compare those attributes with the animals it has been trained on.
The rough human equivalent would be an "educated guess". AIs are getting better and better at zero-shot learning, but as with any inference, it can be wrong.
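A toy sketch of that "educated guess": a model that can only estimate simple attributes from an image matches its estimate against hand-written attribute descriptions of classes it has never seen. All the attributes and numbers here are made up for illustration.

```python
import numpy as np

# Attribute descriptions: [legs, wings, trunk] -- illustrative only.
# The recogniser was never trained on horses or elephants; it only knows how
# to estimate these attributes from an image of any animal.
unseen_class_descriptions = {
    "horse": np.array([4, 0, 0]),
    "elephant": np.array([4, 0, 1]),
}

# Pretend the trained model looked at a photo and estimated these attributes.
predicted_attributes = np.array([4, 0, 1])

def zero_shot_classify(attributes, descriptions):
    # The "educated guess": pick the unseen class whose description best matches.
    return min(descriptions,
               key=lambda name: np.linalg.norm(descriptions[name] - attributes))

print(zero_shot_classify(predicted_attributes, unseen_class_descriptions))  # "elephant"
```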
Google AI (Wikipedia)
Including: Washington Post articles below ("1." = 5/13/2024 issue; "2." = 5/14/2024 issue):
Pictured below: Web publishers brace for carnage as Google adds AI answers
Including: Washington Post articles below ("1." = 5/13/2024 issue; "2." = 5/14/2024 issue):
- Web publishers brace for carnage as Google adds AI answers
- These four searches show how Google is changing with AI and shopping
- YouTube Video: How to Rank in AI Search (March 2024 Update)
- YouTube Video: How to Build an AI-Powered Video Search App
- YouTube Video: Is Google's Search Generative Experience (AI) any Better Than Normal Search?
Pictured below: Web publishers brace for carnage as Google adds AI answers
Google AI (Wikipedia)
Google AI is a division of Google dedicated to artificial intelligence. It was announced at Google I/O 2017 by CEO Sundar Pichai.
This division has expanded its reach with research facilities in various parts of the world such as Zurich, Paris, Israel, and Beijing.
In 2023, Google AI was part of the reorganization initiative that elevated its head, Jeff Dean, to the position of chief scientist at Google.
This reorganization merged Google Brain with DeepMind, a UK-based company that Google acquired in 2014 and which had operated separately from the company's core research.
This division is predicted to rise in value and performance as AI becomes more mainstream, since Google is already an AI powerhouse.
Projects:
- Google Vids: AI-powered video creation for work.
- Google Assistant: a virtual assistant software application, developed by Google AI since 2023.
- Cloud TPUs: serving cloud-based TPUs (tensor processing units) for developing machine learning software. The TPU Research Cloud provides researchers engaged in open-source machine learning research with free access to a cluster of cloud TPUs.
- TensorFlow: a machine learning software library (a short illustrative sketch follows this list).
- Magenta: a deep learning research team exploring the role of machine learning as a tool in the creative process. The team has released many open source projects allowing artists and musicians to extend their processes using AI. With the use of Magenta, musicians and composers could create high-quality music at a lower cost, making it easier for new artists to enter the industry.
- Sycamore: a 54-qubit programmable quantum processor.
- LaMDA: a family of conversational neural language models.
- The creation of datasets in under-represented languages, to facilitate the training of AI models in these languages.
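For readers unfamiliar with TensorFlow (listed above), here is a minimal, self-contained sketch: a single-neuron model fitted to the line y = 2x. The data and layer sizes are invented for illustration and are not drawn from any Google project.

```python
import numpy as np
import tensorflow as tf

# Tiny illustrative dataset: y = 2x.
x = np.array([[0.0], [1.0], [2.0], [3.0]], dtype=np.float32)
y = 2 * x

# A one-neuron linear model, trained with plain gradient descent.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=200, verbose=0)

print(model.predict(np.array([[4.0]], dtype=np.float32)))  # close to 8.0
```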
Former Projects:
- Bard: a chatbot based on the Gemini model, no longer developed by Google AI since February 8, 2024, as the chatbot (now merged into the Gemini brand) is now developed by Google DeepMind (see next topic).
- Duet AI: a Google Workspace integration that can generate text and images, no longer developed by Google AI since February 8, 2024, as the integration (now merged into the Gemini brand) is now developed by Google DeepMind.
See also:
- Google Puts All Of Their A.I. Stuff On Google.ai, Announces Cloud TPU
- Google collects its AI initiatives under Google.ai
- Google collects AI-based services across the company into Google.ai – "Google.ai is a collection of products and teams across Alphabet with a focus on AI."
- Google's deep focus on AI is paying off
- Official website
- AI for Free Google Clicks
"Web publishers brace for carnage as Google adds AI answers":
The tech giant is rolling out AI-generated answers that displace links to human-written websites, threatening millions of creators
Washington Post
By Gerrit De Vynck and Cat Zakrzewski
Updated May 13, 2024 at 4:40 p.m. EDT | Published May 13, 2024 at 9:00 a.m. EDT
Kimber Matherne’s thriving food blog draws millions of visitors each month searching for last-minute dinner ideas.
But the mother of three says decisions made at Google, more than 2,000 miles from her home in the Florida panhandle, are threatening her business. About 40 percent of visits to her blog, Easy Family Recipes, come through the search engine, which has for more than two decades served as the clearinghouse of the internet, sending users to hundreds of millions of websites each day.
As the tech giant gears up for Google I/O, its annual developer conference, this week, creators like Matherne are worried about the expanding reach of its new search tool that incorporates artificial intelligence. The product, dubbed “Search Generative Experience,” or SGE, directly answers queries with complex, multi-paragraph replies that push links to other websites further down the page, where they’re less likely to be seen.
The shift stands to shake the very foundations of the web.
The rollout threatens the survival of the millions of creators and publishers who rely on the service for traffic. Some experts argue the addition of AI will boost the tech giant’s already tight grip on the internet, ultimately ushering in a system where information is provided by just a handful of large companies.
“Their goal is to make it as easy as possible for people to find the information they want,” Matherne said. “But if you cut out the people who are the lifeblood of creating that information — that have the real human connection to it — then that’s a disservice to the world.”
Google calls its AI answers “overviews” but they often just paraphrase directly from websites. One search for how to fix a leaky toilet provided an AI answer with several tips, including tightening tank bolts.
At the bottom of the answer, Google linked to The Spruce, a home improvement and gardening website owned by web publisher Dotdash Meredith, which also owns Investopedia and Travel + Leisure. Google’s AI tips lifted a phrase from The Spruce’s article word-for-word.
A spokesperson for Dotdash Meredith declined to comment.
[End of Article]
___________________________________________________________________________
These four searches show how Google is changing with AI and shopping:
You’ll start seeing more AI “answers.” They take some decoding.
By Tatum Hunter and Shira Ovide
Published May 14, 2024 at 9:00 a.m. EDT | Updated May 14, 2024 at 1:56 p.m. EDT
Googling is easy. Understanding why you’re seeing particular search results or whether they’re useful – not so simple.
The billions of us who use Google might see ads stuffed in many spots, text highlighted at the top of Google that (sometimes unreliably) answers your question, or maps of local businesses pulled from the company’s computer systems.
You’ll also now start seeing more information written by artificial intelligence, in one of the most consequential changes to your searches and the internet in Google’s 25-year history.
Google said Tuesday that beginning this week, everyone in the United States will have access to its “AI overviews” that Google has been slowly expanding for the past year.
For some types of searches, Google will use the same type of AI that powers OpenAI’s ChatGPT to summarize information or spit out digestible answers to complex questions – all right at the top of the page.
The more Google’s search pages expand beyond the familiar list of website links, the harder it becomes to know what all of this stuff is, and whether the information is reliable.
“I don’t think the average person knows what the different [search] features are,” said Kyle Byers at Semrush, a company that studies how companies and people use Google.
To break down what’s in your search results, we did four Google searches and explained what we saw and whether it was helpful.
We want this anatomy of search results to demystify Google, which for a generation has been the starting point for our questions, curiosities and needs.
In a statement, Google said its search “connects people to helpful information for billions of queries every day” and that Google is “constantly innovating and building helpful new features to help people find exactly what they’re looking for.”
The more we learned about how Google search works, the more we believed it’s an incredible resource – but it isn’t always ideal or trustworthy, particularly as Google leans more on AI.
1. “What is the best Mexican restaurant near me with great margaritas, a nice atmosphere and at least four stars on Yelp?”
Google would not typically give you a cogent answer to this kind of multipart, personalized quest for Mexican restaurants in San Francisco. But Google says its AI-generated replies are ideal for some types of complex searches.
We asked a human expert, San Francisco Chronicle associate restaurant critic Cesar Hernandez, to evaluate the Google AI’s restaurant suggestions. “I wouldn’t recommend any of those spots,” he said.
Hernandez said Chuy’s Fiestas and El Buen Comer are more what we’re looking for. He said Bombera in Oakland has great margaritas and food if we’re willing to travel farther.
Restaurant recommendations are subjective, of course. Google’s AI answer is likely synthesizing a huge volume of customer reviews on Yelp and a bunch of other information online.
It was useful to get a snappy answer without combing through reviews or articles. But the type of AI used by Google and ChatGPT essentially generates an average or typical response to your search. That might be great or good enough – or banal or wrong.
(Google said it is adding a feature to turn off search features like this and mostly see the classic web links.)
2. “Mexican restaurants near me”:
This search is more typical. We saw what Google calls its “places” panels that pull information from Google’s databases of businesses, their locations in Google Maps, customer reviews written in Google, operating hours and more.
This is handy information. But some Google critics, including frequent antagonist Yelp, say Google is doing you a disservice by showing you at least some information from Google’s computer systems – even if that isn’t the best source.
For an alternative, try searching for Mexican restaurants in Apple Maps. You’ll see reviews from Yelp or other sources.
Google said its local business listings help generate billions of interactions each month with customers, including phone calls and dining reservations.
3. “Best vacuums for pet hair”:
Searching Google for products is a minefield if you’re not sure what to buy, said Gisele Navarro, managing editor of product-review site HouseFresh.
You’ll see “sponsored” results at the top – companies that paid Google for ads that put their listings in a prominent place. Google makes no promise that those ads point you to the best vacuums.
Further down the page we saw small photos of other vacuum options. Those are from Google Shopping, mini-stores within Google that compete with Amazon. You’ll probably see Amazon listings when you do product-related searches, too.
(Amazon founder Jeff Bezos owns The Washington Post.)
Google, Amazon and the vacuum sellers are making no assurances that the products you see when you search for “best vacuums” are actually the best ones.
Navarro has also written about chronic deceit when Google prioritizes product reviews from some large websites that put little or no effort into testing products.
And Google has recently given more prominence in many product-related searches to forums like Reddit and Quora where people give advice to one another. The quality of those forums can be great – or outdated and spammy.
Google said its ads and product search results aim to be helpful and relevant, and that Google only makes money from ads when they’re useful enough for you to click on them.
Navarro believes Google is now so unreliable when you’re researching products that she tells friends and family members not to use the search engine for that.
Navarro recommends searching with alternatives like Kagi, DuckDuckGo or the Brave web browser. She said Consumer Reports, RTINGS.com, her own site and the Shortcut newsletter consistently publish trustworthy product reviews.
4. “Why did Toni Morrison change her name?”
We’re showing you nearly identical Google responses that came from two different parts of Google search.
First, we saw a Google AI-generated answer with links to four websites.
Google’s AI gave us a correct answer this time but experts generally say this type of AI is not reliable if you’re looking for facts like this Toni Morrison question. It’s best to click on one of the regular links Google shows you or search Wikipedia.
Sometimes AI will botch an answer or make one up. A recent X post showed a Google AI-generated reply for how to safely pass a kidney stone. The answer suggested drinking plenty of urine. (Google said it fixed the problem.)
We also saw almost the same answer about Morrison from a feature that Google calls a “featured snippet.” The text wasn’t AI-generated. It’s a highlighted section from a website that Google considers a reliable answer to your question.
You might see these snippets at the top of Google, as we did here, or under the “People also ask” questions.
This Morrison snippet was correct and sourced to a trustworthy news organization. Occasionally, though, Google highlights scam phone numbers for businesses or misleading information in snippets.
The company said it’s “extremely rare” for scam numbers to appear in snippets and that it has stricter rules on the information in those fields.