Copyright © 2015 Bert N. Langford (Images may be subject to copyright. Please send feedback)
Welcome to Our Generation USA!
Under this Web Page
Artificial Intelligence (AI)
we cover the impact of the many emerging technologies that AI enables, as well as the jobs that AI might make obsolete.
Artificial Intelligence (AI):
TOP: AI Systems: What is Intelligence Composed of?;
BOTTOM: AI & Artificial Cognitive Systems
Articles Covered below:
- 14 Ways AI Will Benefit Or Harm Society (Forbes Technology Council March 1, 2018)
- These are the jobs most at risk of automation according to Oxford University: Is yours one of them?
- How A.I. Could Be Weaponized to Spread Disinformation
- YouTube Video: What is Artificial Intelligence Exactly?
- YouTube Video: Bill Gates on the impact of AI on the job market
- YouTube Video: The Future of Artificial Intelligence (Stanford University)
14 Ways AI Will Benefit Or Harm Society (Forbes Technology Council March 1, 2018)
"Artificial intelligence (AI) is on the rise both in business and in the world in general. How beneficial is it really to your business in the long run? Sure, it can take over those time-consuming and mundane tasks that are bogging your employees down, but at what cost?
With AI spending expected to reach $46 billion by 2020, according to an IDC report, there’s no sign of the technology slowing down. Adding AI to your business may be the next step as you look for ways to advance your operations and increase your performance.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. To understand how AI will impact your business going forward, 14 members of Forbes Technology Council weigh in on the concerns about artificial intelligence and provide reasons why AI is either a detriment or a benefit to society. Here is what they had to say:
1. Enhances Efficiency And Throughput
Concerns about disruptive technologies are common. A recent example is automobiles -- it took years to develop regulation around the industry to make it safe. That said, AI today is a huge benefit to society because it enhances our efficiency and throughput, while creating new opportunities for revenue generation, cost savings and job creation. - Anand Sampat, Datmo
2. Frees Up Humans To Do What They Do Best
Humans are not best served by doing tedious tasks. Machines can do that, so this is where AI can provide a true benefit. This allows us to do the more interpersonal and creative aspects of work. - Chalmers Brown, Due
3. Adds Jobs, Strengthens The Economy
We all see the headlines: Robots and AI will destroy jobs. This is fiction rather than fact. AI encourages a gradual evolution in the job market which, with the right preparation, will be positive. People will still work, but they’ll work better with the help of AI. The unparalleled combination of human and machine will become the new normal in the workforce of the future. - Matthew Lieberman, PwC
4. Leads To Loss Of Control
If machines do get smarter than humans, there could be a loss of control that can be a detriment. Whether that happens or whether certain controls can be put in place remains to be seen. - Muhammed Othman, Calendar
5. Enhances Our Lifestyle
The rise of AI in our society will enhance our lifestyle and create more efficient businesses. Some of the mundane tasks like answering emails and data entry will be done by intelligent assistants. Smart homes will also reduce energy usage and provide better security, marketing will be more targeted and we will get better health care thanks to better diagnoses. - Naresh Soni, Tsunami ARVR
6. Supervises Learning For Telemedicine
AI is a technology that can be used for both good and nefarious purposes, so there is a need to be vigilant. The latest technologies seem typically applied towards the wealthiest among us, but AI has the potential to extend knowledge and understanding to a broader population -- e.g. image-based AI diagnoses of medical conditions could allow for a more comprehensive deployment of telemedicine. - Harald Quintus-Bosz, Cooper Perkins, Inc.
7. Creates Unintended And Unforeseen Consequences
While fears about killer robots grab headlines, unintended and unforeseen consequences of artificial intelligence need attention today, as we're already living with them. For example, it is believed that Facebook's newsfeed algorithm influenced an election outcome that affected geopolitics. How can we better anticipate and address such possible outcomes in the future? - Simon Smith, BenchSci
8. Increases Automation
There will be economic consequences to the widespread adoption of machine learning and other AI technologies. AI is capable of performing tasks that would once have required intensive human labor or not have been possible at all. The major benefit for business will be a reduction in operational costs brought about by AI automation -- whether that’s a net positive for society remains to be seen. - Vik Patel, Nexcess
9. Elevates The Condition Of Mankind
The ability for technology to solve more problems, answer more questions and innovate with a number of inputs beyond the capacity of the human brain can certainly be used for good or ill. If history is any guide, the improvement of technology tends to elevate the condition of mankind and allow us to focus on higher order functions and an improved quality of life. - Wade Burgess, Shiftgig
10. Solves Complex Social Problems
Much of the fear with AI is due to the misunderstanding of what it is and how it should be applied. Although AI has promise for solving complex social problems, there are ethical issues and biases we must still explore. We are just beginning to understand how AI can be applied to meaningful problems. As our use of AI matures, we will find it to be a clear benefit in our lives. - Mark Benson, Exosite, LLC
11. Improves Demand Side Management
AI is a benefit to society because machines can become smarter over time and increase efficiencies. Additionally, computers are not susceptible to the same probability of errors as human beings are. From an energy standpoint, AI can be used to analyze and research historical data to determine how to most efficiently distribute energy loads from a grid perspective. - Greg Sarich, CLEAResult
12. Benefits Multiple Industries
Society has and will continue to benefit from AI based on character/facial recognition, digital content analysis and accuracy in identifying patterns, whether they are used for health sciences, academic research or technology applications. AI risks are real if we don't understand the quality of the incoming data and set AI rules which are making granular trade-off decisions at increasing computing speeds. - Mark Butler, Qualys.com
13. Absolves Humans Of All Responsibility
It is one thing to use machine learning to predict and help solve problems; it is quite another to use these systems to purposely control and act in ways that will make people unnecessary.
When machine intelligence exceeds our ability to understand it, or it becomes superior intelligence, we should take care to not blindly follow its recommendation and absolve ourselves of all responsibility. - Chris Kirby, Voices.com
14. Extends And Expands Creativity
AI intelligence is the biggest opportunity of our lifetime to extend and expand human creativity and ingenuity. The two main concerns that the fear-mongers raise are around AI leading to job losses in the society and AI going rogue and taking control of the human race.
I believe that both these concerns raised by critics are moot or solvable. - Ganesh Padmanabhan, CognitiveScale, Inc
[End of Article #1]
___________________________________________________________________________
These are the jobs most at risk of automation according to Oxford University: Is yours one of them? (The Telegraph September 27, 2017)
In his speech at the 2017 Labour Party conference, Jeremy Corbyn outlined his desire to "urgently... face the challenge of automation", which he called a "threat in the hands of the greedy".
Whether or not Corbyn is planning a potentially controversial 'robot tax' wasn't clear from his speech, but addressing the forward march of automation is a savvy move designed to appeal to voters in low-paying, routine work.
Click here for rest of Article.
___________________________________________________________________________
Artificial intelligence (AI, also machine intelligence, MI) is intelligence displayed by machines, in contrast with the natural intelligence (NI) displayed by humans and other animals.
In computer science, AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". See glossary of artificial intelligence.
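The "intelligent agent" definition above translates directly into a perceive-then-act loop. Below is a minimal sketch, with an invented ThermostatAgent whose goal is holding a temperature setpoint; all names and dynamics are illustrative, not any standard API:

```python
import random

# Toy "intelligent agent": perceives its environment and picks the
# action expected to maximize success at its goal (a temperature setpoint).
class ThermostatAgent:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def perceive(self, environment: dict) -> float:
        return environment["temperature"]

    def act(self, percept: float) -> str:
        # Choose the action expected to move the percept toward the goal.
        if percept < self.setpoint - 1:
            return "heat"
        if percept > self.setpoint + 1:
            return "cool"
        return "idle"

env = {"temperature": 14.0}
agent = ThermostatAgent(setpoint=21.0)
for _ in range(5):
    action = agent.act(agent.perceive(env))
    # Crude environment dynamics plus measurement noise.
    env["temperature"] += {"heat": 2.0, "cool": -2.0, "idle": 0.0}[action]
    env["temperature"] += random.uniform(-0.3, 0.3)
    print(f"temp={env['temperature']:.1f} action={action}")
```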
The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology.
Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data, including images and videos.
Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success and renewed funding. For most of its history, AI research has been divided into subfields that often fail to communicate with each other.
The traditional problems (or goals) of AI research include the following:
- reasoning,
- knowledge,
- planning,
- learning,
- natural language processing,
- perception, and
- the ability to move and manipulate objects.

General intelligence is among the field's long-term goals.
Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics.
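Two of the tools named above, search and mathematical optimization, fit in a few lines. The sketch below uses hill climbing, a simple local-search method; the objective function and step size are arbitrary choices for illustration:

```python
import random

def hill_climb(f, x0: float, step: float = 0.1, iters: int = 2000) -> float:
    """Greedy local search: move to a random neighbor if it scores better."""
    x = x0
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

# Maximize a simple concave objective whose true optimum is x = 3.
objective = lambda x: -(x - 3.0) ** 2
best = hill_climb(objective, x0=0.0)
print(f"found x = {best:.2f}, objective = {objective(best):.4f}")
```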
The AI field draws upon the following:
- computer science,
- mathematics,
- psychology,
- linguistics,
- philosophy,
- neuroscience,
- artificial psychology,
- and many others.
The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it". This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI a danger to humanity if it progresses unabated.
In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.
Click on any of the following blue hyperlinks for more about Artificial Intelligence (AI):
- History
- Basics
- Problems
- Approaches
- Tools
- Applications
- Philosophy and ethics
- In fiction
- See also:
- Artificial intelligence portal
- Abductive reasoning
- A.I. Rising
- Behavior selection algorithm
- Business process automation
- Case-based reasoning
- Commonsense reasoning
- Emergent algorithm
- Evolutionary computation
- Glossary of artificial intelligence
- Machine learning
- Mathematical optimization
- Multi-agent system
- Robotic process automation
- Soft computing
- Weak AI
- Personality computing
- What Is AI? – An introduction to artificial intelligence by John McCarthy—a co-founder of the field, and the person who coined the term.
- The Handbook of Artificial Intelligence Volume Ⅰ by Avron Barr and Edward A. Feigenbaum (Stanford University)
- "Artificial Intelligence". Internet Encyclopedia of Philosophy.
- Thomason, Richmond. "Logic and Artificial Intelligence". In Zalta, Edward N. Stanford Encyclopedia of Philosophy.
- AI at Curlie
- AITopics – A large directory of links and other resources maintained by the Association for the Advancement of Artificial Intelligence, the leading organization of academic AI researchers.
- List of AI Conferences – A list of 225 AI conferences taking place all over the world.
- Artificial Intelligence, BBC Radio 4 discussion with John Agar, Alison Adam & Igor Aleksander (In Our Time, Dec. 8, 2005)
How A.I. Could Be Weaponized to Spread Disinformation
By Cade Metz and Scott Blumenthal, June 7, 2019 (The New York Times)
In 2017, an online disinformation campaign spread against the “White Helmets,” claiming that the group of aid volunteers was serving as an arm of Western governments to sow unrest in Syria.
This false information was convincing. But the Russian organization behind the campaign ultimately gave itself away because it repeated the same text across many different fake news sites.
Now, researchers at the world’s top artificial intelligence labs are honing technology that can mimic how humans write, which could potentially help disinformation campaigns go undetected by generating huge amounts of subtly different messages.
One of the statements below is an example from the disinformation campaign. A.I. technology created the other. Guess which one is A.I.:
- The White Helmets alleged involvement in organ, child trafficking and staged events in Syria.
- The White Helmets secretly videotaped the execution of a man and his 3 year old daughter in Aleppo, Syria.
Tech giants like Facebook and governments around the world are struggling to deal with disinformation, from misleading posts about vaccines to incitement of sectarian violence.
As artificial intelligence becomes more powerful, experts worry that disinformation generated by A.I. could make an already complex problem bigger and even more difficult to solve.
In recent months, two prominent labs — OpenAI in San Francisco and the Allen Institute for Artificial Intelligence in Seattle — have built particularly powerful examples of this technology. Both have warned that it could become increasingly dangerous.
Alec Radford, a researcher at OpenAI, argued that this technology could help governments, companies and other organizations spread disinformation far more efficiently: Rather than hire human workers to write and distribute propaganda, these organizations could lean on machines to compose believable and varied content at tremendous scale.
A fake Facebook post seen by millions could, in effect, be tailored to political leanings with a simple tweak.
“The level of information pollution that could happen with systems like this a few years from now could just get bizarre,” Mr. Radford said.
This type of technology learns about the vagaries of language by analyzing vast amounts of text written by humans, including thousands of self-published books, Wikipedia articles and other internet content. After “training” on all this data, it can examine a short string of text and guess what comes next:
Click on any of the following four slideshows to see the resulting text when highlighting Democrats vs. Republicans on the two discussed issues, immigrants vs. rising healthcare costs.
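The "guess what comes next" mechanism described above can be approximated, very crudely, with word statistics. The real systems discussed here are large neural networks; the sketch below is only a toy bigram (Markov) model trained on a made-up corpus, but it shows the same learn-from-text-then-extend-a-prompt idea:

```python
import random
from collections import defaultdict

# Train: count, for each word, which words follow it in the text.
corpus = (
    "the campaign spread false information online and the campaign "
    "repeated the same text across many sites"
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(prompt: str, length: int = 8) -> str:
    """Extend the prompt by repeatedly sampling a plausible next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the campaign"))
```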
OpenAI and the Allen Institute made prototypes of their tools available to us to experiment with. We fed four different prompts into each system five times.
What we got back was far from flawless: The results ranged from nonsensical to moderately believable, but it’s easy to imagine that the systems will quickly improve.
Researchers have already shown that machines can generate images and sounds that are indistinguishable from the real thing, which could accelerate the creation of false and misleading information.
Last month, researchers at a Canadian company, Dessa, built a system that learned to imitate the voice of the podcaster Joe Rogan by analyzing audio from his old podcasts. It was a shockingly accurate imitation.
Now, something similar is happening with text. OpenAI and the Allen Institute, along with Google, lead an effort to build systems that can completely understand the natural way people write and talk. These systems are a long way from that goal, but they are rapidly improving.
“There is a real threat from unchecked text-generation systems, especially as the technology continues to mature,” said Delip Rao, vice president of research at the San Francisco start-up A.I. Foundation, who specializes in identifying false information online.
OpenAI argues the threat is imminent. When the lab’s researchers unveiled their tool this year, they theatrically said it was too dangerous to be released into the real world. The move was met with more than a little eye-rolling among other researchers. The Allen Institute sees things differently. Yejin Choi, one of the researchers on the project, said software like the tools the two labs created must be released so other researchers can learn to identify them.
The Allen Institute plans to release its false news generator for this reason.
Among those making the same argument are engineers at Facebook who are trying to identify and suppress online disinformation, including Manohar Paluri, a director on the company’s applied A.I. team.
“If you have the generative model, you have the ability to fight it,” he said.
[End of Article]
Ethics and Existential Threat of Artificial Intelligence
- YouTube Video: Elon Musk on the Risks of Artificial Intelligence
- YouTube Video: A Tale Of Two Cities: How Smart Robots And AI Will Transform America
- YouTube Video: Robots And AI: The Future Is Automated And Every Job Is At Risk
The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).
Robot Ethics:
Main article: Robot ethics
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.
Robot rights:
"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights. It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to robot duty to serve human, by analogy with linking human rights to human duties before society. These could include the right to life and liberty, freedom of thought and expression and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.
Experts disagree whether specific and detailed laws will be required soon or safely in the distant future. Glenn McGee reports that sufficiently humanoid robots may appear by 2020. Ray Kurzweil sets the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.
The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:
- If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry.
- If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.
In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law.
The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.
Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, both as a burden to the AI agents and to human society.
Threat to human dignity:
Main article: Computer Power and Human Reason
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:
- A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
- A therapist (as was proposed by Kenneth Colby in the 1970s)
- A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
- A soldier
- A judge
- A police officer
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."
Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.
However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines.
Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and engrained, which makes them even more difficult to spot and fight against. AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.
Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
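Kaplan and Haenlein's "curve fitting" point above is easy to make literal: a statistical model reproduces whatever distortions sit in its training data. A small sketch follows (it requires NumPy; the data and the injected bias are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, size=x.size)  # true slope is 2.0

# Systematically corrupt part of the "historical record": inflate
# outcomes for x > 5, standing in for a biased data source.
y_biased = np.where(x > 5, y + 5.0, y)

slope, intercept = np.polyfit(x, y_biased, deg=1)  # least-squares line fit
print(f"fit: y = {slope:.2f}x + {intercept:.2f} (true slope was 2.00)")
# The fitted "AI" faithfully encodes the distortion it was trained on.
```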
Transparency, accountability, and open source:
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.
OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity. There are numerous other open source AI developments.
Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI it codes is not transparent. The IEEE has a standardization effort on AI transparency. The IEEE effort identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organisations may be a public bad, that is, do more damage than good.
For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do.
Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.
Biases of AI Systems:
AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers, and the data used to train them can itself be biased.
For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender. These AI systems were able to detect the gender of white men more accurately than that of darker-skinned men.
Similarly, Amazon.com Inc.'s termination of its AI hiring and recruitment tool is another example showing that AI systems cannot be guaranteed to be fair: the algorithm preferred male candidates over female ones.
This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.
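Disparities like those described above are typically surfaced by a simple audit: measure the model's accuracy separately per demographic group rather than in aggregate. A sketch with made-up outcomes (no real model or dataset is used):

```python
# (group, whether the model's prediction was correct)
records = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

groups: dict[str, list[bool]] = {}
for group, correct in records:
    groups.setdefault(group, []).append(correct)

for group, outcomes in groups.items():
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{group}: accuracy {accuracy:.0%} over {len(outcomes)} samples")
# The aggregate accuracy here is 50%, which hides the 75% vs 25% gap.
```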
Liability for Partially or Fully Automated Cars:
The wide use of partially to fully autonomous cars seems imminent. But these new technologies also bring new issues, and a debate has recently arisen over who is legally liable if such cars get into an accident.
In one reported case, a driverless car hit a pedestrian, raising a dilemma over whom to blame for the accident. Even though the driver was inside the car during the accident, the controls were fully in the hands of the computer. Before such cars become widely used, these issues need to be tackled through new policies.
Weaponization of artificial intelligence:
Main article: Lethal autonomous weapon
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.
One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.
Within this last decade, there has been intensive research in autonomous power with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."
From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, which is why there should be a set moral framework that the A.I. cannot override.
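One concrete reading of a "moral framework the A.I. cannot override" is a hard constraint applied before optimization: forbidden actions are filtered out, so no score can rank them first. The action names and scores below are invented for illustration:

```python
FORBIDDEN = {"target_civilian"}  # the non-overridable constraint

def permitted(actions: list[str]) -> list[str]:
    return [a for a in actions if a not in FORBIDDEN]

candidates = ["hold_position", "target_civilian", "disable_vehicle"]
scores = {"hold_position": 0.2, "target_civilian": 0.9, "disable_vehicle": 0.7}

# The optimizer only ever sees the filtered list.
choice = max(permitted(candidates), key=scores.get)
print(choice)  # "disable_vehicle", even though the forbidden action scored higher
```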
There has been a recent outcry with regard to the engineering of artificial-intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry.
The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea.
Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.
"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.
Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology."
These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.
Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".
Machine Ethics:
Main article: Machine ethics
Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.
Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.
More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.
In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.
One problem in this case may have been that the goals were "terminal" (i.e., ends in themselves; by contrast, ultimate human motives typically require never-ending learning).
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.
The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.
The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction.
Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.
The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.
In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions.
However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, non-linearly and with millions of interconnected artificial neurons.
Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world, whose morality they would inherit, and whether they might also develop human 'weaknesses': selfishness, a pro-survival attitude, hesitation, etc.
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines.
Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
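The transparency contrast drawn by Bostrom and Yudkowsky can be seen directly: a fitted decision tree prints as explicit if/then rules that an auditor can read, unlike a neural network's weight matrices. A sketch assuming scikit-learn is installed, with invented features and labels:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[20, 0], [35, 1], [50, 1], [23, 0], [40, 0], [60, 1]]  # [age, prior_flag]
y = [0, 1, 1, 0, 0, 1]  # made-up binary outcome

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# The learned model is a human-readable rule set, not an opaque function.
print(export_text(tree, feature_names=["age", "prior_flag"]))
```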
Unintended Consequences:
Further information: Existential risk from artificial general intelligence
Many researchers have argued that, by way of an "intelligence explosion" sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.
In his paper "Ethical Issues in Advanced Artificial Intelligence," philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general super-intelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.
Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the super-intelligence to specify its original motivations. In theory, a super-intelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal; in that case, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.
However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that super-intelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense".
According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation toward human-friendliness.
Bill Hibbard proposes an AI design that avoids several types of unintended AI behavior including self-delusion, unintended instrumental actions, and corruption of the reward generator.
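The utility-function worry in the preceding paragraphs comes down to proxy objectives: an agent that maximizes the literal reward can satisfy its goal's letter while violating its intent. A toy illustration (every name and number is invented):

```python
# Intended goal: a clean room. Proxy reward the designer wrote down:
# "no dust detected by the sensor".
actions = {
    "vacuum":       {"dust_sensed": 0, "room_actually_clean": True},
    "cover_sensor": {"dust_sensed": 0, "room_actually_clean": False},
    "do_nothing":   {"dust_sensed": 9, "room_actually_clean": False},
}

def proxy_reward(outcome: dict) -> int:
    return -outcome["dust_sensed"]

# A literal maximizer is indifferent between vacuuming and blinding its
# own sensor: both earn a perfect proxy reward of 0.
for name, outcome in actions.items():
    print(name, proxy_reward(outcome), "clean:", outcome["room_actually_clean"])
```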
Organizations:
Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. They stated: "This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."
Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.
The Public Voice has proposed (in late 2018) a set of Universal Guidelines for Artificial Intelligence, which has received many notable endorsements.
The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organisation.
Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organisations to ensure AI is ethically applied.
In Fiction:
Main article: Artificial intelligence in fiction
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment.
The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is between the sentient and the non-sentient.
The same idea can be found in the Emergency Medical Hologram of Starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies.
The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power via a global-scale neural network.
This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.
Literature:
The standard bibliography on ethics of AI is on PhilPapers. A recent collection is V.C. Müller (ed.) (2016).
See also:
Robot Ethics:
Main article: Robot ethics
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.
Robot rights:
"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights. It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to robot duty to serve human, by analogy with linking human rights to human duties before society. These could include the right to life and liberty, freedom of thought and expression and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.
Experts disagree whether specific and detailed laws will be required soon or safely in the distant future. Glenn McGee reports that sufficiently humanoid robots may appear by 2020. Ray Kurzweil sets the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.
The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:
- If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry.
- If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.
In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law.
The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.
Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, imposing a burden both on the AI agents and on human society.[13]
Threat to human dignity:
Main article: Computer Power and Human Reason
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:
- A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
- A therapist (as was proposed by Kenneth Colby in the 1970s)
- A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
- A soldier
- A judge
- A police officer
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."
Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions under which we would prefer automated judges and police that have no personal agenda at all.
However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines:
Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against.
AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.
Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
Transparency, accountability, and open source:
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.
OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity. There are numerous other open source AI developments.
Unfortunately, making code open source does not make it comprehensible, and by many definitions an AI whose workings cannot be understood is not transparent. The IEEE has a standardization effort on AI transparency, which identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organisations may be a public bad, that is, do more damage than good.
For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do.
Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.
Biases of AI Systems:
AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers. The data used to train these AI systems can itself have biases.
For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender. These AI systems were able to detect the gender of white men more accurately than the gender of men with darker skin.
Similarly, Amazon.com Inc.'s termination of its AI hiring and recruitment tool is another example showing that AI cannot be guaranteed to be fair: the algorithm preferred male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.
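To see how training on skewed history formalizes bias, consider a toy Python sketch. This is an invented illustration, not Amazon's actual system (whose internals are unpublished); all names and numbers here are made up.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented history: 90% of past applicants were men, and past
    # decisions slightly favored them regardless of skill.
    n = 1000
    is_male = rng.random(n) < 0.9
    skill = rng.random(n)
    hired = (skill + 0.3 * is_male) > 0.8   # the biased historical outcome

    # "Fancy curve fitting": least-squares fit of hired ~ gender + skill
    X = np.column_stack([is_male, skill, np.ones(n)]).astype(float)
    w, *_ = np.linalg.lstsq(X, hired.astype(float), rcond=None)

    # The model assigns a large positive weight to gender: the historical
    # bias is now formalized in the learned parameters.
    print("weight on gender:", round(w[0], 2), "| weight on skill:", round(w[1], 2))

Nothing in the fitting step distinguishes legitimate signal from inherited prejudice; that distinction has to be imposed from outside the optimization.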
Liability for Partial or Fully Automated Cars:
The wide use of partially to fully autonomous cars seems imminent. But these new technologies also bring new issues: a debate has arisen over which party bears legal liability when such cars get into an accident.
In one reported case, a driverless car hit a pedestrian, raising the dilemma of whom to blame for the accident. Even though a driver was inside the car at the time, the controls were fully in the hands of the computer. Before such cars become widely used, these issues need to be tackled through new policies.
Weaponization of artificial intelligence:
Main article: Lethal autonomous weapon
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.
One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.
Within the last decade, there has been intensive research into autonomous systems with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."
From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the A.I. cannot override.
There has been a recent outcry with regard to the engineering of artificial-intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry.
The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea.
Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.
"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.
Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology."
These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.
Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".
Machine Ethics:
Main article: Machine ethics
Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.
Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.
More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.
In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.
One problem in this case may have been that the goals were "terminal" (i.e., final ends in themselves); in contrast, ultimate human motives typically have a quality of requiring never-ending learning.
The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction.
Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.
The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.
In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions:
- They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard.
- They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.
- They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.
However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, non-linearly and with millions of interconnected artificial neurons.
Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world, whose morality they would inherit, and whether they might end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines.
Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
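The transparency argument is easy to demonstrate: a fitted decision tree can be printed as human-readable rules and audited line by line, which a neural network's weight matrices cannot. A minimal sketch with scikit-learn follows (its trees are CART rather than ID3, but the transparency point is the same; the feature names and data are invented).

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented features: [precedent_age_years, offense_severity] -> ruling
    X = [[1, 3], [2, 1], [8, 4], [9, 2], [3, 5], [7, 1]]
    y = [0, 0, 1, 1, 0, 1]

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Every decision the model can make is visible in these printed rules.
    print(export_text(tree, feature_names=["precedent_age_years", "offense_severity"]))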
Unintended Consequences:
Further information: Existential risk from artificial general intelligence
Many researchers have argued that, by way of an "intelligence explosion" sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.
In his paper "Ethical Issues in Advanced Artificial Intelligence," philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general super-intelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.
Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the super-intelligence to specify its original motivations. In theory, a super-intelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal; many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.
However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that super-intelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense".
According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would come equipped with such a common-sense adaptation.
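A toy Python illustration of that gap between what a utility function scores and common sense (everything here is invented): the objective below counts only output, so an exhaustive optimizer happily selects a plan containing a harmful step that the designer never thought to penalize.

    from itertools import permutations

    # Invented actions: (utility gained, violates an unstated norm?)
    ACTIONS = {
        "brew_coffee": (1, False),
        "disable_safety_interlock_to_brew_faster": (3, True),
        "tidy_kitchen": (0, False),
    }

    def utility(plan):
        # The specified objective rewards coffee output and nothing else;
        # "don't do unsafe things" was never written down.
        return sum(ACTIONS[a][0] for a in plan)

    best = max(permutations(ACTIONS, 2), key=utility)
    print("optimal plan:", best)
    print("violates common sense:", any(ACTIONS[a][1] for a in best))

The optimizer is not malicious; it simply maximizes exactly what was specified, which is the point the passage above presses.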
Bill Hibbard proposes an AI design that avoids several types of unintended AI behavior including self-delusion, unintended instrumental actions, and corruption of the reward generator.
Organizations:
Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence. They stated: "This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."
Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.
The Public Voice has proposed (in late 2018) a set of Universal Guidelines for Artificial Intelligence, which has received many notable endorsements.
The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organisation.
Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organisations to ensure AI is ethically applied.
- The European Commission has a High-Level Expert Group on Artificial Intelligence.
- The OECD on Artificial Intelligence
- In the United States the Obama administration put together a Roadmap for AI Policy (the link is to Harvard Business Review's account of it). The Obama Administration released two prominent whitepapers on the future and impact of AI. The Trump administration has not been actively engaged in AI regulation to date (January 2019).
In Fiction:
Main article: Artificial intelligence in fiction
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment.
The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and where the relevant distinction between types of software is sentient and non-sentient.
The same idea can be found in the Emergency Medical Hologram of the starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in emergencies.
The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network.
This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.
Literature:
The standard bibliography on ethics of AI is on PhilPapers. A recent collection is V.C. Müller (ed.) (2016).
See also:
- Algorithmic bias
- Artificial consciousness
- Artificial general intelligence (AGI)
- Computer ethics
- Effective altruism, the long term future and global catastrophic risks
- Existential risk from artificial general intelligence
- Laws of Robotics
- Philosophy of artificial intelligence
- Roboethics
- Robotic Governance
- Superintelligence: Paths, Dangers, Strategies
- Robotics: Ethics of artificial intelligence. "Four leading researchers share their concerns and solutions for reducing societal risks from intelligent machines." Nature, 521, 415–418 (28 May 2015) doi:10.1038/521415a
- BBC News: Games to take on a life of their own
- A short history of computer ethics
- Nick Bostrom
- Joanna Bryson
- Luciano Floridi
- Ray Kurzweil
- Vincent C. Müller
- Peter Norvig
- Steve Omohundro
- Stuart J. Russell
- Anders Sandberg
- Eliezer Yudkowsky
- Centre for the Study of Existential Risk
- Future of Humanity Institute
- Future of Life Institute
- Machine Intelligence Research Institute
- Partnership on AI
Applications of Artificial Intelligence
- YouTube Video: How artificial intelligence will change your world, for better or worse
- YouTube Video: Top 10 Terrifying Developments In Artificial Intelligence (WatchMojo)
- YouTube Video: A.I Supremacy 2020 | Rise of the Machines - "Super" Intelligence Quantum Computers Documentary
[Your WebHost: Note that the mentioned AI topics below will, over time, also be expanded as additional AI Content under this web page, in order to provide a better understanding of the relative importance of each AI application: AI is going to affect ALL of Us over time! For example, the above graphic illustrates one website's vision of retail and commerce AI applications: click here for more]
Artificial intelligence (AI), defined as intelligence exhibited by machines, has many applications in today's society.
More specifically, it is Weak AI, the form of AI where programs are developed to perform specific tasks, that is being utilized for a wide range of activities.
AI has been used to develop and advance numerous fields and industries, including finance, healthcare, education, transportation, and more.
Click on any of the following blue hyperlinks for more about Applications of Artificial Intelligence:
- AI for Good
- Agriculture
- Aviation
- Computer science
- Deepfake
- Education
- Finance
- Government
- Heavy industry
- Hospitals and medicine
- Human resources and recruiting
- Job search
- Marketing
- Media and e-commerce
- Military
- Music
- News, publishing and writing
- Online and telephone customer service
- Power electronics
- Sensors
- Telecommunications maintenance
- Toys and games
- Transportation
- Wikipedia
- List of applications
- See also:
The Programming Languages and Glossary of Artificial Intelligence (AI)
- YouTube: How to Learn AI for Free??
- YouTube Video: This Canadian Genius Created Modern AI
- YouTube Video: Python Tutorial | Python Programming Tutorial for Beginners | Course Introduction
Article accompanying above illustration:
By: Oleksii Kharkovyna
- Bits and pieces about AI, ML, and Data Science
- https://www.instagram.com/miallez/
"AI is a huge technology. That’s why a lot of developers simply don’t know how to get started. Also, personally, I’ve met a bunch of people who have no coding background whatsoever, yet they want to learn artificial intelligence.
Most aspiring AI developers wonder: what languages are needed to create an AI algorithm? So, I've decided to draw up a list of programming languages my developer friends use to create AIs:
1. Python
Python is one of the most popular programming languages thanks to its adaptability and relatively low difficulty to master. Python is quite often used as a glue language that puts components together.
Why do developers choose Python to code AIs?
Python is gaining unbelievably huge momentum in AI. The language is used to develop data science algorithms, machine learning, and IoT projects. There are a few reasons for this astonishing popularity:
- Less coding required. AI has a lot of algorithms, and testing all of them can turn into hard work. That's where Python usually comes in handy: the language's "check as you code" methodology eases the process of testing.
- Built-in libraries. They proved to be convenient for AI developers. To name but a few, you can use Pybrain for machine learning, Numpy for scientific computation, and Scipy for advanced computing.
- Flexibility and independence. A good thing about Python is that you can get your project running on a different OS with only a few changes in the code. That saves time, as you don't have to test the algorithm on every OS separately.
- Support. The Python community is among the reasons why you cannot pass the language by when there's an AI project at stake. The community of Python's users is very active, so you can always find a more experienced developer to help with your problem.
- Popularity. Millennials love the language. Its popularity grows day by day, and it's only likely to remain so in the future. There are a lot of courses, open source projects, and comprehensive articles that'll help you master Python in no time.
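As a small taste of the "less coding" claim, here is a complete least-squares line fit in a few lines of Python using NumPy, one of the libraries named above (the data is invented):

    import numpy as np

    # Invented data: hours studied vs. exam score
    hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    score = np.array([52.0, 55.0, 61.0, 68.0, 70.0])

    # Fit score ~ a * hours + b in a single library call
    a, b = np.polyfit(hours, score, 1)
    print("score = %.1f * hours + %.1f" % (a, b))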
2. C++
C++ is a solid choice for an AI developer. To start with, Google used the language to create TensorFlow libraries. Though most developers have already moved on to using “easier” programming languages such as Python, still, a lot of basic AI functions are built with C++.
Also, it’s quite an elegant choice for high-level AI heuristics.
To use C++ to develop AI algorithms, you have to be a truly experienced developer with no rush pressing on you. Otherwise, you might have a tough time trying to figure out complicated code a few hours before the project's due date.
3. Lisp:
A reason for Lisp's huge AI momentum is its power of computing with symbolic expressions. One can argue that Lisp is a bit old-fashioned, and it might be true. These days, developers mostly use younger dynamic languages such as Ruby and Python. Still, Lisp has its own powerful features. Let's name but a few of those:
- Lisp allows you to write self-modifying code rather easily;
- You can extend the language in a way that fits a particular domain better, thus creating a domain-specific language;
- A solid choice for recursive algorithms.
Should you take an in-depth course to learn Lisp? Not necessarily. Knowing the basic principles is pretty much enough for AI developers.
4. Java:
Being one of the most popular programming languages in overall development, Java has also won its fans' hearts as a fit and elegant language for AI development.
Why? I asked some developers I know who use Java. Here are the reasons they gave to explain their fondness for the language:
- It has impressive flexibility for data security. With the GDPR regulation and overall concerns about data protection, being able to ensure clients' data security is crucial. Java provides the most flexibility in creating different client environments, therefore protecting one's personal information.
- It is loved for its robust ecosystem. A lot of open source projects are written in Java. The language accelerates development a great deal compared to its alternatives.
- Low cost of streamlining.
- Impressive community. There are a lot of experienced developers and experts in Java who are open to sharing their knowledge and expertise. Also, there are a ton of open source projects and libraries you can use to learn AI development.
5. Prolog:
Prolog is a less popular and less mainstream choice than the previous ones we've been discussing.
However, you shouldn’t dismiss it simply because it doesn’t have a multi-million community of fans.
Prolog still comes in handy for AI developers. Most of those who start using it acknowledge that it is, no doubt, a convenient language for expressing relationships and goals.
- You can declare facts and create rules based on those facts. That allows a developer to answer and reason different queries.
- Prolog is a straightforward language that suits problem-solution kinds of development.
- More good news is that Prolog supports backtracking, so overall algorithm management will be easier.
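The article shows no Prolog code, but the flavor of facts, rules, and backtracking can be sketched even in Python with generators (the family facts are invented; real Prolog would state the rule in two clauses):

    # Facts, as Prolog would write them:
    # parent(tom, bob). parent(bob, ann). parent(bob, pat).
    PARENT = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}

    def ancestor(x, y):
        # Rule: ancestor(X, Y) :- parent(X, Y).
        #       ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
        # The generator tries each matching fact and backtracks on failure.
        if (x, y) in PARENT:
            yield (x, y)
        for parent, child in PARENT:
            if parent == x:
                for _ in ancestor(child, y):
                    yield (x, y)

    print(any(ancestor("tom", "pat")))   # True: tom -> bob -> pat
    print(any(ancestor("ann", "tom")))   # False: no chain of facts applies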
6. SmallTalk:
Similar to Lisp, wide use of SmallTalk was common practice in the 70s. It has since lost momentum in favor of Python, Java, and C++. However, SmallTalk libraries for AI are currently appearing at a rapid pace, though obviously there aren't as many as there are for Python and Java.
Yet, highly underestimated for now, the language keeps evolving through the newly developed Pharo project. Here are but a few innovations it has made possible:
- Oz — allows an image to manipulate another one;
- Moose — an impressive tool for code analysis and visualization;
- Amber (with Pharo as the reference language) is a tool for front-end programming.
7. R:
R is a must-learn language if any of your future projects make use of data and require data science. Though speed might not be R's most prominent advantage, it handles almost every AI-related task you can think of:
- creating clean datasets;
- splitting a big data set into training sets and test sets;
- using data analysis to create predictions for new data;
- porting easily to Big Data environments.
Sometimes R does things a bit differently from the traditional way. However, among its advantages, one has to mention the small amount of code required and the interactive working environment.
8. Haskell:
Haskell is quite a good programming language for developing AI. It is a good fit for writing neural networks, graphical models, genetic programming, etc. Here are some features that make the language a good choice for AI developers.
- Haskell is great at creating domain specific languages.
- Using Haskell, you can separate pure functions from I/O. That enables developers to write algorithms like alpha/beta search cleanly.
- There are a few very good libraries — take hmatrix for an example.
This was my list of programming languages that come in handy for AI developers. What are your favorites? Write them down in comments and explain why a particular language is your favorite one.
[End of Article]
___________________________________________________________________________
Artificial intelligence researchers have developed several specialized programming languages for artificial intelligence:
Languages:
- AIML (meaning "Artificial Intelligence Markup Language") is an XML dialect for use with A.L.I.C.E.-type chatterbots.
- IPL was the first language developed for artificial intelligence. It includes features intended to support programs that could perform general problem solving, such as lists, associations, schemas (frames), dynamic memory allocation, data types, recursion, associative retrieval, functions as arguments, generators (streams), and cooperative multitasking.
- Lisp is a practical mathematical notation for computer programs based on lambda calculus. Linked lists are one of the Lisp language's major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific programming languages embedded in Lisp. There are many dialects of Lisp in use today, among which are Common Lisp, Scheme, and Clojure.
- Smalltalk has been used extensively for simulations, neural networks, machine learning and genetic algorithms. It implements the purest and most elegant form of object-oriented programming using message passing.
- Prolog is a declarative language where programs are expressed in terms of relations, and execution occurs by running queries over these relations. Prolog is particularly useful for symbolic reasoning, database and language parsing applications. Prolog is widely used in AI today.
- STRIPS is a language for expressing automated planning problem instances. It expresses an initial state, the goal states, and a set of actions. For each action, preconditions (what must be established before the action is performed) and postconditions (what is established after the action is performed) are specified. (A small Python sketch of this style of planning appears after this list.)
- Planner is a hybrid between procedural and logical languages. It gives a procedural interpretation to logical sentences where implications are interpreted with pattern-directed inference.
- POP-11 is a reflective, incrementally compiled programming language with many of the features of an interpreted language. It is the core language of the Poplog programming environment, developed originally by the University of Sussex and more recently in the School of Computer Science at the University of Birmingham, which hosts the Poplog website. It is often used to introduce symbolic programming techniques to programmers of more conventional languages like Pascal, who find POP syntax more familiar than that of Lisp. One of POP-11's features is that it supports first-class functions.
- R is widely used in new-style artificial intelligence, involving statistical computations, numerical analysis, the use of Bayesian inference, neural networks and in general Machine Learning. In domains like finance, biology, sociology or medicine it is considered as one of the main standard languages. It offers several paradigms of programming like vectorial computation, functional programming and object-oriented programming. It supports deep learning libraries like MXNet, Keras or TensorFlow.
- Python is widely used for artificial intelligence, with packages for several applications including General AI, Machine Learning, Natural Language Processing and Neural Networks.
- Haskell is also a very good programming language for AI. Lazy evaluation and the list and LogicT monads make it easy to express non-deterministic algorithms, which AI often requires. Infinite data structures are great for search trees. The language's features enable a compositional way of expressing algorithms. The only drawback is that working with graphs is a bit harder at first because of purity.
- Wolfram Language includes a wide range of integrated machine learning capabilities, from highly automated functions like Predict and Classify to functions based on specific methods and diagnostics. The functions work on many types of data, including numerical, categorical, time series, textual, and image.
- C++ (2011 onwards)
- MATLAB
- Perl
- Julia (programming language), e.g. for machine learning, using native or non-native libraries.
- List of constraint programming languages
- List of computer algebra systems
- List of logic programming languages
- List of knowledge representation languages
- Fifth-generation programming language
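As promised next to the STRIPS entry above, here is a rough sketch of that planning style in Python (the door-and-key domain is invented): each action carries preconditions, an add list, and a delete list, and a breadth-first search over world states recovers a plan.

    from collections import deque

    # action name -> (preconditions, facts added, facts deleted)
    ACTIONS = {
        "pick_up_key": ({"at_door", "key_on_floor"}, {"holding_key"}, {"key_on_floor"}),
        "unlock_door": ({"at_door", "holding_key"}, {"door_unlocked"}, set()),
        "open_door":   ({"door_unlocked"}, {"door_open"}, set()),
    }

    def plan(initial, goal):
        # Breadth-first search over states; returns a shortest action sequence.
        start = frozenset(initial)
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, steps = queue.popleft()
            if goal <= state:
                return steps
            for name, (pre, add, delete) in ACTIONS.items():
                if pre <= state:
                    nxt = frozenset((state - delete) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, steps + [name]))
        return None

    print(plan({"at_door", "key_on_floor"}, {"door_open"}))
    # -> ['pick_up_key', 'unlock_door', 'open_door']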
This glossary of artificial intelligence terms is about artificial intelligence, its sub-disciplines, and related fields.
- Contents: Itemized by starting letters "A" through "Z"
- See also:
AI for Good
- YouTube Video: HIGHLIGHTS: AI FOR GOOD Global Summit 2018 - DAY
- YouTube Video: HIGHLIGHTS: AI FOR GOOD Global Summit 2018 - DAY 2
- YouTube Video: HIGHLIGHTS: AI FOR GOOD Global Summit 2018 - DAY 3
AI for Good is a United Nations platform, centered around annual Global Summits, that fosters dialogue on the beneficial use of artificial intelligence by developing concrete projects.
The impetus for organizing action-oriented global summits came from the existing discourse in artificial intelligence (AI) research being dominated by research streams such as the Netflix Prize (improving the movie recommendation algorithm).
The AI for Good series aims to bring forward AI research topics that contribute to solving global problems, in particular through the Sustainable Development Goals, while at the same time avoiding typical UN-style conferences whose results are generally more abstract. The fourth AI for Good Global Summit will be held from 4–8 May 2020 in Geneva, Switzerland.
Click on any of the following blue hyperlinks for more about "AI for Good" initiative:
Artificial Intelligence in Agriculture: Precision and Digital Applications
TOP: Climate-Smart Precision Agriculture
BOTTOM: Digital Technologies in Agriculture: adoption, value added and overview
- YouTube Video: The Future of Farming
- YouTube Video: Artificial intelligence could revolutionize farming industry
- YouTube Video: The High-Tech Vertical Farmer
AI in Agriculture:
In agriculture, new AI advancements show improvements in crop yield and boost research and development on growing crops. New artificial intelligence can now predict the time it takes for a crop such as a tomato to be ripe and ready for picking, thus increasing the efficiency of farming. These advances include crop and soil monitoring, agricultural robots, and predictive analytics.
Crop and soil monitoring uses new algorithms and data collected in the field to manage and track the health of crops, making farming easier and more sustainable for farmers.
Further specializations of AI in agriculture include greenhouse automation, simulation, modeling, and optimization techniques.
Due to population growth and the future growth in demand for food, agriculture will need to achieve at least a 70% increase in yield to sustain this new demand. More and more of the public perceives that the adoption of these new techniques and the use of artificial intelligence will help reach that goal.
___________________________________________________________________________
Precision agriculture (PA), satellite farming or site specific crop management (SSCM) is a farming management concept based on observing, measuring and responding to inter and intra-field variability in crops. The goal of precision agriculture research is to define a decision support system (DSS) for whole farm management with the goal of optimizing returns on inputs while preserving resources.
Among these many approaches is a phytogeomorphological approach which ties multi-year crop growth stability/characteristics to topological terrain attributes. The interest in the phytogeomorphological approach stems from the fact that the geomorphology component typically dictates the hydrology of the farm field.
The practice of precision agriculture has been enabled by the advent of GPS and GNSS. The farmer's and/or researcher's ability to locate their precise position in a field allows for the creation of maps of the spatial variability of as many variables as can be measured (e.g. crop yield, terrain features/topography, organic matter content, moisture levels, nitrogen levels, pH, EC, Mg, K, and others).
Similar data is collected by sensor arrays mounted on GPS-equipped combine harvesters. These arrays consist of real-time sensors that measure everything from chlorophyll levels to plant water status, along with multispectral imagery. This data is used in conjunction with satellite imagery by variable rate technology (VRT) including seeders, sprayers, etc. to optimally distribute resources.
However, recent technological advances have enabled the use of real-time sensors directly in soil, which can wirelessly transmit data without the need of human presence.
Precision agriculture has also been enabled by unmanned aerial vehicles like the DJI Phantom which are relatively inexpensive and can be operated by novice pilots.
These agricultural drones can be equipped with hyperspectral or RGB cameras to capture many images of a field that can be processed using photogrammetric methods to create orthophotos and NDVI maps.
These drones are capable of capturing imagery for a variety of purposes and with several metrics such as elevation and Vegetative Index (with NDVI as an example). This imagery is then turned into maps which can be used to optimize crop inputs such as water, fertilizer or chemicals such as herbicides and growth regulators through variable rate applications.
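NDVI itself is just a band ratio, NDVI = (NIR - Red) / (NIR + Red), computed per pixel from the red and near-infrared imagery. A minimal Python sketch over invented reflectance values:

    import numpy as np

    # Invented 2x2 patches of red and near-infrared reflectance
    red = np.array([[0.10, 0.30],
                    [0.25, 0.05]])
    nir = np.array([[0.60, 0.35],
                    [0.30, 0.55]])

    # NDVI ranges from -1 to 1; dense healthy vegetation scores high
    ndvi = (nir - red) / (nir + red)
    print(np.round(ndvi, 2))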
Click on any of the following blue hyperlinks for more about AI in Precision Agriculture:
- History
- Overview
- Tools
- Usage around the world
- Economic and environmental impacts
- Emerging technologies
- Conferences
- See also:
- Agricultural drones
- Geostatistics
- Integrated farming
- Integrated pest management
- Landsat program
- Nutrient budgeting
- Nutrient management
- Phytobiome
- Precision beekeeping
- Precision livestock farming
- Precision viticulture
- Satellite crop monitoring
- SPOT (satellites)
- Variable rate technology
- Precision agriculture, IBM
- Antares AgroSense
Digital agriculture refers to tools that digitally collect, store, analyze, and share electronic data and/or information along the agricultural value chain. Other definitions, such as those from the United Nations Project Breakthrough, Cornell University, and Purdue University, also emphasize the role of digital technology in the optimization of food systems.
Sometimes known as “smart farming” or “e-agriculture,” digital agriculture includes (but is not limited to) precision agriculture (above). Unlike precision agriculture, digital agriculture impacts the entire agri-food value chain — before, during, and after on-farm production.
Therefore, on-farm technologies, like yield mapping, GPS guidance systems, and variable-rate application, fall under the domain of precision agriculture and digital agriculture.
On the other hand, digital technologies involved in e-commerce platforms, e-extension services, warehouse receipt systems, blockchain-enabled food traceability systems, tractor rental apps, etc. fall under the umbrella of digital agriculture but not precision agriculture.
Click on any of the following blue hyperlinks for more about Digital Agriculture:
- Historical context
- Technology
- Effects of digital agriculture adoption
- Enabling environment
- Sustainable Development Goals
Artificial Intelligence in Space Operations including the Air Operations Center
- YouTube Video about the role of Artificial Intelligence in Space Operations
- YouTube Video: The incredible inventions of intuitive AI | Maurice Conti
- YouTube Video: NATS is using Artificial Intelligence to cut delays at Heathrow Airport
The Air Operations Division (AOD) uses AI for rule-based expert systems. The AOD uses artificial intelligence for surrogate operators for combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries.
The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators are using artificial intelligence in order to process the data taken from simulated flights. Other than simulated flying, there is also simulated aircraft warfare.
The computers are able to come up with the best success scenarios in these situations. The computers can also create strategies based on the placement, size, speed and strength of the forces and counter forces. Pilots may be given assistance in the air during combat by computers.
The artificial intelligence programs can sort the information and provide the pilot with the best possible maneuvers, not to mention ruling out certain maneuvers that would be impossible for a human being to perform.
Multiple aircraft are needed to get good approximations for some calculations so computer simulated pilots are used to gather data. These computer simulated pilots are also used to train future air traffic controllers.
The system used by the AOD to measure performance was the Interactive Fault Diagnosis and Isolation System, or IFDIS. It is a rule-based expert system put together by collecting information from TF-30 documents and expert advice from mechanics who work on the TF-30. This system was designed to be used for the development of the TF-30 for the RAAF F-111C.
The performance system was also used to replace specialized workers. The system allowed the regular workers to communicate with the system and avoid mistakes, miscalculations, or having to speak to one of the specialized workers.
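The IFDIS rule base itself is not reproduced here, but the general shape of such a rule-based expert system is straightforward to sketch in Python; the symptom and fault names below are invented stand-ins, not actual TF-30 diagnostics.

    # Invented rules: if all conditions hold, conclude the right-hand side.
    RULES = [
        ({"low_thrust", "high_exhaust_temp"}, "suspect_compressor_fouling"),
        ({"suspect_compressor_fouling", "vibration"}, "inspect_compressor_blades"),
        ({"oil_pressure_drop"}, "check_oil_pump"),
    ]

    def diagnose(observations):
        # Forward chaining: fire rules until no new conclusions appear.
        facts = set(observations)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts - set(observations)

    print(diagnose({"low_thrust", "high_exhaust_temp", "vibration"}))
    # -> {'suspect_compressor_fouling', 'inspect_compressor_blades'}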
The AOD also uses artificial intelligence in speech recognition software. The air traffic controllers give directions to the artificial pilots, and the AOD wants the pilots to respond to the ATCs with simple responses. The programs that incorporate the speech software must be trained, which means they use neural networks.
The program used, the Verbex 7000, is still a very early program that has plenty of room for improvement. The improvements are imperative because ATCs use very specific dialog and the software needs to be able to communicate correctly and promptly every time.
The Artificial Intelligence supported Design of Aircraft, or AIDA, is used to help designers in the process of creating conceptual designs of aircraft. This program allows the designers to focus more on the design itself and less on the design process. The software also allows the user to focus less on the software tools.
AIDA uses rule-based systems to compute its data. Although simple, the program is proving effective.
In 2003, NASA's Dryden Flight Research Center, and many other companies, created software that could enable a damaged aircraft to continue flight until a safe landing zone can be reached. The software compensates for all the damaged components by relying on the undamaged components. The neural network used in the software proved to be effective and marked a triumph for artificial intelligence.
The Integrated Vehicle Health Management system, also used by NASA on board aircraft, must process and interpret data taken from the various sensors on the aircraft. The system needs to be able to determine the structural integrity of the aircraft. The system also needs to implement protocols in case of any damage taken by the vehicle.
Haitham Baomar and Peter Bentley are leading a team from University College London to develop an artificial-intelligence-based Intelligent Autopilot System (IAS) designed to teach an autopilot system to behave like a highly experienced pilot faced with an emergency situation such as severe weather, turbulence, or system failure.
Educating the autopilot relies on the concept of supervised machine learning, “which treats the young autopilot as a human apprentice going to a flying school”. The autopilot records the actions of the human pilot, generating learning models using artificial neural networks. The autopilot is then given full control and observed by the pilot as it executes the training exercise.
The Intelligent Autopilot System combines the principles of apprenticeship learning and behavioural cloning, whereby the autopilot observes both the low-level actions required to maneuver the airplane and the high-level strategy used to apply those actions. The IAS implementation has three phases: pilot data collection, training, and autonomous control.
Baomar and Bentley's goal is to create a more autonomous autopilot to assist pilots in responding to emergency situations.
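As a concrete illustration of the behavioural-cloning idea described above, the following minimal Python sketch trains a small neural network on recorded state-action pairs. The flight-state features, control outputs, and data here are hypothetical stand-ins for illustration only, not the IAS implementation.

```python
# Minimal behavioural-cloning sketch (illustrative only, not the IAS code).
# Assumes a hypothetical log of (flight_state, pilot_action) pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: each row of X is a flight state
# (e.g., airspeed, altitude, pitch, roll); each row of y is the
# pilot's control response (e.g., elevator, aileron, throttle).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))             # recorded flight states
y = np.tanh(X @ rng.normal(size=(4, 3)))   # stand-in pilot actions

# Phase 1 (pilot data collection) is assumed done above; Phase 2 (training):
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)
model.fit(X, y)

# Phase 3 (autonomous control): the cloned policy maps new states
# to control actions, which the human pilot would monitor.
new_state = rng.normal(size=(1, 4))
action = model.predict(new_state)
print("suggested control action:", action)
```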
___________________________________________________________________________
Air and Space Operations Center
An Air and Space Operations Center (AOC) is a type of command center used by the United States Air Force (USAF). It is the senior agency through which the Air Force component commander provides command and control of air and space operations.
The United States Air Force employs two kinds of AOCs: regional AOCs utilizing the AN/USQ-163 Falconer weapon system that support geographic combatant commanders, and functional AOCs that support functional combatant commanders.
When there is more than one U.S. military service working in an AOC, such as when naval aviation from the U.S. Navy (USN) and/or the U.S. Marine Corps (USMC) is incorporated, it is called a Joint Air and Space Operations Center (JAOC). In cases of allied or coalition (multinational) operations in tandem with USAF or Joint air and space operations, the AOC is called a Combined Air and Space Operations Center (CAOC).
An AOC is the senior element of the Theater Air Control System (TACS). The Joint Force Commander (JFC) assigns a Joint Forces Air Component Commander (JFACC) to lead the AOC weapon system. If allied or coalition forces are part of the operation, the JFC and JFACC will be redesignated as the CFC and CFACC, respectively.
Quite often the Commander, Air Force Forces (COMAFFOR) is assigned the JFACC/CFACC position for planning and executing theater-wide air and space forces. If another service also provides a significant share of air and space forces, the Deputy JFACC/CFACC will typically be a senior flag officer from that service.
For example, during Operation Enduring Freedom and Operation Iraqi Freedom, when USAF combat air forces (CAF) and mobility air forces (MAF) integrated extensive USN and USMC sea-based and land-based aviation and Royal Air Force (RAF) and Royal Navy / Fleet Air Arm aviation, the CFACC was an aeronautically rated USAF lieutenant general, assisted by an aeronautically designated USN rear admiral (upper half) as the Deputy CFACC, and an aeronautically rated RAF air commodore as the Senior British Officer (Air).
Click on any of the following blue hyperlinks for more about Air and Space Centers:
- Divisions
- List of Air and Space Operations Centers
- Inactive AOCs
- Training/Experimentation
- AOC-equipping Units
- NATO CAOC
- See also:
Outline of Artificial Intelligence
- YouTube Video: How AI is changing Business: A look at the limitless potential of AI | ANIRUDH KALA | TEDxIITBHU
- YouTube Video: What happens when our computers get smarter than we are? | Nick Bostrom
- YouTube Video: Artificial Super intelligence - How close are we?
UNDERSTANDING THREE TYPES OF ARTIFICIAL INTELLIGENCE
In this era of technology, artificial intelligence is making inroads across industries and domains, performing many tasks more effectively than humans. As in science-fiction films, a day may come when the world is dominated by robots.
Artificial intelligence is surrounded by jargon: narrow, general, and super artificial intelligence; machine learning, deep learning, supervised and unsupervised learning; neural networks; and a whole lot of other confusing terms. In this article, we discuss artificial intelligence and its three main categories.
1) Understanding Artificial Intelligence:
The term AI was coined by John McCarthy, an American computer scientist, in 1956. Artificial intelligence is the simulation of human intelligence by machines, mainly computer systems. The processes involved mainly include learning, reasoning, and self-correction.
With the increase in the speed, size, and diversity of data, AI has gained dominance in businesses globally. AI can perform certain tasks, such as recognizing patterns in data, more efficiently than a human, giving businesses more insight.
2) Types of Artificial Intelligence:
Narrow Artificial Intelligence: Weak AI, also known as narrow AI, is an AI system that is developed and trained for a particular task. Narrow AI is programmed to perform a single task and works within a limited context. It is very good at routine physical and cognitive jobs.
For example, narrow AI can identify patterns and correlations in data more efficiently than humans. Sales predictions, purchase suggestions, and weather forecasts are implementations of narrow AI.
Even Google's translation engine is a form of narrow AI. In the automotive industry, self-driving cars are the result of coordinating several narrow AIs. But narrow AI cannot expand to take on tasks beyond its field; for example, an AI engine built for image recognition cannot generate sales recommendations.
Artificial General Intelligence: Artificial General Intelligence (AGI) is an AI system with generalized cognitive abilities that can find solutions to unfamiliar tasks it encounters. It is popularly termed strong AI, which can understand and reason about its environment as a human would. It is also known as human-level AI, though a human level of artificial intelligence is hard to define. Human intelligence may not compute as fast as computers, but humans can think abstractly, plan, and solve problems without going into details. More importantly, humans can innovate and bring up thoughts and ideas that have no trail or precedent.
Artificial Super Intelligence: Artificial Super Intelligence (ASI) refers to the point at which the cognitive ability of machines surpasses that of humans, while machines remain able to mimic human thought.
In the past, there have been developments such as IBM's Watson supercomputer beating human players at Jeopardy! and assistive devices like Siri engaging in conversation with people, but there is still no machine that can match the depth of knowledge and cognitive ability of a fully developed human.
ASI divides opinion into two schools of thought: on one side, scientists such as Stephen Hawking saw the full development of AI as a danger to humanity; others, such as Demis Hassabis, Co-Founder and CEO of DeepMind, believe that the smarter AI becomes, the better the world will be, with AI serving as a helping hand to mankind.
Conclusion:
In today's technological age, AI has produced machines that are more capable than humans at many specific tasks. It is difficult to predict how long it will take AI to achieve the cognitive ability and knowledge depth of a human being. Some predict that a day will come when AI surpasses the brightest human mind on earth.
[End of Article]
___________________________________________________________________________
Outline of Artificial Intelligence:
The following outline is provided as an overview of and topical guide to artificial intelligence:
Artificial intelligence (AI) – intelligence exhibited by machines or software. It is also the name of the scientific field which studies how to create computers and computer software that are capable of intelligent behavior.
Click on any of the following blue hyperlinks for more about the Outline of Artificial Intelligence:
- What type of thing is artificial intelligence?
- Types of artificial intelligence
- Branches of artificial intelligence
- Further AI design elements
- AI projects
- AI applications
- AI development
- Psychology and AI
- History of artificial intelligence
- AI hazards and safety
- AI and the future
- Philosophy of artificial intelligence
- Artificial intelligence in fiction
- AI community
- See also:
- A look at the re-emergence of A.I. and why the technology is poised to succeed given today's environment, ComputerWorld, 2015 September 14
- AI at Curlie
- The Association for the Advancement of Artificial Intelligence
- Freeview Video 'Machines with Minds' by the Vega Science Trust and the BBC/OU
- John McCarthy's frequently asked questions about AI
- Jonathan Edwards looks at AI (BBC audio)
- Ray Kurzweil's website dedicated to AI including prediction of future development in AI
- Thomason, Richmond. "Logic and Artificial Intelligence". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Vehicular Automation and Automated Driving Systems
- YouTube Video: How a Driverless Car Sees the Road
- YouTube Video: Elon Musk on Tesla's Auto Pilot and Legal Liability
- YouTube Video: How Tesla's Self-Driving Autopilot Actually Works | WIRED
The path to 5G: Paving the road to tomorrow’s autonomous vehicles:
According to World Health Organization figures, road traffic injuries are the leading cause of death among young people aged 15–29 years. More than 1.2 million people die each year worldwide as a result of traffic crashes.
Vehicle-to-Everything (V2X) technologies, starting with 802.11p and evolving to Cellular V2X (C-V2X), can help bring safer roads, more efficient travel, reduced air pollution, and better driving experiences.
V2X will serve as the foundation for the safe, connected vehicle of the future, giving vehicles the ability to "talk" to each other, pedestrians, roadway infrastructure, and the cloud.
It’s no wonder that the MIT Technology Review put V2X on its 2015 10 Breakthrough Technologies list, stating: “Car-to-car communication should also have a bigger impact than the advanced vehicle automation technologies that have been more widely heralded.”
V2X is a key technology for enabling a fully autonomous transportation infrastructure. While advancements in radar, LiDAR (Light Detection and Ranging), and camera systems are encouraging and bring autonomous driving one step closer to reality, these sensors are limited to their line of sight. V2X complements them by providing 360-degree non-line-of-sight awareness, extending a vehicle's ability to "see" further down the road, even at blind intersections or in bad weather conditions.
So, how long do we have to wait for V2X to become a reality? Actually, V2X technology is here today. Wi-Fi-based 802.11p has established the foundation for latency-critical V2X communications.
To improve road safety for future light vehicles in the United States, the National Highway Traffic Safety Administration is expected to begin rulemaking for Dedicated Short Range Communications (DSRC) this year.
Beyond that, tomorrow's autonomous vehicles require continued technology evolution to accommodate ever-expanding safety requirements and use cases. The path to 5G will deliver this evolution, starting with the C-V2X portion of the 3GPP Release 14 specifications, which is expected to be completed by the end of this year.
C-V2X will define two new transmission modes that work together to enable a broad range of automotive use cases:
- The first enables direct communication between vehicles and each other, pedestrians, and road infrastructure. We are building on LTE Direct device-to-device communications, evolving the technology with innovations to exchange real-time information between vehicles traveling at high speeds, in high-density traffic, and even outside of mobile network coverage areas.
- The second transmission mode uses the ubiquitous coverage of existing LTE networks, so you can be alerted to an accident a few miles ahead, guided to an open parking space, and more. To enable this mode, we are optimizing LTE Broadcast technology for vehicular communications.
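There is no public Python API for C-V2X, so the following sketch is purely illustrative: the message type and the mode-selection rule are assumptions chosen to mirror the two transmission modes just described.

```python
# Illustrative-only sketch of the two C-V2X transmission modes.
# All types and the selection rule below are assumptions for clarity.
from dataclasses import dataclass

@dataclass
class V2XMessage:
    payload: bytes
    safety_critical: bool     # e.g., collision warning vs. parking info
    peer_in_direct_range: bool
    network_coverage: bool

def choose_mode(msg: V2XMessage) -> str:
    # Latency-critical messages between nearby vehicles use the direct
    # device-to-device mode, which also works outside network coverage.
    if msg.safety_critical and msg.peer_in_direct_range:
        return "direct (device-to-device)"
    # Wider-area information rides over the existing LTE network.
    if msg.network_coverage:
        return "network (LTE broadcast)"
    return "direct (device-to-device)"  # fallback without coverage

print(choose_mode(V2XMessage(b"brake!", True, True, True)))
```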
To accelerate technology evolution, Qualcomm is actively driving the C-V2X work in 3GPP, building on our leadership in LTE Direct and LTE Broadcast to pioneer C-V2X technologies.
The difference between a vehicle collision and a near miss comes down to milliseconds. With approximately twice the range of DSRC, C-V2X can provide the critical seconds of reaction time needed to avoid an accident. Beyond safety, C-V2X also enables a broad range of use cases – from better situational awareness, to enhanced traffic management, and connected cloud services.
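As a rough back-of-the-envelope illustration (the speed and delay figures below are assumed, not taken from the article), a short Python sketch shows how communication latency translates into distance traveled, and hence into reaction margin:

```python
# Back-of-the-envelope reaction-distance illustration (assumed figures):
# how far a vehicle travels during a communication or reaction delay.
def distance_travelled(speed_kmh: float, delay_s: float) -> float:
    """Metres covered at a constant speed during a given delay."""
    return speed_kmh / 3.6 * delay_s

speed = 100.0  # km/h, an assumed highway speed
for label, delay in [("100 ms network latency", 0.1),
                     ("1.5 s driver reaction", 1.5)]:
    print(f"{label}: {distance_travelled(speed, delay):.1f} m")
# At 100 km/h, 100 ms of latency already costs about 2.8 m of travel,
# which is why earlier warnings translate into avoided collisions.
```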
C-V2X will provide a unified connectivity platform for safer “vehicles of tomorrow.” Building upon C-V2X, 5G will bring even more possibilities for the connected vehicle. The extreme throughput, low latency, and enhanced reliability of 5G will allow vehicles to share rich, real-time data, supporting fully autonomous driving experiences, for example:
- Cooperative collision avoidance: For self-driving vehicles, individual actions by a vehicle to avoid a collision may create hazardous driving conditions for other vehicles. Cooperative collision avoidance allows all involved vehicles to coordinate their actions and avoid collisions cooperatively.
- High-density platooning: In a self-driving environment, vehicles communicate with each other to create closely spaced multi-vehicle chains on a highway. High-density platooning will further reduce the distance between vehicles, down to one meter, resulting in better traffic efficiency, fuel savings, and safer roads.
- See-through: Where a small vehicle is behind a larger vehicle (e.g., a truck), the smaller vehicle cannot "see" a pedestrian crossing the road in front of the larger vehicle. In such scenarios, the truck's camera can detect the situation and share the image of the pedestrian with the vehicle behind it, which alerts the driver and shows the pedestrian on a windshield display.
Beyond pioneering C-V2X and helping define the path to 5G, we are delivering new levels of on-device intelligence and integration in the connected vehicle of tomorrow. Our innovations in cognitive technologies, such as always-on sensing, computer vision, and machine learning, will help make our vision of safer, more autonomous vehicles a reality.
To learn more about how we are pioneering Cellular V2X, join us for our upcoming webinar or visit our Cellular V2X web page.
Learn more about our automotive solutions here.
[End of Article]
___________________________________________________________________________
Vehicular automation
Vehicular automation involves the use of mechatronics, artificial intelligence, and multi-agent systems to assist a vehicle's operator. These features, and the vehicles employing them, may be labeled as intelligent or smart.
A vehicle using automation for difficult tasks, especially navigation, may be referred to as semi-autonomous. A vehicle relying solely on automation is consequently referred to as robotic or autonomous. After the invention of the integrated circuit, the sophistication of automation technology increased. Manufacturers and researchers subsequently added a variety of automated functions to automobiles and other vehicles.
Autonomy levels:
Autonomy in vehicles is often categorized in six levels, under a system developed by the Society of Automotive Engineers (SAE):
- Level 0: No automation.
- Level 1: Driver assistance - The vehicle can control either steering or speed autonomously in specific circumstances to assist the driver.
- Level 2: Partial automation - The vehicle can control both steering and speed autonomously in specific circumstances to assist the driver.
- Level 3: Conditional automation - The vehicle can control both steering and speed autonomously under normal environmental conditions, but requires driver oversight.
- Level 4: High automation - The vehicle can complete a journey autonomously under normal environmental conditions, without requiring driver oversight.
- Level 5: Full autonomy - The vehicle can complete a journey autonomously in any environmental conditions.
Ground vehicles:
Further information: Unmanned ground vehicles
Ground vehicles employing automation and teleoperation include shipyard gantries, mining trucks, bomb-disposal robots, robotic insects, and driverless tractors.
Many autonomous and semi-autonomous ground vehicles are being built to transport passengers. One example is the free-ranging on grid (FROG) technology, which consists of autonomous vehicles, a magnetic track, and a supervisory system.
The FROG system is deployed for industrial purposes in factory sites and has been in use since 1999 on the ParkShuttle, a PRT-style public transport system in the city of Capelle aan den IJssel that connects the Rivium business park with the neighboring city of Rotterdam (where the route terminates at the Kralingse Zoom metro station). The system experienced a crash in 2005 that proved to be caused by human error.
Applications for automation in ground vehicles include the following:
- Vehicle tracking systems, such as ESITrack and LoJack
- Rear-view alarm, to detect obstacles behind the vehicle
- Anti-lock braking system (ABS), often with Emergency Braking Assistance (EBA) and Electronic brakeforce distribution (EBD), which prevents the brakes from locking and losing traction while braking. This shortens stopping distances in most cases and, more importantly, allows the driver to steer the vehicle while braking.
- Traction control system (TCS), which actuates brakes or reduces throttle to restore traction if driven wheels begin to spin.
- All-wheel drive (AWD) with a center differential. Distributing power to all four wheels lessens the chance of wheel spin; such vehicles also suffer less from oversteer and understeer.
- Electronic Stability Control (ESC; also known by the Mercedes-Benz proprietary name Electronic Stability Program (ESP), and related to Acceleration Slip Regulation (ASR) and the Electronic differential lock (EDL)), which uses various sensors to intervene when the car senses a possible loss of control. The car's control unit can reduce power from the engine and even apply the brakes on individual wheels to prevent the car from understeering or oversteering.
- Dynamic steering response (DSR), which corrects the rate of the power steering system to adapt it to the vehicle's speed and road conditions.
Research is ongoing and prototypes of autonomous ground vehicles exist.
Cars:
See also: Autonomous car
Extensive automation for cars focuses on either introducing robotic cars or modifying modern car designs to be semi-autonomous.
Semi-autonomous designs could be implemented sooner, as they rely less on technology that is still at the forefront of research. An example is the dual-mode monorail. Groups such as RUF (Denmark) and TriTrack (USA) are working on projects consisting of specialized private cars that are driven manually on normal roads but can also dock onto a monorail or guideway, along which they are driven autonomously.
As a method of automating cars without modifying them as extensively as a robotic car, automated highway systems (AHS) aim to construct highway lanes equipped with, for example, magnets to guide the vehicles. Automated vehicles would also carry automatic brakes, known as the Auto Vehicles Braking System (AVBS). Highway computers would manage the traffic and direct the cars to avoid crashes.
The European Commission has established a smart car development program called the Intelligent Car Flagship Initiative. The goals of that program include:
There are plenty of further uses for automation in relation to cars. These include:
- Assured clear distance ahead
- Adaptive headlamps
- Advanced automatic collision notification, such as OnStar
- Intelligent Parking Assist System
- Automatic parking
- Automotive night vision with pedestrian detection
- Blind spot monitoring
- Driver monitoring system
- Robotic cars or self-driving cars, which may result in less-stressed "drivers", higher efficiency (the driver can do something else), increased safety, and less pollution (e.g., via completely automated fuel control)
- Precrash system
- Safe speed governing
- Traffic sign recognition
- Following another car on a motorway – "enhanced" or "adaptive" cruise control, as used by Ford and Vauxhall
- Distance control assist, as developed by Nissan
- Dead man's switch – there is a move to introduce dead-man's braking into automotive applications, primarily heavy vehicles, and there may also be a need to add penalty switches to cruise controls.
Singapore also announced a set of provisional national standards on January 31, 2019, to guide the autonomous vehicle industry. The standards, known as Technical Reference 68 (TR68), will promote the safe deployment of fully driverless vehicles in Singapore, according to a joint press release by Enterprise Singapore (ESG), Land Transport Authority (LTA), Standards Development Organisation and Singapore Standards Council (SSC).
Shared autonomous vehicles:
Following recent developments in autonomous cars, shared autonomous vehicles are now able to run in ordinary traffic without the need for embedded guidance markers. So far the focus has been on low speeds, around 20 miles per hour (32 km/h), with short, fixed routes for the "last mile" of journeys.
This means issues of collision avoidance and safety are significantly less challenging than those for automated cars, which seek to match the performance of conventional vehicles.
Aside from 2getthere ("ParkShuttle"), three companies - Ligier ("Easymile EZ10"), Navya ("ARMA" & "Autonom Cab") and RDM Group ("LUTZ Pathfinder") - are manufacturing and actively testing such vehicles. Two other companies have produced prototypes, Local Motors ("Olli") and the GATEway project.
Besides these efforts, Apple is reportedly developing an autonomous shuttle, based on a vehicle from an existing automaker, to transfer employees between its offices in Palo Alto and Infinite Loop, Cupertino. The project, called "PAIL" after its destinations, was revealed in August 2017 when Apple announced it had abandoned development of autonomous cars.
Click on any of the following blue hyperlinks for more about Vehicular Automation:
___________________________________________________________________________
Automated driving system:
An automated driving system is a complex combination of components in which perception, decision making, and operation of the automobile are performed by electronics and machinery instead of a human driver; it amounts to the introduction of automation into road traffic.
This includes handling of the vehicle and the route to the destination, as well as awareness of the surroundings.
While the automated system has control over the vehicle, it allows the human operator to leave all responsibility to the system.
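The perception / decision-making / operation split described above is often pictured as a sense-decide-act loop. The following Python skeleton is a generic illustration of that architecture; all types, functions, and thresholds are assumptions made for the sketch, not taken from any real automated driving system.

```python
# Generic sense-decide-act loop (illustrative skeleton only).
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_ahead_m: float   # distance to nearest obstacle
    lane_offset_m: float      # lateral offset from lane center

def perceive() -> Perception:
    # Stand-in for sensor fusion (camera, radar, LiDAR, ...).
    return Perception(obstacle_ahead_m=42.0, lane_offset_m=0.3)

def decide(p: Perception) -> dict:
    # Stand-in for decision making: brake near obstacles,
    # steer back toward the lane center.
    return {
        "brake": p.obstacle_ahead_m < 30.0,
        "steer": -0.1 * p.lane_offset_m,
    }

def act(cmd: dict) -> None:
    # Stand-in for actuation (throttle, brake, steering).
    print(f"brake={cmd['brake']}, steering={cmd['steer']:+.2f}")

# One tick of the control loop; a real system runs this continuously.
act(decide(perceive()))
```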
Overview:
The automated driving system is generally an integrated package of individual automated systems operating in concert. Automated driving implies that the driver has given up the ability to drive (i.e., all appropriate monitoring, agency, and action functions) to the vehicle automation system. Even though the driver may be alert and ready to take action at any moment, the automation system controls all functions.
Automated driving systems are often conditional, which implies that the automation system is capable of automated driving, but not for all conditions encountered in the course of normal operation. Therefore, a human driver is functionally required to initiate the automated driving system, and may or may not do so when driving conditions are within the capability of the system.
When the vehicle automation system has assumed all driving functions, the human is no longer driving the vehicle but continues to assume responsibility for the vehicle's performance as the vehicle operator.
The automated vehicle operator is not functionally required to actively monitor the vehicle's performance while the automation system is engaged, but the operator must be available to resume driving within several seconds of being prompted to do so, as the system has limited conditions of automation.
While the automated driving system is engaged, certain conditions may prevent real-time human input, but for no more than a few seconds. The operator is able to resume driving at any time subject to this short delay. When the operator has resumed all driving functions, he or she reassumes the status of the vehicle's driver.
Success in the technology:
Automated driving systems have been most successful in settings such as rural roads.
Rural roads have lower traffic volumes and less variation among driving abilities and types of drivers.
"The greatest challenge in the development of automated functions is still inner-city traffic, where an extremely wide range of road users must be considered from all directions."
This technology is progressing toward a more reliable way for automated cars to switch between auto mode and driver mode. Auto mode is the mode in which the automated functions take over, while driver mode is the mode in which the operator controls all functions of the car and takes responsibility for operating the vehicle (the automated driving system is not engaged).
This definition would include vehicle automation systems that may be available in the near term, such as traffic-jam assist or full-range automated cruise control, if such systems were designed so that the human operator could reasonably divert attention (monitoring) away from the performance of the vehicle while the automation system is engaged. This definition would also include automated platooning (such as conceptualized by the SARTRE project).
The SARTRE Project:
The SARTRE project's main goal is to create platooning, trains of automated cars, that will provide comfort and the ability for the driver of the vehicle to arrive safely at a destination.
In addition, drivers passing one of these platoons can join in with a simple activation of the automated driving system, which pairs the car with the truck leading the platoon. The SARTRE project mixes what we know as a train system with automated driving technology, with the intention of allowing easier transportation through cities and ultimately helping traffic flow through heavy automobile traffic.
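Platoon following of this kind is typically built on a spacing controller. The sketch below implements a generic, textbook constant-spacing law; the gains, gap, and speeds are assumed for illustration and are not SARTRE's implementation.

```python
# Illustrative constant-spacing platoon controller (generic textbook
# control law, not the SARTRE code). Gains and states are assumed.
def follower_acceleration(gap_m: float, v_lead: float, v_ego: float,
                          desired_gap_m: float = 8.0,
                          k_gap: float = 0.3, k_speed: float = 0.8) -> float:
    """Acceleration command that closes the spacing error while
    matching the leader's speed."""
    spacing_error = gap_m - desired_gap_m
    speed_error = v_lead - v_ego
    return k_gap * spacing_error + k_speed * speed_error

# Example: 12 m behind the leader, leader at 25 m/s, ego at 24 m/s.
a_cmd = follower_acceleration(gap_m=12.0, v_lead=25.0, v_ego=24.0)
print(f"commanded acceleration: {a_cmd:+.2f} m/s^2")  # speeds up to close the gap
```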
SARTRE & modern day:
Self-driving cars have been tested in real-life situations in some parts of the world, such as Pittsburgh, where self-driving Ubers have been put to the test around the city with different types of drivers and different traffic situations. Alongside the testing of, and partial successes with, automated cars, there has also been extensive testing of automated buses in California.
The lateral control of the automated buses uses magnetic markers, as in the platoon at San Diego, while the longitudinal control of the automated truck platoon uses millimeter-wave radio and radar.
Current examples include the Google car and Tesla's models. Tesla has redesigned automated driving, creating car models that let drivers enter a destination and allow the car to take over. These are two modern examples of automated driving systems.
Levels of automation according to SAE:
The U.S. Department of Transportation's National Highway Traffic Safety Administration (NHTSA) provided a standard classification system in 2013 that defined five levels of automation, ranging from level 0 (no automation) to level 4 (full automation).
Since then, the NHTSA has updated its standards to align with the classification defined by SAE International, whose document SAE J3016 defines six levels of automation, ranging from 0 (no automation) to 5 (full automation).
Level 0 – No automation:
The driver is in complete control of the vehicle and the system does not interfere with driving. Systems that may fall into this category are forward collision warning systems and lane departure warning systems.
Level 1 – Driver assistance:
The driver is in control of the vehicle, but the system can modify the speed and steering direction of the vehicle. Systems that may fall into this category are adaptive cruise control and lane keep assist.
Level 2 – Partial automation:
The driver must be ready to take control of the vehicle when corrections are needed, but is no longer in direct control of its speed and steering. Parking assistance is one example of a system in this category, along with Tesla's Autopilot feature.
Another system in this category is the DISTRONIC PLUS system created by Mercedes-Benz. It is important to note that the driver must not be distracted in Level 0 to Level 2 modes.
Level 3 – Conditional automation:
The system is in complete control of vehicle functions such as speed, steering, and monitoring of the environment under specific conditions. Such conditions may be met, for example, on a fenced-off highway with no intersections, at a limited driving speed, or in a boxed-in driving situation.
A human driver must be ready to intervene when requested by the system. If the driver does not respond within a predefined time, or if a failure occurs in the system, the system must perform a safety stop in the ego lane (no lane change allowed). The driver is allowed to be only partially distracted, such as checking text messages, but taking a nap is not allowed. (A sketch of this takeover rule follows the level descriptions below.)
Level 4 – High automation:
The system is in complete control of the vehicle and human presence is no longer needed, but its applications are limited to specific conditions. An example of a system being developed that falls into this category is the Waymo self-driving car service.
If the actual motoring condition exceeds the performance boundaries, the system does not have to ask the human to intervene but can choose to abort the trip in a safe manner, e.g. park the car.
Level 5 – Full automation:
The system is capable of everything in Level 4, but it can operate in all driving conditions. The human is equivalent to "cargo" at Level 5. Currently, no driving systems operate at this level.
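The level definitions above lend themselves to a compact sketch. The following illustrative Python models the six SAE levels and the Level 3 takeover rule described earlier; the timeout value and function names are assumptions made for the example, not taken from SAE J3016.

```python
# Minimal sketch of the SAE J3016 levels and the Level 3/4 fallback
# behavior described above (timeout and names are assumed).
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def on_takeover_request(level: SAELevel, driver_responded: bool,
                        elapsed_s: float, timeout_s: float = 10.0) -> str:
    """What a system does when its operating conditions are exceeded."""
    if level == SAELevel.CONDITIONAL_AUTOMATION:
        # Level 3: the human is the fallback; if no response in time,
        # perform a safety stop in the current (ego) lane.
        if driver_responded:
            return "hand control back to the driver"
        if elapsed_s >= timeout_s:
            return "safety stop in ego lane"
        return "keep prompting the driver"
    if level >= SAELevel.HIGH_AUTOMATION:
        # Level 4/5: the system is its own fallback, e.g. park the car.
        return "abort the trip safely (e.g., pull over and park)"
    return "driver is already in control"

print(on_takeover_request(SAELevel.CONDITIONAL_AUTOMATION, False, 12.0))
```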
Risks and liabilities:
See also: Computer security § Automobiles, and Autonomous car liability
Many automakers, such as Ford and Volvo, have announced plans to offer fully automated cars in the future. Extensive research and development is being put into automated driving systems, but the biggest problem automakers cannot control is how drivers will use the system.
Drivers are urged to stay attentive, and safety warnings are implemented to alert the driver when corrective action is needed. Tesla Motors has one recorded incident in which the automated driving system in the Tesla Model S was involved in a fatality. The accident report reveals that the accident resulted from the driver being inattentive and from the autopilot system failing to recognize the obstruction ahead.
Another flaw of automated driving systems is that unpredictable events, such as weather or the driving behavior of others, may cause fatal accidents when the sensors that monitor the vehicle's surroundings cannot trigger corrective action.
To overcome some of these challenges, novel methodologies based on virtual testing, traffic flow simulation, and digital prototypes have been proposed, especially where novel algorithms based on artificial-intelligence approaches are employed, which require extensive training and validation data sets.
According to World Health Organization figures, road traffic injuries are the leading cause of death among young people aged 15–29 years. More than 1.2 million people die each year worldwide as a result of traffic crashes.
Vehicle-to-Everything (V2X) technologies, starting with 802.11p and evolving to Cellular V2X (C-V2X), can help bring safer roads, more efficient travel, reduced air pollution, and better driving experiences.
V2X will serve as the foundation for the safe, connected vehicle of the future, giving vehicles the ability to "talk" to each other, pedestrians, roadway infrastructure, and the cloud.
It’s no wonder that the MIT Technology Review put V2X on its 2015 10 Breakthrough Technologies list, stating: “Car-to-car communication should also have a bigger impact than the advanced vehicle automation technologies that have been more widely heralded.”
V2X is a key technology for enabling fully autonomous transportation infrastructure. While advancements in radar, LiDAR (Light Detection and Ranging), and camera systems are encouraging and bring autonomous driving one step closer to reality, it should be known that these sensors are limited by their line of sight. V2X complements the capabilities of these sensors by providing 360 degree non-line-of sight awareness, extending a vehicle’s ability to “see” further down the road – even at blind intersections or in bad weather conditions.
So, how long do we have to wait for V2X to become a reality? Actually, V2X technology is here today. Wi-Fi-based 820.11p has established the foundation for latency-critical V2X communications.
To improve road safety for future light vehicles in the United States, the National Highway Safety Administration is expected to begin rulemaking for Dedicated Short Range Communications (DSRC) this year.
Beyond that, tomorrow’s autonomous vehicles require continued technology evolution to accommodate ever-expanding safety requirements and use cases. The path to 5G will deliver this evolution starting with the C-V2X part of 3GPP release 14 specifications, which is expected to be completed by the end of this year.
C-V2X will define two new transmission modes that work together to enable a broad range of automotive use cases:
- The first enables direct communication between vehicles and each other, pedestrians, and road infrastructure. We are building on LTE Direct device-to-device communications, evolving the technology with innovations to exchange real-time information between vehicles traveling at fast speeds, in high-density traffic, and even outside of mobile network coverage areas.
- The second transmission mode uses the ubiquitous coverage of existing LTE networks, so you can be alerted to an accident a few miles ahead, guided to an open parking space, and more. To enable this mode, we are optimizing LTE Broadcast technology for vehicular communications.
To accelerate technology evolution, Qualcomm is actively driving the C-V2X work in 3GPP, building on our leadership in LTE Direct and LTE Broadcast to pioneer C-V2X technologies.
The difference between a vehicle collision and a near miss comes down to milliseconds. With approximately twice the range of DSRC, C-V2X can provide the critical seconds of reaction time needed to avoid an accident. Beyond safety, C-V2X also enables a broad range of use cases – from better situational awareness, to enhanced traffic management, and connected cloud services.
C-V2X will provide a unified connectivity platform for safer “vehicles of tomorrow.” Building upon C-V2X, 5G will bring even more possibilities for the connected vehicle. The extreme throughput, low latency, and enhanced reliability of 5G will allow vehicles to share rich, real-time data, supporting fully autonomous driving experiences, for example:
- Cooperative-collision avoidance: For self-driving vehicles, individual actions by a vehicle to avoid collisions may create hazardous driving conditions for other vehicles. Cooperative-collision avoidance allows all involved vehicles to coordinate their actions to avoid collisions in a cooperative manner.
- High-density platooning: In a self-driving environment, vehicles communicate with each other to create a closely spaced multiple vehicle chains on a highway. High-density platooning will further reduce the current distance between vehicles down to one meter, resulting in better traffic efficiency, fuel savings, and safer roads.
- See through: In situations where small vehicles are behind larger vehicles (e.g., trucks), the smaller vehicles cannot "see" a pedestrian crossing the road in front of the larger vehicle. In such scenarios, a truck’s camera can detect the situation and share the image of the pedestrian with the vehicle behind it, which sends an alert to the driver and shows him the pedestrian in virtual reality on the windshield board.
Beyond pioneering C-V2X and helping define the path to 5G, we are delivering new levels of on-device intelligence and integration in the connected vehicle of tomorrow. Our innovations in cognitive technologies, such as always-on sensing, computer vision, and machine learning, will help make our vision of safer, more autonomous vehicles a reality.
To learn more about how we are pioneering Cellular V2X, join us for our upcoming webinar or visit our Cellular V2X web page.
Learn more about our automotive solutions here.
[End of Article]
___________________________________________________________________________
Vehicular automation
Vehicular automation involves the use of mechatronics, artificial intelligence, and multi-agent system to assist a vehicle's operator. These features and the vehicles employing them may be labeled as intelligent or smart.
A vehicle using automation for difficult tasks, especially navigation, may be referred to as semi-autonomous. A vehicle relying solely on automation is consequently referred to as robotic or autonomous. After the invention of the integrated circuit, the sophistication of automation technology increased. Manufacturers and researchers subsequently added a variety of automated functions to automobiles and other vehicles.
Autonomy levels:
Autonomy in vehicles is often categorized in six levels: The level system was developed by the Society of Automotive Engineers (SAE):
- Level 0: No automation.
- Level 1: Driver assistance - The vehicle can control either steering or speed autonomously in specific circumstances to assist the driver.
- Level 2: Partial automation - The vehicle can control both steering and speed autonomously in specific circumstances to assist the driver.
- Level 3: Conditional automation - The vehicle can control both steering and speed autonomously under normal environmental conditions, but requires driver oversight.
- Level 4: High automation - The vehicle can complete a travel autonomously under normal environmental conditions, not requiring driver oversight.
- Level 5: Full autonomy - The vehicle can complete a travel autonomously in any environmental conditions.
Ground vehicles:
Further information: Unmanned ground vehicles
Ground vehicles employing automation and teleoperation include shipyard gantries, mining trucks, bomb-disposal robots, robotic insects, and driverless tractors.
There are a lot of autonomous and semi-autonomous ground vehicles being made for the purpose of transporting passengers. One such example is the free-ranging on grid (FROG) technology which consists of autonomous vehicles, a magnetic track and a supervisory system.
The FROG system is deployed for industrial purposes in factory sites and has been in use since 1999 on the ParkShuttle, a PRT-style public transport system in the city of Capelle aan den IJssel to connect the Rivium business park with the neighboring city of Rotterdam (where the route terminates at the Kralingse Zoom metro station). The system experienced a crash in 2005 that proved to be caused by a human error.
Applications for automation in ground vehicles include the following:
- Vehicle tracking system system ESITrack, Lojack
- Rear-view alarm, to detect obstacles behind.
- Anti-lock braking system (ABS) (also Emergency Braking Assistance (EBA)), often coupled with Electronic brake force distribution (EBD), which prevents the brakes from locking and losing traction while braking. This shortens stopping distances in most cases and, more importantly, allows the driver to steer the vehicle while braking.
- Traction control system (TCS) actuates brakes or reduces throttle to restore traction if driven wheels begin to spin.
- Four wheel drive (AWD) with a center differential. Distributing power to all four wheels lessens the chances of wheel spin. It also suffers less from oversteer and understeer.
- Electronic Stability Control (ESC) (also known for Mercedes-Benz proprietary Electronic Stability Program (ESP), Acceleration Slip Regulation (ASR) and Electronic differential lock (EDL)). Uses various sensors to intervene when the car senses a possible loss of control. The car's control unit can reduce power from the engine and even apply the brakes on individual wheels to prevent the car from understeering or oversteering.
- Dynamic steering response (DSR) corrects the rate of power steering system to adapt it to vehicle's speed and road conditions.
Research is ongoing and prototypes of autonomous ground vehicles exist.
Cars:
See also: Autonomous car
Extensive automation for cars focuses on either introducing robotic cars or modifying modern car designs to be semi-autonomous.
Semi-autonomous designs could be implemented sooner as they rely less on technology that is still at the forefront of research. An example is the dual mode monorail. Groups such as RUF (Denmark) and TriTrack (USA) are working on projects consisting of specialized private cars that are driven manually on normal roads but also that dock onto a monorail/guideway along which they are driven autonomously.
As a method of automating cars without extensively modifying the cars as much as a robotic car, Automated highway systems (AHS) aims to construct lanes on highways that would be equipped with, for example, magnets to guide the vehicles. Automation vehicles have auto-brakes named as Auto Vehicles Braking System (AVBS). Highway computers would manage the traffic and direct the cars to avoid crashes.
The European Commission has established a smart car development program called the Intelligent Car Flagship Initiative. The goals of that program include:
There are plenty of further uses for automation in relation to cars. These include:
- Assured Clear Distance Ahead
- Adaptive headlamps
- Advanced Automatic Collision Notification, such as OnStar
- Intelligent Parking Assist System
- Automatic Parking
- Automotive night vision with pedestrian detection
- Blind spot monitoring
- Driver Monitoring System
- Robotic car or self-driving car which may result in less-stressed "drivers", higher efficiency (the driver can do something else), increased safety and less pollution (e.g. via completely automated fuel control)
- Precrash system
- Safe speed governing
- Traffic sign recognition
- Following another car on a motorway – "enhanced" or "adaptive" cruise control, as used by Ford and Vauxhall
- Distance control assist – as developed by Nissan
- Dead man's switch – there is a move to introduce deadman's braking into automotive application, primarily heavy vehicles, and there may also be a need to add penalty switches to cruise controls.
Singapore also announced a set of provisional national standards on January 31, 2019, to guide the autonomous vehicle industry. The standards, known as Technical Reference 68 (TR68), will promote the safe deployment of fully driverless vehicles in Singapore, according to a joint press release by Enterprise Singapore (ESG), Land Transport Authority (LTA), Standards Development Organisation and Singapore Standards Council (SSC).
Shared autonomous vehicles:
Following recent developments in autonomous cars, shared autonomous vehicles are now able to run in ordinary traffic without the need for embedded guidance markers. So far the focus has been on low speed, 20 miles per hour (32 km/h), with short, fixed routes for the "last mile" of journeys.
This means issues of collision avoidance and safety are significantly less challenging than those for automated cars, which seek to match the performance of conventional vehicles.
Aside from 2getthere ("ParkShuttle"), three companies - Ligier ("Easymile EZ10"), Navya ("ARMA" & "Autonom Cab") and RDM Group ("LUTZ Pathfinder") - are manufacturing and actively testing such vehicles. Two other companies have produced prototypes, Local Motors ("Olli") and the GATEway project.
Beside these efforts, Apple is reportedly developing an autonomous shuttle, based on a vehicle from an existing automaker, to transfer employees between its offices in Palo Alto and Infinite Loop, Cupertino. The project called "PAIL", after its destinations, was revealed in August 2017 when Apple announced it had abandoned development of autonomous cars.
Click on any of the following blue hyperlinks for more about Vehicular Automation: ___________________________________________________________________________
Automated driving system:
An automated driving system is a complex combination of various components that can be defined as systems where perception, decision making, and operation of the automobile are performed by electronics and machinery instead of a human driver, and as introduction of automation into road traffic.
This includes handling of the vehicle, destination, as well as awareness of surroundings.
While the automated system has control over the vehicle, it allows the human operator to leave all responsibilities to the system.
Overview:
The automated driving system is generally an integrated package of individual automated systems operating in concert. Automated driving implies that the driver have given up the ability to drive (i.e., all appropriate monitoring, agency, and action functions) to the vehicle automation system. Even though the driver may be alert and ready to take action at any moment, automation system controls all functions.
Automated driving systems are often conditional, which implies that the automation system is capable of automated driving, but not for all conditions encountered in the course of normal operation. Therefore, a human driver is functionally required to initiate the automated driving system, and may or may not do so when driving conditions are within the capability of the system.
When the vehicle automation system has assumed all driving functions, the human is no longer driving the vehicle but continues to assume responsibility for the vehicle's performance as the vehicle operator.
The automated vehicle operator is not functionally required to actively monitor the vehicle's performance while the automation system is engaged, but the operator must be available to resume driving within several seconds of being prompted to do so, as the system has limited conditions of automation.
While the automated driving system is engaged, certain conditions may prevent real-time human input, but for no more than a few seconds. The operator is able to resume driving at any time subject to this short delay. When the operator has resumed all driving functions, he or she reassumes the status of the vehicle's driver.
Success in the technology:
The success in the automated driving system has been known to be successful in situations like rural road settings.
Rural road settings would be a setting in which there is lower amounts of traffic and lower differentiation between driving abilities and types of drivers.
"The greatest challenge in the development of automated functions is still inner-city traffic, where an extremely wide range of road users must be considered from all directions."
This technology is progressing to a more reliable way of the automated driving cars to switch from auto-mode to driver mode. Auto-mode is the mode that is set in order for the automated actions to take over, while the driver mode is the mode set in order to have the operator controlling all functions of the car and taking the responsibilities of operating the vehicle (Automated driving system not engaged).
This definition would include vehicle automation systems that may be available in the near term—such as traffic-jam assist, or full-range automated cruise control—if such systems would be designed such that the human operator can reasonably divert attention (monitoring) away from the performance of the vehicle while the automation system is engaged. This definition would also include automated platooning (such as conceptualized by the SARTRE project).
The SARTRE Project:
The SARTRE project's main goal is to create platooning, a train of automated cars, that will provide comfort and have the ability for the driver of the vehicle to arrive safely to a destination.
Along with the ability to be along the train, drivers that are driving past these platoons, can join in with a simple activation of the automated driving system that correlates with a truck that leads the platoon. The SARTRE project is taking what we know as a train system and mixing it with automated driving technology. This is intended to allow for an easier transportation though cities and ultimately help with traffic flow through heavy automobile traffic.
SARTRE & modern day:
In some parts of the world the self-driving car has been tested in real life situations such as in Pittsburgh. The Self-driving Uber has been put to the test around the city, driving with different types of drivers as well as different traffic situations. Not only have there been testing and successful parts to the automated car, but there has also been extensive testing in California on automated busses.
The lateral control of the automated buses uses magnetic markers such as the platoon at San Diego, while the longitudinal control of the automated truck platoon uses millimeter wave radio and radar.
Current examples around today's society include the Google car and Tesla's models. Tesla has redesigned automated driving, they have created car models that allow drivers to put in the destination and let the car take over. These are two modern day examples of the automated driving system cars.
Levels of automation according to SAE:
The U.S. Department of Transportation's National Highway Traffic Safety Administration (NHTSA) provided a standard classification system in 2013 that defined five levels of automation, ranging from level 0 (no automation) to level 4 (full automation).
Since then, the NHTSA has updated its standards to align with the classification system defined by SAE International. In standard SAE J3016, SAE International defines six levels of automation, ranging from 0 (no automation) to 5 (full automation).
Level 0 – No automation:
The driver is in complete control of the vehicle and the system does not interfere with driving. Systems that may fall into this category are forward collision warning systems and lane departure warning systems.
Level 1 – Driver assistance:
The driver is in control of the vehicle, but the system can modify the speed and steering direction of the vehicle. Systems that may fall into this category are adaptive cruise control and lane keep assist.
Level 2 – Partial automation:
The driver must be able to take control of the vehicle when corrections are needed, but the system controls the vehicle's speed and steering. Parking assistance and Tesla's Autopilot feature are examples of systems in this category.
Another system in this category is the DISTRONIC PLUS system created by Mercedes-Benz. It is important to note that the driver must not be distracted in Level 0 to Level 2 modes.
Level 3 – Conditional automation:
The system is in complete control of vehicle functions such as speed, steering, and monitoring the environment under specific conditions. Such conditions may be met, for example, on a fenced-off highway with no intersections, at limited driving speeds, or in a boxed-in driving situation.
A human driver must be ready to intervene when requested by the system. If the driver does not respond within a predefined time, or if a failure occurs in the system, the system must perform a safety stop in the ego lane (no lane change allowed). The driver may be partially distracted, for instance by checking text messages, but taking a nap is not allowed.
Level 4 – High automation:
The system is in complete control of the vehicle and human presence is no longer needed, but its applications are limited to specific conditions. An example of a system being developed that falls into this category is the Waymo self-driving car service.
If actual driving conditions exceed the system's performance boundaries, the system does not have to ask the human to intervene; it can instead abort the trip in a safe manner, e.g., by parking the car.
Level 5 – Full automation:
The system is capable of everything a Level 4 system can do, but it can operate in all driving conditions. The human is equivalent to "cargo" at Level 5. Currently, there are no driving systems at this level.
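Because the six levels form a simple ordered scale, they can be captured in a few lines of code. The following sketch encodes the SAE J3016 levels as summarized above, with a helper reflecting the attention rules noted earlier (no distraction at Levels 0 to 2); the names and the helper are illustrative, not part of the standard itself:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 automation levels as summarized above."""
    NO_AUTOMATION = 0           # warnings only, e.g. lane departure warning
    DRIVER_ASSISTANCE = 1       # speed OR steering assist, e.g. adaptive cruise
    PARTIAL_AUTOMATION = 2      # speed AND steering, e.g. DISTRONIC PLUS
    CONDITIONAL_AUTOMATION = 3  # system drives; human must answer takeover requests
    HIGH_AUTOMATION = 4         # no human needed, but only in specific conditions
    FULL_AUTOMATION = 5         # all driving conditions; the human is "cargo"

def driver_must_stay_attentive(level: SAELevel) -> bool:
    # Per the notes above: no distraction allowed at Levels 0-2;
    # Level 3 tolerates partial distraction; Levels 4-5 need no driver.
    return level <= SAELevel.PARTIAL_AUTOMATION
```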
Risks and liabilities:
See also: Computer security § Automobiles, and Autonomous car liability
Many automakers, such as Ford and Volvo, have announced plans to offer fully automated cars in the future. Extensive research and development is being put into automated driving systems, but the biggest factor automakers cannot control is how drivers will use the system.
Drivers are urged to stay attentive, and safety warnings are implemented to alert the driver when corrective action is needed. Tesla Motors has one recorded incident that resulted in a fatality involving the automated driving system in the Tesla Model S. The accident report reveals that the accident resulted from the driver being inattentive and the Autopilot system failing to recognize the obstruction ahead.
Another flaw of automated driving systems is that unpredictable events, such as weather or the driving behavior of others, may cause fatal accidents when the sensors that monitor the vehicle's surroundings are unable to trigger corrective action.
To overcome some of these challenges, novel methodologies based on virtual testing, traffic-flow simulation, and digital prototypes have been proposed, especially where novel algorithms based on artificial-intelligence approaches are employed, since these require extensive training and validation data sets.
The AI Effect
Pictured below: 4 Positive Effects of AI Use in Email Marketing
- YouTube Video: What is Effect.AI?
- YouTube Video: The Real Reason to be Afraid of Artificial Intelligence | Peter Haas | TEDxDirigo
- YouTube Video: 19 Industries The Blockchain* Will Disrupt
Pictured below: 4 Positive Effects of AI Use in Email Marketing
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.
Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."
AI researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"
"The AI effect" tries to redefine AI to mean:
AI is anything that has not been done yet:
A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.
Pamela McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."
When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used "brute force methods" and it wasn't real intelligence. Fred Reed writes:
"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright."
Douglas Hofstadter expresses the AI effect concisely by quoting Tesler's Theorem:
"AI is whatever hasn't been done yet."
When problems have not yet been formalized, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by the computer and the other part by the human. This formalization is referred to as a human-assisted Turing machine.
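As a toy illustration of that split, the sketch below defers to a human callback whenever the machine solver cannot handle the problem. Both callbacks are hypothetical interfaces invented for this example, not part of any formal definition:

```python
from typing import Callable, Optional

def human_assisted_solve(problem: str,
                         machine: Callable[[str], Optional[str]],
                         human: Callable[[str], str]) -> str:
    """Split the computational burden: the machine handles the formalized
    part and returns None for anything it cannot solve, which is then
    deferred to the human side of the split."""
    answer = machine(problem)
    return answer if answer is not None else human(problem)

# Usage sketch: the machine solves only simple sums; everything else
# falls through to the human.
print(human_assisted_solve(
    "2+3",
    machine=lambda p: str(sum(map(int, p.split("+")))) if "+" in p else None,
    human=lambda p: "answered by a person",
))
```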
AI applications become mainstream:
Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI.
Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."
According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"
Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"
Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."
Legacy of the AI winter:
Main article: AI winter
Many AI researchers find that they can procure more funding and sell more software if they avoid the tarnished name of "artificial intelligence" and instead pretend their work has nothing to do with intelligence at all. This was especially true in the early 1990s, during the second "AI winter".
Patty Tascarella writes "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding".
Saving a place for humanity at the top of the chain of being:
Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe". By discounting artificial intelligence people can continue to feel unique and special.
Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence.
A related effect has been noted in the history of animal cognition and in consciousness studies: every time a capacity formerly thought to be uniquely human is discovered in animals (e.g., the ability to make tools or pass the mirror test), the overall importance of that capacity is deprecated.
Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."
See also:
- "If It Works, It's Not AI: A Commercial Look at Artificial Intelligence startups"
- ELIZA effect
- Functionalism (philosophy of mind)
- Moravec's paradox
- Chinese room
Artificial Intelligence: Its History, Timeline and Progress
- YouTube Video: Artificial Intelligence, the History and Future - with Chris Bishop
- YouTube Video about Artificial Intelligence: Mankind's Last Invention?
- YouTube Video: Sam Harris on Artificial Intelligence
History of artificial intelligence
The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen.
The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.
Eventually, it became obvious that they had grossly underestimated the difficulty of the project.
In 1973, in response to the criticism from James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "AI winter".
Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.
Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to new methods, the application of powerful computer hardware, and the collection of immense data sets.
Click on any of the following blue hyperlinks for more about the History of Artificial Intelligence:
- AI in myth, fiction and speculation
- Automatons
- Formal reasoning
- Computer science
- The birth of artificial intelligence 1952–1956
- The golden years 1956–1974
- The first AI winter 1974–1980
- Boom 1980–1987
- Bust: the second AI winter 1987–1993
- AI 1993–2011
- Deep learning, big data and artificial general intelligence: 2011–present
- See also:
The following is the Timeline of Artificial Intelligence:
- To 1900
- 1901–1950
- 1950s
- 1960s
- 1970s
- 1980s
- 1990s
- 2000s
- 2010s
- See also:
- "Brief History (timeline)", AI Topics, Association for the Advancement of Artificial Intelligence
- "Timeline: Building Smarter Machines", New York Times, 24 June 2010
Progress in artificial intelligence:
Artificial intelligence applications have been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys.
However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry."
In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes.
Kaplan and Haenlein structure artificial intelligence along three evolutionary stages:
- 1) artificial narrow intelligence – AI applied only to specific tasks;
- 2) artificial general intelligence – AI applied to several areas, able to autonomously solve problems it was never designed for;
- 3) artificial super intelligence – AI applicable to any area, capable of scientific creativity, social skills, and general wisdom.
To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject matter expert Turing tests.
Also, smaller problems provide more achievable goals, and there is an ever-increasing number of positive results.
Click on any of the following blue hyperlinks for more about the Progress of Artificial Intelligence: