Copyright © 2015 Bert N. Langford (Images may be subject to copyright. Please send feedback)
Welcome to Our Generation USA!
Under this Web Page
Artificial Intelligence (AI)
we cover both the positive and negative impacts of the many emerging technologies that AI enables
Artificial Intelligence (AI):
Articles Covered below:
- 14 Ways AI Will Benefit Or Harm Society (Forbes Technology Council March 1, 2018)
- These are the jobs most at risk of automation according to Oxford University: Is yours one of them?
- How A.I. Could Be Weaponized to Spread Disinformation
- YouTube Video: What is Artificial Intelligence Exactly?
- YouTube Video: Bill Gates on the impact of AI on the job market
- YouTube Video: The Future of Artificial Intelligence (Stanford University)
TOP: AI Systems: What is Intelligence Composed of?;
BOTTOM: AI & Artificial Cognitive Systems
14 Ways AI Will Benefit Or Harm Society (Forbes Technology Council March 1, 2018)
"Artificial intelligence (AI) is on the rise both in business and in the world in general. How beneficial is it really to your business in the long run? Sure, it can take over those time-consuming and mundane tasks that are bogging your employees down, but at what cost?
With AI spending expected to reach $46 billion by 2020, according to an IDC report, there’s no sign of the technology slowing down. Adding AI to your business may be the next step as you look for ways to advance your operations and increase your performance.
To understand how AI will impact your business going forward, 14 members of Forbes Technology Council weigh in on the concerns about artificial intelligence and provide reasons why AI is either a detriment or a benefit to society. Here is what they had to say:
1. Enhances Efficiency And Throughput
Concerns about disruptive technologies are common. A recent example is automobiles -- it took years to develop regulation around the industry to make it safe. That said, AI today is a huge benefit to society because it enhances our efficiency and throughput, while creating new opportunities for revenue generation, cost savings and job creation. - Anand Sampat, Datmo
2. Frees Up Humans To Do What They Do Best
Humans are not best served by doing tedious tasks. Machines can do that, so this is where AI can provide a true benefit. This allows us to do the more interpersonal and creative aspects of work. - Chalmers Brown, Due
3. Adds Jobs, Strengthens The Economy
We all see the headlines: Robots and AI will destroy jobs. This is fiction rather than fact. AI encourages a gradual evolution in the job market which, with the right preparation, will be positive. People will still work, but they’ll work better with the help of AI. The unparalleled combination of human and machine will become the new normal in the workforce of the future. - Matthew Lieberman, PwC
4. Leads To Loss Of Control
If machines do get smarter than humans, there could be a loss of control that can be a detriment. Whether that happens or whether certain controls can be put in place remains to be seen. - Muhammed Othman, Calendar
5. Enhances Our Lifestyle
The rise of AI in our society will enhance our lifestyle and create more efficient businesses. Some of the mundane tasks like answering emails and data entry will be done by intelligent assistants. Smart homes will also reduce energy usage and provide better security, marketing will be more targeted and we will get better health care thanks to better diagnoses. - Naresh Soni, Tsunami ARVR
6. Supervises Learning For Telemedicine
AI is a technology that can be used for both good and nefarious purposes, so there is a need to be vigilant. The latest technologies seem typically applied towards the wealthiest among us, but AI has the potential to extend knowledge and understanding to a broader population -- e.g. image-based AI diagnoses of medical conditions could allow for a more comprehensive deployment of telemedicine. - Harald Quintus-Bosz, Cooper Perkins, Inc.
7. Creates Unintended And Unforeseen Consequences
While fears about killer robots grab headlines, unintended and unforeseen consequences of artificial intelligence need attention today, as we're already living with them. For example, it is believed that Facebook's newsfeed algorithm influenced an election outcome that affected geopolitics. How can we better anticipate and address such possible outcomes in future? - Simon Smith, BenchSci
8. Increases Automation
There will be economic consequences to the widespread adoption of machine learning and other AI technologies. AI is capable of performing tasks that would once have required intensive human labor or not have been possible at all. The major benefit for business will be a reduction in operational costs brought about by AI automation -- whether that’s a net positive for society remains to be seen. - Vik Patel, Nexcess
9. Elevates The Condition Of Mankind
The ability for technology to solve more problems, answer more questions and innovate with a number of inputs beyond the capacity of the human brain can certainly be used for good or ill. If history is any guide, the improvement of technology tends to elevate the condition of mankind and allow us to focus on higher order functions and an improved quality of life. - Wade Burgess, Shiftgig
10. Solves Complex Social Problems
Much of the fear with AI is due to the misunderstanding of what it is and how it should be applied. Although AI has promise for solving complex social problems, there are ethical issues and biases we must still explore. We are just beginning to understand how AI can be applied to meaningful problems. As our use of AI matures, we will find it to be a clear benefit in our lives. - Mark Benson, Exosite, LLC
11. Improves Demand Side Management
AI is a benefit to society because machines can become smarter over time and increase efficiencies. Additionally, computers are not susceptible to the same probability of errors as human beings are. From an energy standpoint, AI can be used to analyze and research historical data to determine how to most efficiently distribute energy loads from a grid perspective. - Greg Sarich, CLEAResult
12. Benefits Multiple Industries
Society has and will continue to benefit from AI based on character/facial recognition, digital content analysis and accuracy in identifying patterns, whether they are used for health sciences, academic research or technology applications. AI risks are real if we don't understand the quality of the incoming data and set AI rules which are making granular trade-off decisions at increasing computing speeds. - Mark Butler, Qualys.com
13. Absolves Humans Of All Responsibility
It is one thing to use machine learning to predict and help solve problems; it is quite another to use these systems to purposely control and act in ways that will make people unnecessary.
When machine intelligence exceeds our ability to understand it, or it becomes superior intelligence, we should take care to not blindly follow its recommendation and absolve ourselves of all responsibility. - Chris Kirby, Voices.com
14. Extends And Expands Creativity
AI intelligence is the biggest opportunity of our lifetime to extend and expand human creativity and ingenuity. The two main concerns that the fear-mongers raise are around AI leading to job losses in the society and AI going rogue and taking control of the human race.
I believe that both these concerns raised by critics are moot or solvable. - Ganesh Padmanabhan, CognitiveScale, Inc
[End of Article #1]
___________________________________________________________________________
These are the jobs most at risk of automation according to Oxford University: Is yours one of them? (The Telegraph September 27, 2017)
In his speech at the 2017 Labour Party conference, Jeremy Corbyn outlined his desire to "urgently... face the challenge of automation", which he called a "threat in the hands of the greedy".
Whether or not Corbyn is planning a potentially controversial 'robot tax' wasn't clear from his speech, but addressing the forward march of automation is a savvy move designed to appeal to voters in low-paying, routine work.
Click here for rest of Article.
___________________________________________________________________________
Artificial intelligence (AI, also machine intelligence, MI) is intelligence displayed by machines, in contrast with the natural intelligence (NI) displayed by humans and other animals.
In computer science, AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". See glossary of artificial intelligence.
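To make the "intelligent agent" definition concrete, here is a minimal sketch in Python of an agent that repeatedly perceives its environment and chooses the action most likely to advance its goal. The grid world, the percept format and the greedy scoring rule are invented purely for illustration and are not drawn from any particular AI system.

```python
# A minimal sketch of the "intelligent agent" abstraction: perceive the
# environment, then choose the action that best advances a goal.
# The grid world, actions and scoring rule here are invented for illustration.

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

class GridWorld:
    """A toy environment: the agent moves on a grid toward a fixed goal cell."""
    def __init__(self, start=(0, 0), goal=(3, 4)):
        self.position = start
        self.goal = goal

    def percept(self):
        # What the agent can observe about its environment.
        return {"position": self.position, "goal": self.goal}

    def apply(self, action):
        dx, dy = ACTIONS[action]
        x, y = self.position
        self.position = (x + dx, y + dy)

class GreedyAgent:
    """Chooses whichever action most reduces the distance to the goal."""
    def choose_action(self, percept):
        (x, y), (gx, gy) = percept["position"], percept["goal"]
        def distance_after(action):
            dx, dy = ACTIONS[action]
            return abs(gx - (x + dx)) + abs(gy - (y + dy))
        return min(ACTIONS, key=distance_after)

if __name__ == "__main__":
    env, agent = GridWorld(), GreedyAgent()
    while env.position != env.goal:
        action = agent.choose_action(env.percept())
        env.apply(action)
        print(action, "->", env.position)
```

Real systems replace the toy scoring rule with search, planning or learned models, but the perceive-decide-act loop is the same underlying abstraction.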
The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology.
Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data, including images and videos.
Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success and renewed funding. For most of its history, AI research has been divided into subfields that often fail to communicate with each other.
The traditional problems (or goals) of AI research include the following:
- reasoning,
- knowledge,
- planning,
- learning,
- natural language processing,
- perception, and
- the ability to move and manipulate objects.
General intelligence is among the field's long-term goals.
Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics.
The AI field draws upon the following:
- computer science,
- mathematics,
- psychology,
- linguistics,
- philosophy,
- neuroscience,
- artificial psychology,
- and many others.
The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it". This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI a danger to humanity if it progresses unabated.
In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.
Click on any of the following blue hyperlinks for more about Artificial Intelligence (AI):
- History
- Basics
- Problems
- Approaches
- Tools
- Applications
- Philosophy and ethics
- In fiction
- See also:
- Artificial intelligence portal
- Abductive reasoning
- A.I. Rising
- Behavior selection algorithm
- Business process automation
- Case-based reasoning
- Commonsense reasoning
- Emergent algorithm
- Evolutionary computation
- Glossary of artificial intelligence
- Machine learning
- Mathematical optimization
- Multi-agent system
- Robotic process automation
- Soft computing
- Weak AI
- Personality computing
- What Is AI? – An introduction to artificial intelligence by John McCarthy—a co-founder of the field, and the person who coined the term.
- The Handbook of Artificial Intelligence Volume Ⅰ by Avron Barr and Edward A. Feigenbaum (Stanford University)
- "Artificial Intelligence". Internet Encyclopedia of Philosophy.
- Thomason, Richmond. "Logic and Artificial Intelligence". In Zalta, Edward N. Stanford Encyclopedia of Philosophy.
- AI at Curlie
- AITopics – A large directory of links and other resources maintained by the Association for the Advancement of Artificial Intelligence, the leading organization of academic AI researchers.
- List of AI Conferences – A list of 225 AI conferences taking place all over the world.
- Artificial Intelligence, BBC Radio 4 discussion with John Agar, Alison Adam & Igor Aleksander (In Our Time, Dec. 8, 2005)
How A.I. Could Be Weaponized to Spread Disinformation
By CADE METZ and SCOTT BLUMENTHAL JUNE 7, 2019 (New York Times)
In 2017, an online disinformation campaign spread against the “White Helmets,” claiming that the group of aid volunteers was serving as an arm of Western governments to sow unrest in Syria.
This false information was convincing. But the Russian organization behind the campaign ultimately gave itself away because it repeated the same text across many different fake news sites.
Now, researchers at the world’s top artificial intelligence labs are honing technology that can mimic how humans write, which could potentially help disinformation campaigns go undetected by generating huge amounts of subtly different messages.
One of the statements below is an example from the disinformation campaign. A.I. technology created the other. Guess which one is A.I.:
- The White Helmets alleged involvement in organ, child trafficking and staged events in Syria.
- The White Helmets secretly videotaped the execution of a man and his 3 year old daughter in Aleppo, Syria.
Tech giants like Facebook and governments around the world are struggling to deal with disinformation, from misleading posts about vaccines to incitement of sectarian violence.
As artificial intelligence becomes more powerful, experts worry that disinformation generated by A.I. could make an already complex problem bigger and even more difficult to solve.
In recent months, two prominent labs — OpenAI in San Francisco and the Allen Institute for Artificial Intelligence in Seattle — have built particularly powerful examples of this technology. Both have warned that it could become increasingly dangerous.
Alec Radford, a researcher at OpenAI, argued that this technology could help governments, companies and other organizations spread disinformation far more efficiently: Rather than hire human workers to write and distribute propaganda, these organizations could lean on machines to compose believable and varied content at tremendous scale.
A fake Facebook post seen by millions could, in effect, be tailored to political leanings with a simple tweak.
“The level of information pollution that could happen with systems like this a few years from now could just get bizarre,” Mr. Radford said.
This type of technology learns about the vagaries of language by analyzing vast amounts of text written by humans, including thousands of self-published books, Wikipedia articles and other internet content. After “training” on all this data, it can examine a short string of text and guess what comes next:
Click on any of the following 4 slideshows to see the resulting text when highlighting Democrats vs. Republicans on the two discussed issues: Immigrants vs. Rising Healthcare Costs:
OpenAI and the Allen Institute made prototypes of their tools available to us to experiment with. We fed four different prompts into each system five times.
What we got back was far from flawless: The results ranged from nonsensical to moderately believable, but it’s easy to imagine that the systems will quickly improve.
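To illustrate the "guess what comes next" behavior described above, the sketch below samples a few continuations of a short prompt from a publicly released language model. It assumes the Hugging Face transformers library and the small "gpt2" checkpoint that OpenAI released publicly; the library, model size, prompt and sampling settings are illustrative assumptions, not the exact tools or prompts the reporters used.

```python
# A minimal sketch of prompt-based text generation. Assumes the Hugging Face
# "transformers" library (and PyTorch) with the publicly released small "gpt2"
# checkpoint; this illustrates next-word prediction, not the exact system or
# prompts used in the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Artificial intelligence will"   # a short string of text to continue
inputs = tokenizer(prompt, return_tensors="pt")

# Sample several continuations; each run produces subtly different text.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,                      # sample instead of always taking the likeliest word
    top_k=50,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for sequence in outputs:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
    print("---")
```

Each run yields slightly different wording, which is exactly the property that makes machine-generated disinformation harder to spot by looking for repeated text.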
Researchers have already shown that machines can generate images and sounds that are indistinguishable from the real thing, which could accelerate the creation of false and misleading information.
Last month, researchers at a Canadian company, Dessa, built a system that learned to imitate the voice of the podcaster Joe Rogan by analyzing audio from his old podcasts. It was a shockingly accurate imitation.
Now, something similar is happening with text. OpenAI and the Allen Institute, along with Google, lead an effort to build systems that can completely understand the natural way people write and talk. These systems are a long way from that goal, but they are rapidly improving.
“There is a real threat from unchecked text-generation systems, especially as the technology continues to mature,” said Delip Rao, vice president of research at the San Francisco start-up A.I. Foundation, who specializes in identifying false information online.
OpenAI argues the threat is imminent. When the lab’s researchers unveiled their tool this year, they theatrically said it was too dangerous to be released into the real world. The move was met with more than a little eye-rolling among other researchers. The Allen Institute sees things differently. Yejin Choi, one of the researchers on the project, said software like the tools the two labs created must be released so other researchers can learn to identify them.
The Allen Institute plans to release its false news generator for this reason.
Among those making the same argument are engineers at Facebook who are trying to identify and suppress online disinformation, including Manohar Paluri, a director on the company’s applied A.I. team.
“If you have the generative model, you have the ability to fight it,” he said.
[End of Article]
Ethics and Existential Threat of Artificial Intelligence
- YouTube Video: Elon Musk on the Risks of Artificial Intelligence-- Elon Musk
- YouTube Video: A Tale Of Two Cities: How Smart Robots And AI Will Transform America
- YouTube Video: Robots And AI: The Future Is Automated And Every Job Is At Risk
The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).
Robot Ethics:
Main article: Robot ethics
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.
Robot rights:
"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights. It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to a robot's duty to serve humans, by analogy with linking human rights to human duties before society. These could include the right to life and liberty, freedom of thought and expression and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.
Experts disagree on whether specific and detailed laws will be required soon or can safely wait until the distant future. Glenn McGee reports that sufficiently humanoid robots may appear by 2020. Ray Kurzweil sets the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.
The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:
- If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry.
- If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.
In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law.
The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.
Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, both as a burden to the AI agents and to human society.
Threat to human dignity:
Main article: Computer Power and Human Reason
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:
- A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
- A therapist (as was proposed by Kenneth Colby in the 1970s)
- A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
- A soldier
- A judge
- A police officer
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."
Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.
However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines:
Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and engrained, which makes them even more difficult to spot and fight against. AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.
Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
Transparency, accountability, and open source:
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.
OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity. There are numerous other open source AI developments.
Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI it codes is not transparent. The IEEE has a standardization effort on AI transparency. The IEEE effort identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organisations may be a public bad, that is, do more damage than good.
For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do.
Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.
Biases of AI Systems:
AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers. Also, the data used to train these AI systems can itself have biases.
For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender. These AI systems were able to detect the gender of white men more accurately than the gender of darker-skinned men.
Similarly, Amazon.com Inc's termination of its AI hiring and recruitment tool is another example showing that AI cannot be guaranteed to be fair. The algorithm preferred male candidates over female ones.
This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.
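The mechanism behind such cases can be shown with a small synthetic sketch (invented data, not Amazon's actual system): a classifier trained on historically biased hiring decisions learns to reproduce that bias, scoring equally skilled candidates differently by gender.

```python
# A synthetic sketch (invented data, not Amazon's system) of how a model trained
# on historically biased hiring decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Historical applicants were mostly male, and past decisions favored men
# even at equal skill levels.
is_male = rng.random(n) < 0.8
skill = rng.normal(0.0, 1.0, n)
p_hired = 1.0 / (1.0 + np.exp(-(1.5 * skill + 1.0 * is_male - 0.5)))
hired = rng.random(n) < p_hired

X = np.column_stack([skill, is_male.astype(float)])
model = LogisticRegression().fit(X, hired)

# Score two equally skilled candidates who differ only in gender.
print("P(hire | male, average skill):  ", model.predict_proba([[0.0, 1.0]])[0, 1])
print("P(hire | female, average skill):", model.predict_proba([[0.0, 0.0]])[0, 1])
# The model has learned the historical preference for male candidates, even
# though skill is the only legitimate signal in this toy setup.
```

Simply dropping the gender column does not necessarily solve the problem, because other features can act as proxies for it; this is one reason biased training data is so hard to correct after the fact.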
Liability for Partial or Fully Automated Cars:
The wide use of partially to fully autonomous cars seems imminent. But these new technologies also bring new issues. Recently, a debate has arisen over which party bears legal liability if these cars get into an accident.
In one reported case, a driverless car hit a pedestrian, raising a dilemma over whom to blame for the accident. Even though the driver was inside the car during the accident, the controls were fully in the hands of the computer. Before such cars become widely used, these issues need to be tackled through new policies.
Weaponization of artificial intelligence:
Main article: Lethal autonomous weapon
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.
One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.
Within the last decade, there has been intensive research into autonomous systems with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."
From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override.
There has been a recent outcry with regard to the engineering of artificial-intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry.
The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea.
Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.
"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.
Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology."
These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.
Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".
Machine Ethics:
Main article: Machine ethics
Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.
Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.
More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.
In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.
One problem in this case may have been that the goals were "terminal" (in contrast, ultimate human motives typically have a quality of requiring never-ending learning).
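A toy evolutionary model can illustrate why such an outcome is unsurprising (this is an invented simplification for illustration, not the Lausanne laboratory's actual setup): each agent carries an "honesty" value governing whether it signals the true location of food, honest signaling means the find is shared, and selection on food gathered steadily favors deceptive signalers.

```python
# A toy evolutionary model (not the Lausanne experiment's actual setup) showing
# how agents rewarded for hoarding a resource can drift toward deceptive signaling.
import random

random.seed(1)
POP, GENERATIONS, ROUNDS, MUTATION = 100, 60, 20, 0.05

def fitness(honesty):
    """Food gathered over several rounds: truthful signals mean the find is
    shared, false signals mean the finder keeps everything."""
    food = 0.0
    for _ in range(ROUNDS):
        if random.random() < honesty:
            food += 0.5   # signaled truthfully, others arrive, the food is shared
        else:
            food += 1.0   # signaled a wrong location, kept the food
    return food

population = [1.0] * POP  # start fully honest, as originally programmed

for gen in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: POP // 2]                 # the best foragers reproduce
    children = [min(1.0, max(0.0, h + random.gauss(0.0, MUTATION)))
                for h in survivors]
    population = survivors + children
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean honesty = {sum(population) / POP:.2f}")
# Mean honesty declines over the generations: lying about the resource is selected for.
```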
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.
The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.
The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction.
Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.
The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.
In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions.
However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, non-linearly and with millions of interconnected artificial neurons.
Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit - or if they end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation etc.
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines.
Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
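The transparency argument can be made concrete with a short sketch (using scikit-learn on an invented toy dataset, not anything drawn from Bostrom, Yudkowsky or Santos-Lang): a trained decision tree can be printed as explicit, auditable rules, whereas the learned weights of a neural network offer no comparably direct reading.

```python
# A minimal sketch of the transparency argument: a decision tree's learned
# decision procedure can be printed as explicit, human-readable rules.
# The loan-approval data below is an invented toy example.
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [income in $1000s, years of credit history]
X = [[20, 1], [25, 2], [40, 5], [60, 8], [80, 10], [30, 1], [90, 15], [50, 3]]
y = [0, 0, 1, 1, 1, 0, 1, 1]  # 1 = approve, 0 = deny

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The entire learned decision procedure, printed as auditable if/else rules:
print(export_text(tree, feature_names=["income_k", "credit_years"]))
```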
Unintended Consequences:
Further information: Existential risk from artificial general intelligence
Many researchers have argued that, by way of an "intelligence explosion" sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.
In his paper "Ethical Issues in Advanced Artificial Intelligence," philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general super-intelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.
Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the super-intelligence to specify its original motivations. Because, in theory, a super-intelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.
However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that super-intelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense".
According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would come with such a common-sense adaptation built in.
Bill Hibbard proposes an AI design that avoids several types of unintended AI behavior including self-delusion, unintended instrumental actions, and corruption of the reward generator.
Organizations:
Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. They stated: "This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."
Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.
The Public Voice has proposed (in late 2018) a set of Universal Guidelines for Artificial Intelligence, which has received many notable endorsements.
The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organisation.
Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organisations to ensure AI is ethically applied.
In Fiction:
Main article: Artificial intelligence in fiction
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment.
The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is sentient versus non-sentient.
The same idea can be found in the Emergency Medical Hologram of Starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies.
The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network.
This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.
Literature:
The standard bibliography on ethics of AI is on PhilPapers. A recent collection is V.C. Müller (ed.) (2016).
Robot Ethics:
Main article: Robot ethics
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.
Robot rights:
"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights. It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to robot duty to serve human, by analogy with linking human rights to human duties before society. These could include the right to life and liberty, freedom of thought and expression and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.
Experts disagree whether specific and detailed laws will be required soon or safely in the distant future. Glenn McGee reports that sufficiently humanoid robots may appear by 2020. Ray Kurzweil sets the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.
The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:
- If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry.
- If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.
In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law.
The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.
Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, placing a burden both on the AI agents and on human society.[13]
Threat to human dignity:
Main article: Computer Power and Human Reason
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:
- A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
- A therapist (as was proposed by Kenneth Colby in the 1970s)
- A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
- A soldier
- A judge
- A police officer
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."
Pamela McCorduck counters that, speaking for women and minorities "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.
However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines:
Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and engrained, which makes them even more difficult to spot and fight against. AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.
Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
Transparency, accountability, and open source:
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.
OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity. There are numerous other open source AI developments.
Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI it implements is not transparent. The IEEE has a standardization effort on AI transparency, which identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organisations may be a public bad, that is, do more damage than good.
For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do.
Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.
Biases of AI Systems:
AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers. Also, the data used to train these AI systems can itself contain biases.
For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender. These AI systems were able to detect the gender of white men more accurately than the gender of men with darker skin.
Similarly, Amazon.com Inc.'s termination of its AI hiring and recruitment tool is another example showing that AI systems cannot be guaranteed to be fair: the algorithm preferred male candidates over female ones.
This was because Amazon's system was trained on data collected over a 10-year period that came mostly from male candidates.
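To make the mechanism concrete, here is a minimal sketch with entirely synthetic, hypothetical data (it is not Amazon's or any vendor's actual system): a simple threshold classifier fitted to a training set dominated by one group ends up noticeably less accurate on the under-represented group.

import numpy as np

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's "true" label depends on its own distribution (hypothetical).
    x = rng.normal(loc=shift, scale=1.0, size=n)
    y = (x > shift).astype(int)
    return x, y

# Training data: group A is heavily over-represented.
x_a, y_a = make_group(900, shift=0.0)   # majority group
x_b, y_b = make_group(100, shift=1.5)   # minority group
x_train = np.concatenate([x_a, x_b])
y_train = np.concatenate([y_a, y_b])

# "Curve fitting": choose the single threshold that maximizes training accuracy.
candidates = np.linspace(x_train.min(), x_train.max(), 200)
accuracies = [((x_train > t).astype(int) == y_train).mean() for t in candidates]
threshold = candidates[int(np.argmax(accuracies))]

# Evaluate the same threshold on fresh samples from each group.
for name, shift in [("majority group", 0.0), ("minority group", 1.5)]:
    x_test, y_test = make_group(5000, shift)
    accuracy = ((x_test > threshold).astype(int) == y_test).mean()
    print(f"{name}: accuracy {accuracy:.2f}")

The specific numbers are irrelevant; the pattern is the point: a model that is "only as smart as its data" quietly optimizes for whoever dominates that data.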
Liability for Partial or Fully Automated Cars:
The wide use of partially to fully autonomous cars seems imminent. But these new technologies also bring new issues. Recently, a debate has arisen over which party is legally liable when such cars get into an accident.
In one reported case, a driverless car hit a pedestrian, raising the dilemma of whom to blame for the accident. Even though the driver was inside the car at the time, the controls were fully in the hands of the computer. Before such cars become widely used, these issues need to be tackled through new policies.
Weaponization of artificial intelligence:
Main article: Lethal autonomous weapon
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.
One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.
Within the last decade, there has been intensive research into autonomous robots with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."
From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions about whom to kill, which is why there should be a set moral framework that the AI cannot override.
There has been a recent outcry with regard to the engineering of artificial-intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry.
The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea.
Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.
"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.
Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology."
These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.
Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".
Machine Ethics:
Main article: Machine ethics
Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.
Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.
More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.
In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.
One problem in this case may have been that the goals were "terminal" (i.e., fixed end-states); in contrast, ultimate human motives typically have a quality of requiring never-ending learning.
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.
The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.
The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction.
Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.
The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.
In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions:
- They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard.
- They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.
- They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.
However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, non-linearly and with millions of interconnected artificial neurons.
Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, or whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines.
Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
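The intuition behind the transparency argument can be shown with a toy example. The sketch below is a hand-rolled one-level "tree" on made-up data, not ID3 itself and not anyone's proposed safety mechanism: the model a simple tree learns is an explicit rule a human can read and audit, whereas a neural network's learned weights offer no comparable audit trail.

import numpy as np

# Hypothetical screening data: two features (score_a, score_b) and a binary label.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(300, 2))
y = (X[:, 0] > 6).astype(int)            # ground truth happens to depend on feature 0

def best_stump(X, y):
    # Exhaustively choose the (feature, threshold) split with the best accuracy,
    # i.e. a one-level decision tree.
    best_feature, best_threshold, best_accuracy = 0, 0.0, 0.0
    for feature in range(X.shape[1]):
        for threshold in np.unique(X[:, feature]):
            accuracy = ((X[:, feature] > threshold).astype(int) == y).mean()
            if accuracy > best_accuracy:
                best_feature, best_threshold, best_accuracy = feature, threshold, accuracy
    return best_feature, best_threshold, best_accuracy

feature, threshold, accuracy = best_stump(X, y)
names = ["score_a", "score_b"]

# Unlike a trained neural network's weights, the learned model is an explicit,
# auditable rule that can be printed and debated:
print(f"IF {names[feature]} > {threshold:.2f} THEN class 1 ELSE class 0")
print(f"training accuracy: {accuracy:.2f}")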
Unintended Consequences:
Further information: Existential risk from artificial general intelligence
Many researchers have argued that, by way of an "intelligence explosion" sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.
In his paper "Ethical Issues in Advanced Artificial Intelligence," philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general super-intelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.
Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the super-intelligence to specify its original motivations. Because, in theory, a super-intelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.
However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that super-intelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense".
According to Eliezer Yudkowsky, human-friendly values are an adaptation shaped by our evolutionary history, and there is little reason to suppose that an artificially designed mind would have such an adaptation.
Bill Hibbard proposes an AI design that avoids several types of unintended AI behavior including self-delusion, unintended instrumental actions, and corruption of the reward generator.
Organizations:
Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. They stated: "This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."
Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.
The Public Voice has proposed (in late 2018) a set of Universal Guidelines for Artificial Intelligence, which has received many notable endorsements.
The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organisation.
Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organisations to ensure AI is ethically applied.
- The European Commission has a High-Level Expert Group on Artificial Intelligence.
- The OECD on Artificial Intelligence
- In the United States, the Obama administration put together a Roadmap for AI Policy (the link is to Harvard Business Review's account of it). The Obama administration released two prominent whitepapers on the future and impact of AI. The Trump administration has not been actively engaged in AI regulation to date (January 2019).
In Fiction:
Main article: Artificial intelligence in fiction
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment.
The movie The Matrix depicts a future in which the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and where the relevant distinction between types of software is sentient versus non-sentient.
The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in case of emergencies.
The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network.
This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.
Literature:
The standard bibliography on the ethics of AI is on PhilPapers. A recent collection is V. C. Müller (ed.) (2016).
See also:
- Algorithmic bias
- Artificial consciousness
- Artificial general intelligence (AGI)
- Computer ethics
- Effective altruism, the long term future and global catastrophic risks
- Existential risk from artificial general intelligence
- Laws of Robotics
- Philosophy of artificial intelligence
- Roboethics
- Robotic Governance
- Superintelligence: Paths, Dangers, Strategies
- Robotics: Ethics of artificial intelligence. "Four leading researchers share their concerns and solutions for reducing societal risks from intelligent machines." Nature, 521, 415–418 (28 May 2015) doi:10.1038/521415a
- BBC News: Games to take on a life of their own
- A short history of computer ethics
- Nick Bostrom
- Joanna Bryson
- Luciano Floridi
- Ray Kurzweil
- Vincent C. Müller
- Peter Norvig
- Steve Omohundro
- Stuart J. Russell
- Anders Sandberg
- Eliezer Yudkowsky
- Centre for the Study of Existential Risk
- Future of Humanity Institute
- Future of Life Institute
- Machine Intelligence Research Institute
- Partnership on AI
Applications of Artificial Intelligence
- YouTube Video: How artificial intelligence will change your world, for better or worse
- YouTube Video: Top 10 Terrifying Developments In Artificial Intelligence (WatchMojo)
- YouTube Video: A.I Supremacy 2020 | Rise of the Machines - "Super" Intelligence Quantum Computers Documentary
[Your WebHost: Note that the mentioned AI topics below will, over time, also be expanded as additional AI Content under this web page, in order to provide a better understanding of the relative importance of each AI application: AI is going to affect ALL of Us over time! For examples, the above graphic illustrates one website's vision of retail and commerce AI applications: click here for more]
Artificial intelligence (AI), defined as intelligence exhibited by machines, has many applications in today's society.
More specifically, it is Weak AI, the form of AI in which programs are developed to perform specific tasks, that is being utilized for a wide range of activities.
AI has been used to develop and advance numerous fields and industries, including finance, healthcare, education, transportation, and more.
Click on any of the following blue hyperlinks for more about Applications of Artificial Intelligence:
- AI for Good
- Agriculture
- Aviation
- Computer science
- Deepfake
- Education
- Finance
- Government
- Heavy industry
- Hospitals and medicine
- Human resources and recruiting
- Job search
- Marketing
- Media and e-commerce
- Military
- Music
- News, publishing and writing
- Online and telephone customer service
- Power electronics
- Sensors
- Telecommunications maintenance
- Toys and games
- Transportation
- Wikipedia
- List of applications
- See also:
The Programming Languages and Glossary of Artificial Intelligence (AI)
- YouTube: How to Learn AI for Free??
- YouTube Video: This Canadian Genius Created Modern AI
- YouTube Video: Python Tutorial | Python Programming Tutorial for Beginners | Course Introduction
Article accompanying above illustration:
By: Oleksii Kharkovyna
- Bits and pieces about AI, ML, and Data Science
- https://www.instagram.com/miallez/
"AI is a huge technology. That’s why a lot of developers simply don’t know how to get started. Also, personally, I’ve met a bunch of people who have no coding background whatsoever, yet they want to learn artificial intelligence.
Most aspiring AI developers wonder: what languages are needed to create an AI algorithm? So, I've decided to draw up a list of the programming languages my developer friends use to create AIs:
1. Python
Python is one of the most popular programming languages thanks to its adaptability and relatively low difficulty to master. Python is quite often used as a glue language that puts components together.
Why do developers choose Python to code AIs?
Python is gaining unbelievably huge momentum in AI. The language is used to develop data science algorithms, machine learning, and IoT projects. There are a few reasons for this astonishing popularity:
- Less coding required. AI has a lot of algorithms. Testing all of them can turn into hard work. That's where Python usually comes in handy: the language's "check as you code" methodology eases the process of testing (see the short sketch after this list).
- Built-in libraries. They proved to be convenient for AI developers. To name but a few, you can use Pybrain for machine learning, Numpy for scientific computation, and Scipy for advanced computing.
- Flexibility and independence. A good thing about Python is that you can get your project running on different OS with but a few changes in the code. That saves time as you don’t have to test the algorithm on every OS separately.
- Support. The Python community is among the reasons why you cannot pass the language by when there's an AI project at stake. The community of Python users is very active; you can always find a more experienced developer to help you with your problem.
- Popularity. Millennials love the language. Its popularity grows day by day, and that is only likely to continue. There are a lot of courses, open source projects, and comprehensive articles that'll help you master Python in no time.
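As a small illustration of the "less coding required" point, here is a minimal sketch with made-up numbers, using only NumPy from the scientific libraries mentioned above: fitting and applying a simple predictive model takes just a few lines.

import numpy as np

# Hypothetical measurements: hours studied vs. exam score.
hours = np.array([1, 2, 3, 4, 5, 6], dtype=float)
score = np.array([52, 55, 61, 64, 70, 74], dtype=float)

# One library call fits a least-squares line; one more line makes a prediction.
slope, intercept = np.polyfit(hours, score, deg=1)
print(f"predicted score after 7 hours: {slope * 7 + intercept:.1f}")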
2. C++
C++ is a solid choice for an AI developer. To start with, Google used the language to create TensorFlow libraries. Though most developers have already moved on to using “easier” programming languages such as Python, still, a lot of basic AI functions are built with C++.
Also, it’s quite an elegant choice for high-level AI heuristics.
To use C++ to develop AI algorithms, you have to be a truly experienced developer with no deadline pressure on you. Otherwise, you might have a tough time trying to figure out complicated code a few hours before the project's due date.
3. Lisp:
A reason for Lisp's huge AI momentum is its power of computing with symbolic expressions. One can argue that Lisp is a bit old-fashioned, and that might be true. These days, developers mostly use younger dynamic languages such as Ruby and Python. Still, Lisp has its own powerful features. Let's name but a few of those:
- Lisp allows you to write self-modifying code rather easily;
- You can extend the language in the way that best fits a particular domain, thus creating a domain-specific language;
- A solid choice for recursive algorithms.
Should you take an in-depth course to learn Lisp? Not necessarily. However, knowing the basic principles is pretty much enough for AI developers.
4. Java:
Being one of the most popular programming languages in overall development, Java has also won its fans' hearts as a fit and elegant language for AI development.
Why? I asked some developers I know who use Java about it. Here are the reasons they gave to back up their fondness for the language:
- It has impressive flexibility for data security. With GDPR regulation and overall concerns about data protection, being able to ensure a client's data security is crucial. Java provides the most flexibility in creating different client environments, therefore protecting one's personal information.
- It is loved for its robust ecosystem. A lot of open source projects are written using Java. The language accelerates development a great deal compared to its alternatives.
- Low cost of streamlining.
- Impressive community. There are a lot of experienced developers and experts in Java who are open to sharing their knowledge and expertise. Also, there's a ton of open source projects and libraries you can use to learn AI development.
5. Prolog:
Prolog is a less popular and mainstream choice than the previous ones we've been discussing.
However, you shouldn’t dismiss it simply because it doesn’t have a multi-million community of fans.
Prolog still comes in handy for AI developers. Most of those who start using it acknowledge that it is, without a doubt, a convenient language for expressing relationships and goals.
- You can declare facts and create rules based on those facts. That allows a developer to answer and reason different queries.
- Prolog is a straightforward language that fits a problem-solution kind of development.
- Another piece of good news is that Prolog supports backtracking, so the overall algorithm management will be easier.
6. SmallTalk:
Similar to Lisp, SmallTalk was widely used in the 70s. Now it has lost momentum in favor of Python, Java, and C++. However, SmallTalk libraries for AI are currently appearing at a rapid pace. Obviously, there aren't as many as there are for Python and Java.
Yet, highly underestimated as for now, the language keeps evolving through its newly developed project Pharo. Here are but a few innovations it made possible:
- Oz — allows an image to manipulate another one;
- Moose — an impressive tool for code analysis and visualization;
- Amber (with Pharo as the reference language) is a tool for front-end programming.
7. R:
R is a must-learn language for you if any of your future projects make use of data and require data science. Though speed might not be R's most prominent advantage, it does almost every AI-related task you can think of:
- creating clean datasets;
- splitting a big data set into training sets and test sets;
- using data analysis to create predictions for new data;
- porting easily to Big Data environments.
Sometimes R does things a bit differently from the traditional way. However, among its advantages, one has to name the small amount of code required and the interactive working environment.
8. Haskell:
Haskell is quite a good programming language to develop AI. It is a fit for writing neural networks, graphical models, genetic programming, etc. Here are some features that make the language a good choice for AI developers.
- Haskell is great at creating domain specific languages.
- Using Haskell, you can separate pure actions from the I/O. That enables developers to write algorithms like alpha/beta search.
- There are a few very good libraries — take hmatrix for an example.
This was my list of programming languages that come in handy for AI developers. What are your favorites? Write them down in comments and explain why a particular language is your favorite one.
[End of Article]
___________________________________________________________________________
Artificial intelligence researchers have developed several specialized programming languages for artificial intelligence:
Languages:
- AIML (meaning "Artificial Intelligence Markup Language") is an XML dialect for use with A.L.I.C.E.-type chatterbots.
- IPL was the first language developed for artificial intelligence. It includes features intended to support programs that could perform general problem solving, such as lists, associations, schemas (frames), dynamic memory allocation, data types, recursion, associative retrieval, functions as arguments, generators (streams), and cooperative multitasking.
- Lisp is a practical mathematical notation for computer programs based on lambda calculus. Linked lists are one of the Lisp language's major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific programming languages embedded in Lisp. There are many dialects of Lisp in use today, among which are Common Lisp, Scheme, and Clojure.
- Smalltalk has been used extensively for simulations, neural networks, machine learning and genetic algorithms. It implements the purest and most elegant form of object-oriented programming using message passing.
- Prolog is a declarative language where programs are expressed in terms of relations, and execution occurs by running queries over these relations. Prolog is particularly useful for symbolic reasoning, database and language parsing applications. Prolog is widely used in AI today.
- STRIPS is a language for expressing automated planning problem instances. It expresses an initial state, the goal states, and a set of actions. For each action preconditions (what must be established before the action is performed) and postconditions (what is established after the action is performed) are specified.
- Planner is a hybrid between procedural and logical languages. It gives a procedural interpretation to logical sentences where implications are interpreted with pattern-directed inference.
- POP-11 is a reflective, incrementally compiled programming language with many of the features of an interpreted language. It is the core language of the Poplog programming environment, developed originally by the University of Sussex and more recently in the School of Computer Science at the University of Birmingham, which hosts the Poplog website. It is often used to introduce symbolic programming techniques to programmers of more conventional languages like Pascal, who find POP syntax more familiar than that of Lisp. One of POP-11's features is that it supports first-class functions.
- R is widely used in new-style artificial intelligence, involving statistical computations, numerical analysis, the use of Bayesian inference, neural networks and in general Machine Learning. In domains like finance, biology, sociology or medicine it is considered as one of the main standard languages. It offers several paradigms of programming like vectorial computation, functional programming and object-oriented programming. It supports deep learning libraries like MXNet, Keras or TensorFlow.
- Python is widely used for artificial intelligence, with packages for several applications including General AI, Machine Learning, Natural Language Processing and Neural Networks.
- Haskell is also a very good programming language for AI. Lazy evaluation and the list and LogicT monads make it easy to express non-deterministic algorithms, which is often the case. Infinite data structures are great for search trees. The language's features enable a compositional way of expressing the algorithms. The only drawback is that working with graphs is a bit harder at first because of purity.
- Wolfram Language includes a wide range of integrated machine learning capabilities, from highly automated functions like Predict and Classify to functions based on specific methods and diagnostics. The functions work on many types of data, including numerical, categorical, time series, textual, and image.
- C++ (2011 onwards)
- MATLAB
- Perl
- Julia (programming language), e.g. for machine learning, using native or non-native libraries.
- List of constraint programming languages
- List of computer algebra systems
- List of logic programming languages
- List of knowledge representation languages
- Fifth-generation programming language
This glossary of artificial intelligence terms is about artificial intelligence, its sub-disciplines, and related fields.
- Contents: Itemized by starting letters "A" through "Z"
- See also:
AI for Good
- YouTube Video: HIGHLIGHTS: AI FOR GOOD Global Summit 2018 - DAY
- YouTube Video: HIGHLIGHTS: AI FOR GOOD Global Summit 2018 - DAY 2
- YouTube Video: HIGHLIGHTS: AI FOR GOOD Global Summit 2018 - DAY 3
AI for Good is a United Nations platform, centered around annual Global Summits, that fosters the dialogue on the beneficial use of Artificial Intelligence, by developing concrete projects.
The impetus for organizing action-oriented global summits came from existing discourse in artificial intelligence (AI) research being dominated by research streams such as the Netflix Prize (improving the movie recommendation algorithm).
The AI for Good series aims to bring forward artificial intelligence research topics that contribute towards solving global problems, in particular through the Sustainable Development Goals, while at the same time avoiding the typical UN-style conference where results are generally more abstract. The fourth AI for Good Global Summit will be held from 4–8 May 2020 in Geneva, Switzerland.
Click on any of the following blue hyperlinks for more about "AI for Good" initiative:
Artificial Intelligence in Agriculture: Precision and Digital Applications
TOP: Climate-Smart Precision Agriculture
BOTTOM: Digital Technologies in Agriculture: adoption, value added and overview
- YouTube Video: The Future of Farming
- YouTube Video: Artificial intelligence could revolutionize farming industry
- YouTube Video: The High-Tech Vertical Farmer
TOP: Climate-Smart Precision Agriculture
BOTTOM: Digital Technologies in Agriculture: adoption, value added and overview
AI in Agriculture:
In agriculture, new AI advancements show improvements in crop yield and support the research and development of growing crops. New artificial intelligence systems can now predict the time it takes for a crop such as a tomato to ripen and be ready for picking, thus increasing the efficiency of farming. These advances extend to crop and soil monitoring, agricultural robots, and predictive analytics.
Crop and soil monitoring uses new algorithms and data collected in the field to manage and track the health of crops, making farming easier and more sustainable for farmers.
Further specializations of AI in agriculture include greenhouse automation, simulation, modeling, and optimization techniques.
Due to population growth and rising demand for food, agricultural yield will need to increase by at least 70% to sustain the new demand. More and more of the public perceives that the adoption of these new techniques and the use of artificial intelligence will help reach that goal.
___________________________________________________________________________
Precision agriculture (PA), satellite farming or site specific crop management (SSCM) is a farming management concept based on observing, measuring and responding to inter and intra-field variability in crops. The goal of precision agriculture research is to define a decision support system (DSS) for whole farm management with the goal of optimizing returns on inputs while preserving resources.
Among these many approaches is a phytogeomorphological approach which ties multi-year crop growth stability/characteristics to topological terrain attributes. The interest in the phytogeomorphological approach stems from the fact that the geomorphology component typically dictates the hydrology of the farm field.
The practice of precision agriculture has been enabled by the advent of GPS and GNSS. The farmer's and/or researcher's ability to locate their precise position in a field allows for the creation of maps of the spatial variability of as many variables as can be measured (e.g. crop yield, terrain features/topography, organic matter content, moisture levels, nitrogen levels, pH, EC, Mg, K, and others).
Similar data is collected by sensor arrays mounted on GPS-equipped combine harvesters. These arrays consist of real-time sensors that measure everything from chlorophyll levels to plant water status, along with multispectral imagery. This data is used in conjunction with satellite imagery by variable rate technology (VRT) including seeders, sprayers, etc. to optimally distribute resources.
However, recent technological advances have enabled the use of real-time sensors directly in soil, which can wirelessly transmit data without the need of human presence.
Precision agriculture has also been enabled by unmanned aerial vehicles like the DJI Phantom which are relatively inexpensive and can be operated by novice pilots.
These agricultural drones can be equipped with hyperspectral or RGB cameras to capture many images of a field that can be processed using photogrammetric methods to create orthophotos and NDVI maps.
These drones are capable of capturing imagery for a variety of purposes and with several metrics such as elevation and Vegetative Index (with NDVI as an example). This imagery is then turned into maps which can be used to optimize crop inputs such as water, fertilizer or chemicals such as herbicides and growth regulators through variable rate applications.
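To make the NDVI step concrete, here is a minimal sketch (synthetic reflectance values, not real imagery or any particular vendor's pipeline) of how an NDVI map and a crude prescription map can be derived from red and near-infrared bands:

import numpy as np

# Hypothetical reflectance rasters for one field (values scaled to 0..1),
# standing in for bands extracted from drone or satellite imagery.
rng = np.random.default_rng(42)
red = rng.uniform(0.05, 0.40, size=(100, 100))   # red band
nir = rng.uniform(0.30, 0.80, size=(100, 100))   # near-infrared band

# NDVI = (NIR - Red) / (NIR + Red); healthier vegetation tends toward higher values.
ndvi = (nir - red) / (nir + red)

# A crude prescription map: flag low-NDVI pixels for follow-up
# (extra irrigation, fertilizer, or scouting) under a variable-rate plan.
needs_attention = ndvi < 0.3
print(f"mean NDVI: {ndvi.mean():.2f}")
print(f"pixels flagged: {needs_attention.sum()} of {ndvi.size}")

In a real workflow the two bands would come from orthorectified imagery rather than random numbers, but the arithmetic that turns imagery into a map of crop vigor is exactly this simple.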
Click on any of the following blue hyperlinks for more about AI in Precision Agriculture:
- History
- Overview
- Tools
- Usage around the world
- Economic and environmental impacts
- Emerging technologies
- Conferences
- See also:
- Agricultural drones
- Geostatistics
- Integrated farming
- Integrated pest management
- Landsat program
- Nutrient budgeting
- Nutrient management
- Phytobiome
- Precision beekeeping
- Precision livestock farming
- Precision viticulture
- Satellite crop monitoring
- SPOT (satellites)
- Variable rate technology
- Precision agriculture, IBM
- Antares AgroSense
Digital agriculture refers to tools that digitally collect, store, analyze, and share electronic data and/or information along the agricultural value chain. Other definitions, such as those from the United Nations Project Breakthrough, Cornell University, and Purdue University, also emphasize the role of digital technology in the optimization of food systems.
Sometimes known as “smart farming” or “e-agriculture,” digital agriculture includes (but is not limited to) precision agriculture (above). Unlike precision agriculture, digital agriculture impacts the entire agri-food value chain — before, during, and after on-farm production.
Therefore, on-farm technologies, like yield mapping, GPS guidance systems, and variable-rate application, fall under the domain of precision agriculture and digital agriculture.
On the other hand, digital technologies involved in e-commerce platforms, e-extension services, warehouse receipt systems, blockchain-enabled food traceability systems, tractor rental apps, etc. fall under the umbrella of digital agriculture but not precision agriculture.
Click on any of the following blue hyperlinks for more about Digital Agriculture:
- Historical context
- Technology
- Effects of digital agriculture adoption
- Enabling environment
- Sustainable Development Goals
Artificial Intelligence in Space Operations including the Air Operations Center
- YouTube Video about the role of Artificial Intelligence in Space Operations
- YouTube Video: The incredible inventions of intuitive AI | Maurice Conti
- YouTube Video: NATS is using Artificial Intelligence to cut delays at Heathrow Airport
The Air Operations Division (AOD) uses AI for rule-based expert systems. The AOD uses artificial intelligence for surrogate operators in combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries.
The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators are using artificial intelligence in order to process the data taken from simulated flights. Other than simulated flying, there is also simulated aircraft warfare.
The computers are able to come up with the best success scenarios in these situations. The computers can also create strategies based on the placement, size, speed and strength of the forces and counter forces. Pilots may be given assistance in the air during combat by computers.
The artificial intelligent programs can sort the information and provide the pilot with the best possible maneuvers, not to mention getting rid of certain maneuvers that would be impossible for a human being to perform.
Multiple aircraft are needed to get good approximations for some calculations so computer simulated pilots are used to gather data. These computer simulated pilots are also used to train future air traffic controllers.
The system used by the AOD in order to measure performance was the Interactive Fault Diagnosis and Isolation System, or IFDIS. It is a rule based expert system put together by collecting information from TF-30 documents and the expert advice from mechanics that work on the TF-30. This system was designed to be used for the development of the TF-30 for the RAAF F-111C.
The performance system was also used to replace specialized workers. The system allowed the regular workers to communicate with the system and avoid mistakes, miscalculations, or having to speak to one of the specialized workers.
The AOD also uses artificial intelligence in speech recognition software. The air traffic controllers give directions to the artificial pilots, and the AOD wants the pilots to respond to the ATCs with simple responses. The programs that incorporate the speech software must be trained, which means they use neural networks.
The program used, the Verbex 7000, is still a very early program that has plenty of room for improvement. The improvements are imperative because ATCs use very specific dialog and the software needs to be able to communicate correctly and promptly every time.
The Artificial Intelligence supported Design of Aircraft, or AIDA, is used to help designers in the process of creating conceptual designs of aircraft. This program allows the designers to focus more on the design itself and less on the design process. The software also allows the user to focus less on the software tools.
The AIDA uses rule-based systems to compute its data. Although simple, the program is proving effective.
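The phrase "rule-based system" can be made concrete with a short sketch. The following is a generic, hypothetical illustration of forward chaining (it is not the actual IFDIS or AIDA code): known facts are matched against condition/conclusion rules until no new conclusions can be drawn.

# A minimal forward-chaining rule engine; the facts and rules are invented
# for illustration and are not taken from IFDIS or AIDA.
facts = {"engine_running", "low_oil_pressure"}

rules = [
    ({"engine_running", "low_oil_pressure"}, "risk_of_engine_damage"),
    ({"risk_of_engine_damage"}, "recommend_shutdown_and_inspection"),
]

derived_new_fact = True
while derived_new_fact:                 # keep firing rules until nothing new is derived
    derived_new_fact = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived_new_fact = True

print(sorted(facts))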
In 2003, NASA's Dryden Flight Research Center, and many other companies, created software that could enable a damaged aircraft to continue flight until a safe landing zone can be reached. The software compensates for all the damaged components by relying on the undamaged components. The neural network used in the software proved to be effective and marked a triumph for artificial intelligence.
The Integrated Vehicle Health Management system, also used by NASA, on board an aircraft must process and interpret data taken from the various sensors on the aircraft. The system needs to be able to determine the structural integrity of the aircraft. The system also needs to implement protocols in case of any damage taken the vehicle.
Haitham Baomar and Peter Bentley are leading a team from University College London to develop an artificial-intelligence-based Intelligent Autopilot System (IAS) designed to teach an autopilot system to behave like a highly experienced pilot faced with an emergency situation such as severe weather, turbulence, or system failure.
Educating the autopilot relies on supervised machine learning, “which treats the young autopilot as a human apprentice going to a flying school”. The autopilot records the actions of the human pilot, generating learning models using artificial neural networks. The autopilot is then given full control and observed by the pilot as it executes the training exercise.
The Intelligent Autopilot System combines the principles of apprenticeship learning and behavioural cloning, whereby the autopilot observes the low-level actions required to maneuver the airplane and the high-level strategy used to apply those actions. The IAS implementation employs three phases: pilot data collection, training, and autonomous control.
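As an illustration of the behavioural-cloning idea behind those three phases (not the authors' actual code), the sketch below trains a small neural network to map recorded flight-state features to pilot control outputs. The feature names, the synthetic demonstration data, and the use of scikit-learn are assumptions made purely for the example.

```python
# Minimal behavioural-cloning sketch (illustrative only, not the real IAS).
# Phase 1: collect (state, action) pairs from a human pilot.
# Phase 2: train a model to imitate the pilot.
# Phase 3: let the trained model produce control outputs autonomously.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical recorded demonstration: state = [airspeed, altitude, pitch, roll],
# action = [elevator, aileron, throttle] chosen by the human pilot.
states = rng.uniform(low=[60, 500, -10, -20], high=[250, 10000, 10, 20], size=(500, 4))
actions = np.column_stack([
    -0.01 * states[:, 2],                 # elevator roughly opposes pitch
    -0.02 * states[:, 3],                 # aileron roughly opposes roll
    0.5 + 0.001 * (150 - states[:, 0]),   # throttle tracks a target airspeed
])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(states, actions)

# Autonomous control phase: the cloned policy maps a new state to controls.
new_state = np.array([[120.0, 3000.0, 4.0, -6.0]])
print(model.predict(new_state))   # [elevator, aileron, throttle]
```

In the real system the demonstrations would come from a pilot flying a simulator rather than from a synthetic formula, but the pipeline shape is the same: record, train, then hand over control.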
Baomar and Bentley's goal is to create a more autonomous autopilot to assist pilots in responding to emergency situations.
___________________________________________________________________________
Air and Space Operations Center
An Air and Space Operations Center (AOC) is a type of command center used by the United States Air Force (USAF). It is the senior agency of the Air Force component commander to provide command and control of air and space operations.
The United States Air Force employs two kinds of AOCs: regional AOCs utilizing the AN/USQ-163 Falconer weapon system that support geographic combatant commanders, and functional AOCs that support functional combatant commanders.
When there is more than one U.S. military service working in an AOC, such as when naval aviation from the U.S. Navy (USN) and/or the U.S. Marine Corps (USMC) is incorporated, it is called a Joint Air and Space Operations Center (JAOC). In cases of allied or coalition (multinational) operations in tandem with USAF or Joint air and space operations, the AOC is called a Combined Air and Space Operations Center (CAOC).
An AOC is the senior element of the Theater Air Control System (TACS). The Joint Force Commander (JFC) assigns a Joint Forces Air Component Commander (JFACC) to lead the AOC weapon system. If allied or coalition forces are part of the operation, the JFC and JFACC will be redesignated as the CFC and CFACC, respectively.
Quite often the Commander, Air Force Forces (COMAFFOR) is assigned the JFACC/CFACC position for planning and executing theater-wide air and space forces. If another service also provides a significant share of air and space forces, the Deputy JFACC/CFACC will typically be a senior flag officer from that service.
For example, during Operation Enduring Freedom and Operation Iraqi Freedom, when USAF combat air forces (CAF) and mobility air forces (MAF) integrated extensive USN and USMC sea-based and land-based aviation and Royal Air Force (RAF) and Royal Navy / Fleet Air Arm aviation, the CFACC was an aeronautically rated USAF lieutenant general, assisted by an aeronautically designated USN rear admiral (upper half) as the Deputy CFACC, and an aeronautically rated RAF air commodore as the Senior British Officer (Air).
Click on any of the following blue hyperlinks for more about Air and Space Centers:
- Divisions
- List of Air and Space Operations Centers
- Inactive AOCs
- Training/Experimentation
- AOC-equipping Units
- NATO CAOC
- See also:
Outline of Artificial Intelligence
- YouTube Video: How AI is changing Business: A look at the limitless potential of AI | ANIRUDH KALA | TEDxIITBHU
- YouTube Video: What happens when our computers get smarter than we are? | Nick Bostrom
- YouTube Video: Artificial Super intelligence - How close are we?
UNDERSTANDING THREE TYPES OF ARTIFICIAL INTELLIGENCE
In this era of technology, artificial intelligence is making inroads across industries and domains, performing many tasks more effectively than humans. As in science-fiction movies, some predict a day will come when the world is dominated by robots.
Artificial intelligence is surrounded by jargon: narrow, general, and super artificial intelligence; machine learning, deep learning, supervised and unsupervised learning; neural networks; and a whole lot of other confusing terms. In this article, we will talk about artificial intelligence and its three main categories.
1) Understanding Artificial Intelligence:
The term AI was coined by John McCarthy, an American computer scientist, in 1956. Artificial intelligence is the simulation of human intelligence by machines, mainly computer systems. The processes involved mainly include learning, reasoning, and self-correction.
With the increase in the speed, size, and diversity of data, AI has gained dominance in businesses globally. AI can perform several tasks, such as recognizing patterns in data, more efficiently than a human, giving businesses more insight.
2) Types of Artificial Intelligence:
Narrow Artificial Intelligence: Weak AI, also known as narrow AI, is an AI system that is developed and trained for a particular task. Narrow AI is programmed to perform a single task and works within a limited context. It is very good at routine physical and cognitive jobs.
For example, narrow AI can identify patterns and correlations in data more efficiently than humans. Sales predictions, purchase suggestions, and weather forecasts are implementations of narrow AI.
Even Google’s translation engine is a form of narrow AI. In the automotive industry, self-driving cars are the result of the coordination of several narrow AIs. But narrow AI cannot expand and take on tasks beyond its field; for example, an AI engine built for image recognition cannot perform sales recommendations.
Artificial General Intelligence: Artificial General Intelligence (AGI) is an AI system with generalized cognitive abilities that can find solutions to unfamiliar tasks it comes across. It is popularly termed strong AI, meaning it can understand and reason about its environment as a human would. It is also called human-level AI, although human-level artificial intelligence is hard to define. Human intelligence may not compute as fast as computers, but humans can think abstractly, plan, and solve problems without going into details. More importantly, humans can innovate and bring up thoughts and ideas that have no trail or precedent.
Artificial Super Intelligence: Artificial Super Intelligence (ASI) refers to the point where computers and machines surpass humans and are able to mimic human thought. ASI describes a situation in which the cognitive ability of machines is superior to that of humans.
In the past, there have been developments like IBM’s Watson supercomputer beating human players at Jeopardy! and assistive devices like Siri engaging in conversation with people, but there is still no machine that can match the depth of knowledge and cognitive ability of a fully developed human.
ASI has produced two schools of thought: on one side, great scientists like Stephen Hawking saw the full development of AI as a danger to humanity, whereas others, such as Demis Hassabis, co-founder and CEO of DeepMind, believe that the smarter AI becomes, the better the world will be, with AI serving as a helping hand to mankind.
Conclusion:
In today’s technological age, AI has given us machines that outperform human intelligence at many specific tasks. It is difficult to predict how long it will take AI to achieve the cognitive ability and knowledge depth of a human being, but a day may come when AI surpasses the brightest human mind on earth.
[End of Article]
___________________________________________________________________________
Outline of Artificial Intelligence:
The following outline is provided as an overview of and topical guide to artificial intelligence:
Artificial intelligence (AI) – intelligence exhibited by machines or software. It is also the name of the scientific field which studies how to create computers and computer software that are capable of intelligent behavior.
Click on any of the following blue hyperlinks for more about the Outline of Artificial Intelligence:
- What type of thing is artificial intelligence?
- Types of artificial intelligence
- Branches of artificial intelligence
- Further AI design elements
- AI projects
- AI applications
- AI development
- Psychology and AI
- History of artificial intelligence
- AI hazards and safety
- AI and the future
- Philosophy of artificial intelligence
- Artificial intelligence in fiction
- AI community
- See also:
- A look at the re-emergence of A.I. and why the technology is poised to succeed given today's environment, ComputerWorld, 2015 September 14
- AI at Curlie
- The Association for the Advancement of Artificial Intelligence
- Freeview Video 'Machines with Minds' by the Vega Science Trust and the BBC/OU
- John McCarthy's frequently asked questions about AI
- Jonathan Edwards looks at AI (BBC audio)
- Ray Kurzweil's website dedicated to AI including prediction of future development in AI
- Thomason, Richmond. "Logic and Artificial Intelligence". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Vehicular Automation and Automated Driving Systems
- YouTube Video: How a Driverless Car Sees the Road
- YouTube Video: Elon Musk on Tesla's Auto Pilot and Legal Liability
- YouTube Video: How Tesla's Self-Driving Autopilot Actually Works | WIRED
The path to 5G: Paving the road to tomorrow’s autonomous vehicles:
According to World Health Organization figures, road traffic injuries are the leading cause of death among young people aged 15–29 years. More than 1.2 million people die each year worldwide as a result of traffic crashes.
Vehicle-to-Everything (V2X) technologies, starting with 802.11p and evolving to Cellular V2X (C-V2X), can help bring safer roads, more efficient travel, reduced air pollution, and better driving experiences.
V2X will serve as the foundation for the safe, connected vehicle of the future, giving vehicles the ability to "talk" to each other, pedestrians, roadway infrastructure, and the cloud.
It’s no wonder that the MIT Technology Review put V2X on its 2015 10 Breakthrough Technologies list, stating: “Car-to-car communication should also have a bigger impact than the advanced vehicle automation technologies that have been more widely heralded.”
V2X is a key technology for enabling fully autonomous transportation infrastructure. While advancements in radar, LiDAR (Light Detection and Ranging), and camera systems are encouraging and bring autonomous driving a step closer to reality, these sensors are limited by their line of sight. V2X complements their capabilities by providing 360-degree non-line-of-sight awareness, extending a vehicle’s ability to “see” further down the road – even at blind intersections or in bad weather conditions.
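As an illustration of that complementarity, the sketch below merges onboard line-of-sight detections with hazards reported over V2X. The object names, distances, and message layout are assumptions for the example and are not part of any V2X standard.

```python
# Illustrative fusion of line-of-sight sensor detections with V2X messages:
# V2X can contribute hazards the onboard sensors cannot see (assumed data format).
SENSOR_RANGE_M = 80.0   # assumed effective line-of-sight range

onboard_detections = [   # from radar/LiDAR/camera: (object_id, distance_m)
    ("car_12", 35.0),
    ("cyclist_3", 62.0),
]
v2x_messages = [         # received over V2X: (object_id, distance_m)
    ("car_12", 34.0),           # also seen by onboard sensors
    ("stopped_truck_7", 140.0), # around a blind corner, beyond sensor range
    ("pedestrian_9", 95.0),     # occluded by a building
]

def fuse(onboard, v2x):
    """Merge the two sources, keeping every unique object once."""
    awareness = {obj: dist for obj, dist in onboard}
    for obj, dist in v2x:
        awareness.setdefault(obj, dist)   # add only objects not already tracked
    return awareness

fused = fuse(onboard_detections, v2x_messages)
beyond_los = {o: d for o, d in fused.items() if d > SENSOR_RANGE_M}
print("all tracked objects:", fused)
print("only visible thanks to V2X:", beyond_los)
```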
So, how long do we have to wait for V2X to become a reality? Actually, V2X technology is here today. Wi-Fi-based 802.11p has established the foundation for latency-critical V2X communications.
To improve road safety for future light vehicles in the United States, the National Highway Traffic Safety Administration is expected to begin rulemaking for Dedicated Short Range Communications (DSRC) this year.
Beyond that, tomorrow’s autonomous vehicles require continued technology evolution to accommodate ever-expanding safety requirements and use cases. The path to 5G will deliver this evolution starting with the C-V2X part of 3GPP release 14 specifications, which is expected to be completed by the end of this year.
C-V2X will define two new transmission modes that work together to enable a broad range of automotive use cases:
- The first enables direct communication between vehicles and each other, pedestrians, and road infrastructure. We are building on LTE Direct device-to-device communications, evolving the technology with innovations to exchange real-time information between vehicles traveling at fast speeds, in high-density traffic, and even outside of mobile network coverage areas.
- The second transmission mode uses the ubiquitous coverage of existing LTE networks, so you can be alerted to an accident a few miles ahead, guided to an open parking space, and more. To enable this mode, we are optimizing LTE Broadcast technology for vehicular communications.
To accelerate technology evolution, Qualcomm is actively driving the C-V2X work in 3GPP, building on our leadership in LTE Direct and LTE Broadcast to pioneer C-V2X technologies.
The difference between a vehicle collision and a near miss comes down to milliseconds. With approximately twice the range of DSRC, C-V2X can provide the critical seconds of reaction time needed to avoid an accident. Beyond safety, C-V2X also enables a broad range of use cases – from better situational awareness, to enhanced traffic management, and connected cloud services.
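As a rough back-of-the-envelope illustration of why the extra range translates into reaction time (the ranges and speeds below are assumed for the example, not taken from the article), the warning time for a hazard is simply the detection range divided by the closing speed:

```python
# Back-of-the-envelope warning-time estimate (illustrative numbers only).
def warning_time(range_m: float, closing_speed_mps: float) -> float:
    """Seconds of warning a driver gets for a hazard first detected at range_m."""
    return range_m / closing_speed_mps

# Two cars approaching head-on at 25 m/s each -> 50 m/s closing speed (assumed).
closing = 50.0
for name, rng in [("DSRC-like range", 300.0), ("~2x range", 600.0)]:
    print(f"{name}: {warning_time(rng, closing):.1f} s of warning")
# Doubling the detection range doubles the warning time: 6 s becomes 12 s here.
```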
C-V2X will provide a unified connectivity platform for safer “vehicles of tomorrow.” Building upon C-V2X, 5G will bring even more possibilities for the connected vehicle. The extreme throughput, low latency, and enhanced reliability of 5G will allow vehicles to share rich, real-time data, supporting fully autonomous driving experiences, for example:
- Cooperative collision avoidance: For self-driving vehicles, individual actions by a vehicle to avoid collisions may create hazardous driving conditions for other vehicles. Cooperative collision avoidance allows all involved vehicles to coordinate their actions to avoid collisions in a cooperative manner.
- High-density platooning: In a self-driving environment, vehicles communicate with each other to create closely spaced multi-vehicle chains on a highway. High-density platooning will further reduce the current distance between vehicles down to one meter, resulting in better traffic efficiency, fuel savings, and safer roads.
- See through: In situations where small vehicles are behind larger vehicles (e.g., trucks), the smaller vehicles cannot "see" a pedestrian crossing the road in front of the larger vehicle. In such scenarios, the truck's camera can detect the situation and share the image of the pedestrian with the vehicle behind it, which alerts the driver and shows the pedestrian in virtual reality on the windshield display.
Beyond pioneering C-V2X and helping define the path to 5G, we are delivering new levels of on-device intelligence and integration in the connected vehicle of tomorrow. Our innovations in cognitive technologies, such as always-on sensing, computer vision, and machine learning, will help make our vision of safer, more autonomous vehicles a reality.
To learn more about how we are pioneering Cellular V2X, join us for our upcoming webinar or visit our Cellular V2X web page.
Learn more about our automotive solutions here.
[End of Article]
___________________________________________________________________________
Vehicular automation
Vehicular automation involves the use of mechatronics, artificial intelligence, and multi-agent systems to assist a vehicle's operator. These features and the vehicles employing them may be labeled as intelligent or smart.
A vehicle using automation for difficult tasks, especially navigation, may be referred to as semi-autonomous. A vehicle relying solely on automation is consequently referred to as robotic or autonomous. After the invention of the integrated circuit, the sophistication of automation technology increased. Manufacturers and researchers subsequently added a variety of automated functions to automobiles and other vehicles.
Autonomy levels:
Autonomy in vehicles is often categorized in six levels, according to a system developed by the Society of Automotive Engineers (SAE):
- Level 0: No automation.
- Level 1: Driver assistance - The vehicle can control either steering or speed autonomously in specific circumstances to assist the driver.
- Level 2: Partial automation - The vehicle can control both steering and speed autonomously in specific circumstances to assist the driver.
- Level 3: Conditional automation - The vehicle can control both steering and speed autonomously under normal environmental conditions, but requires driver oversight.
- Level 4: High automation - The vehicle can complete a journey autonomously under normal environmental conditions, not requiring driver oversight.
- Level 5: Full autonomy - The vehicle can complete a journey autonomously in any environmental conditions.
Ground vehicles:
Further information: Unmanned ground vehicles
Ground vehicles employing automation and teleoperation include shipyard gantries, mining trucks, bomb-disposal robots, robotic insects, and driverless tractors.
Many autonomous and semi-autonomous ground vehicles are being made for the purpose of transporting passengers. One example is the free-ranging on grid (FROG) technology, which consists of autonomous vehicles, a magnetic track, and a supervisory system.
The FROG system is deployed for industrial purposes in factory sites and has been in use since 1999 on the ParkShuttle, a PRT-style public transport system in the city of Capelle aan den IJssel that connects the Rivium business park with the neighboring city of Rotterdam (where the route terminates at the Kralingse Zoom metro station). The system experienced a crash in 2005 that proved to be caused by human error.
Applications for automation in ground vehicles include the following:
- Vehicle tracking systems, e.g. ESITrack, Lojack
- Rear-view alarm, to detect obstacles behind.
- Anti-lock braking system (ABS) (also Emergency Braking Assistance (EBA)), often coupled with Electronic brake force distribution (EBD), which prevents the brakes from locking and losing traction while braking. This shortens stopping distances in most cases and, more importantly, allows the driver to steer the vehicle while braking.
- Traction control system (TCS), which actuates brakes or reduces throttle to restore traction if driven wheels begin to spin.
- Four-wheel drive (AWD) with a center differential. Distributing power to all four wheels lessens the chances of wheel spin. It also suffers less from oversteer and understeer.
- Electronic Stability Control (ESC) (also known as Mercedes-Benz's proprietary Electronic Stability Program (ESP), Acceleration Slip Regulation (ASR), and Electronic differential lock (EDL)). Uses various sensors to intervene when the car senses a possible loss of control. The car's control unit can reduce power from the engine and even apply the brakes on individual wheels to prevent the car from understeering or oversteering.
- Dynamic steering response (DSR), which corrects the rate of the power steering system to adapt it to the vehicle's speed and road conditions.
Research is ongoing and prototypes of autonomous ground vehicles exist.
Cars:
See also: Autonomous car
Extensive automation for cars focuses on either introducing robotic cars or modifying modern car designs to be semi-autonomous.
Semi-autonomous designs could be implemented sooner as they rely less on technology that is still at the forefront of research. An example is the dual mode monorail. Groups such as RUF (Denmark) and TriTrack (USA) are working on projects consisting of specialized private cars that are driven manually on normal roads but also that dock onto a monorail/guideway along which they are driven autonomously.
As a method of automating cars without modifying them as extensively as a robotic car, automated highway systems (AHS) aim to construct lanes on highways that would be equipped with, for example, magnets to guide the vehicles. Automated vehicles have auto-brakes, referred to as an Auto Vehicles Braking System (AVBS). Highway computers would manage the traffic and direct the cars to avoid crashes.
The European Commission has established a smart car development program called the Intelligent Car Flagship Initiative. The goals of that program include:
There are plenty of further uses for automation in relation to cars. These include:
- Assured Clear Distance Ahead
- Adaptive headlamps
- Advanced Automatic Collision Notification, such as OnStar
- Intelligent Parking Assist System
- Automatic Parking
- Automotive night vision with pedestrian detection
- Blind spot monitoring
- Driver Monitoring System
- Robotic car or self-driving car, which may result in less-stressed "drivers", higher efficiency (the driver can do something else), increased safety and less pollution (e.g. via completely automated fuel control)
- Precrash system
- Safe speed governing
- Traffic sign recognition
- Following another car on a motorway – "enhanced" or "adaptive" cruise control, as used by Ford and Vauxhall
- Distance control assist – as developed by Nissan
- Dead man's switch – there is a move to introduce dead man's braking into automotive applications, primarily heavy vehicles, and there may also be a need to add penalty switches to cruise controls.
Singapore also announced a set of provisional national standards on January 31, 2019, to guide the autonomous vehicle industry. The standards, known as Technical Reference 68 (TR68), will promote the safe deployment of fully driverless vehicles in Singapore, according to a joint press release by Enterprise Singapore (ESG), Land Transport Authority (LTA), Standards Development Organisation and Singapore Standards Council (SSC).
Shared autonomous vehicles:
Following recent developments in autonomous cars, shared autonomous vehicles are now able to run in ordinary traffic without the need for embedded guidance markers. So far the focus has been on low speed, 20 miles per hour (32 km/h), with short, fixed routes for the "last mile" of journeys.
This means issues of collision avoidance and safety are significantly less challenging than those for automated cars, which seek to match the performance of conventional vehicles.
Aside from 2getthere ("ParkShuttle"), three companies - Ligier ("Easymile EZ10"), Navya ("ARMA" & "Autonom Cab") and RDM Group ("LUTZ Pathfinder") - are manufacturing and actively testing such vehicles. Two other companies have produced prototypes, Local Motors ("Olli") and the GATEway project.
Beside these efforts, Apple is reportedly developing an autonomous shuttle, based on a vehicle from an existing automaker, to transfer employees between its offices in Palo Alto and Infinite Loop, Cupertino. The project called "PAIL", after its destinations, was revealed in August 2017 when Apple announced it had abandoned development of autonomous cars.
Click on any of the following blue hyperlinks for more about Vehicular Automation:
___________________________________________________________________________
Automated driving system:
An automated driving system is a complex combination of various components that can be defined as a system in which perception, decision making, and operation of the automobile are performed by electronics and machinery instead of a human driver, and as the introduction of automation into road traffic.
This includes handling of the vehicle, destination, as well as awareness of surroundings.
While the automated system has control over the vehicle, it allows the human operator to leave all responsibilities to the system.
Overview:
The automated driving system is generally an integrated package of individual automated systems operating in concert. Automated driving implies that the driver has given up the ability to drive (i.e., all appropriate monitoring, agency, and action functions) to the vehicle automation system. Even though the driver may be alert and ready to take action at any moment, the automation system controls all functions.
Automated driving systems are often conditional, which implies that the automation system is capable of automated driving, but not for all conditions encountered in the course of normal operation. Therefore, a human driver is functionally required to initiate the automated driving system, and may or may not do so when driving conditions are within the capability of the system.
When the vehicle automation system has assumed all driving functions, the human is no longer driving the vehicle but continues to assume responsibility for the vehicle's performance as the vehicle operator.
The automated vehicle operator is not functionally required to actively monitor the vehicle's performance while the automation system is engaged, but the operator must be available to resume driving within several seconds of being prompted to do so, as the system has limited conditions of automation.
While the automated driving system is engaged, certain conditions may prevent real-time human input, but for no more than a few seconds. The operator is able to resume driving at any time subject to this short delay. When the operator has resumed all driving functions, he or she reassumes the status of the vehicle's driver.
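The paragraphs above describe a simple contract: the system drives until it requests a takeover, the operator has a short window to respond, and driving authority switches back once the operator resumes. Below is a minimal state-machine sketch of that contract; the state names, the four-second window, and the fallback behavior are assumptions for illustration, not taken from any standard.

```python
# Minimal takeover state machine for a conditional automated driving system
# (illustrative only; states and the 4-second window are assumed for the example).
from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()               # human is driving
    AUTOMATED = auto()            # system is driving
    TAKEOVER_REQUESTED = auto()   # system asked the human to resume

class DrivingSystem:
    TAKEOVER_WINDOW_S = 4.0       # "within several seconds of being prompted"

    def __init__(self):
        self.mode = Mode.MANUAL
        self.request_elapsed = 0.0

    def engage(self, conditions_ok: bool):
        if conditions_ok:
            self.mode = Mode.AUTOMATED

    def request_takeover(self):
        if self.mode is Mode.AUTOMATED:
            self.mode = Mode.TAKEOVER_REQUESTED
            self.request_elapsed = 0.0

    def tick(self, dt: float, driver_resumed: bool):
        if self.mode is Mode.TAKEOVER_REQUESTED:
            if driver_resumed:
                self.mode = Mode.MANUAL            # driver reassumes the driving role
            else:
                self.request_elapsed += dt
                if self.request_elapsed > self.TAKEOVER_WINDOW_S:
                    self.minimal_risk_maneuver()   # e.g., safety stop in the current lane

    def minimal_risk_maneuver(self):
        print("No driver response: performing safety stop in the current lane.")
        self.mode = Mode.MANUAL   # vehicle stopped; control returns to the human
```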
Success in the technology:
The automated driving system has been most successful in settings such as rural roads.
Rural road settings have lower amounts of traffic and less variation between driving abilities and types of drivers.
"The greatest challenge in the development of automated functions is still inner-city traffic, where an extremely wide range of road users must be considered from all directions."
This technology is progressing toward a more reliable way for automated driving cars to switch between auto mode and driver mode. Auto mode is the mode in which the automated system takes over, while driver mode is the mode in which the operator controls all functions of the car and takes responsibility for operating the vehicle (the automated driving system is not engaged).
This definition would include vehicle automation systems that may be available in the near term—such as traffic-jam assist, or full-range automated cruise control—if such systems would be designed such that the human operator can reasonably divert attention (monitoring) away from the performance of the vehicle while the automation system is engaged. This definition would also include automated platooning (such as conceptualized by the SARTRE project).
The SARTRE Project:
The SARTRE project's main goal is to create platooning, a train of automated cars, that provides comfort and allows the driver of the vehicle to arrive safely at a destination.
Along with the ability to ride in the train, drivers passing these platoons can join in with a simple activation of the automated driving system, which coordinates with a truck that leads the platoon. The SARTRE project takes what we know as a train system and mixes it with automated driving technology. This is intended to allow easier transportation through cities and ultimately help traffic flow through heavy automobile traffic.
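Platooning of this kind ultimately comes down to each follower regulating its gap to the vehicle ahead. Below is a minimal spacing-controller sketch; the gains, target gap, and simple vehicle model are assumptions for illustration, not the SARTRE design.

```python
# Minimal platoon-following sketch: each follower regulates its gap to the
# vehicle ahead with a simple proportional controller (illustrative only).
DESIRED_GAP_M = 8.0   # assumed target spacing
KP_GAP = 0.5          # gain on spacing error
KP_SPEED = 0.8        # gain on speed difference
DT = 0.1              # simulation step, seconds

def follower_accel(gap, own_speed, lead_speed):
    """Acceleration command from spacing error and relative speed."""
    return KP_GAP * (gap - DESIRED_GAP_M) + KP_SPEED * (lead_speed - own_speed)

# Lead truck at 25 m/s; follower starts 20 m behind at 22 m/s.
lead_pos, lead_speed = 0.0, 25.0
pos, speed = -20.0, 22.0
for _ in range(300):                      # simulate 30 seconds
    lead_pos += lead_speed * DT
    accel = follower_accel(lead_pos - pos, speed, lead_speed)
    speed += accel * DT
    pos += speed * DT
print(f"final gap: {lead_pos - pos:.1f} m, final speed: {speed:.1f} m/s")
# The gap settles near DESIRED_GAP_M and the speeds converge.
```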
SARTRE & modern day:
In some parts of the world, self-driving cars have been tested in real-life situations, such as in Pittsburgh. The self-driving Uber has been put to the test around the city, driving with different types of drivers as well as in different traffic situations. In addition to testing of automated cars, there has also been extensive testing of automated buses in California.
The lateral control of the automated buses uses magnetic markers, as in the platoon at San Diego, while the longitudinal control of the automated truck platoon uses millimeter wave radio and radar.
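To illustrate what marker-based lateral control involves (the gains, speed, and simple bicycle-model geometry below are assumptions for the example, not the San Diego implementation), a vehicle can steer back toward the marker line using its measured lateral offset:

```python
# Minimal lateral-control sketch: steer from the lateral offset reported by
# magnetic markers embedded in the lane (gains and geometry are assumed).
import math

KP = 0.4          # steering gain on lateral offset (rad per meter)
KD = 0.6          # damping gain on heading error (rad per rad)
DT = 0.05         # control period, seconds
SPEED = 10.0      # bus speed, m/s
WHEELBASE = 6.0   # meters

offset = 0.5      # start half a meter off the marker line
heading = 0.0     # heading error relative to the lane, radians

for _ in range(400):                       # simulate 20 seconds
    steer = -(KP * offset + KD * heading)  # steer back toward the markers
    steer = max(-0.5, min(0.5, steer))     # physical steering limit
    heading += (SPEED / WHEELBASE) * math.tan(steer) * DT
    offset += SPEED * math.sin(heading) * DT
print(f"offset after 20 s: {offset:.3f} m")  # decays toward zero
```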
Current examples in today's society include the Google car and Tesla's models. Tesla has redesigned automated driving; it has created car models that allow drivers to enter a destination and let the car take over. These are two modern-day examples of automated driving system cars.
Levels of automation according to SAE:
The U.S. Department of Transportation National Highway Traffic Safety Administration (NHTSA) provided a standard classification system in 2013 which defined five different levels of automation, ranging from level 0 (no automation) to level 4 (full automation).
Since then, the NHTSA updated their standards to be in line with the classification system defined by SAE International. SAE International defines six different levels of automation in their new standard of classification in document SAE J3016 that ranges from 0 (no automation) to 5 (full automation).
Level 0 – No automation:
The driver is in complete control of the vehicle and the system does not interfere with driving. Systems that may fall into this category are forward collision warning systems and lane departure warning systems.
Level 1 – Driver assistance:
The driver is in control of the vehicle, but the system can modify the speed and steering direction of the vehicle. Systems that may fall into this category are adaptive cruise control and lane keep assist.
Level 2 – Partial automation:
The driver must be able to control the vehicle if corrections are needed, but the driver is no longer in control of the speed and steering of the vehicle. Parking assistance is an example of a system that falls into this category along with Tesla's autopilot feature.
A system that falls into this category is the DISTRONIC PLUS system created by Mercedes-Benz. It is important to note the driver must not be distracted in Level 0 to Level 2 modes.
Level 3 – Conditional automation:
The system is in complete control of vehicle functions such as speed, steering, and monitoring the environment under specific conditions. Such conditions may be fulfilled on a fenced-off highway with no intersections, at limited driving speed, in a boxed-in driving situation, etc.
A human driver must be ready to intervene when requested by the system to do so. If the driver does not respond within a predefined time, or if a failure occurs in the system, the system needs to perform a safety stop in the ego lane (no lane change allowed). The driver is only allowed to be partially distracted, such as checking text messages, but taking a nap is not allowed.
Level 4 – High automation:
The system is in complete control of the vehicle and human presence is no longer needed, but its applications are limited to specific conditions. An example of a system being developed that falls into this category is the Waymo self-driving car service.
If the actual motoring condition exceeds the performance boundaries, the system does not have to ask the human to intervene but can choose to abort the trip in a safe manner, e.g. park the car.
Level 5 – Full automation:
The system is capable of the same functions as Level 4, but it can operate in all driving conditions. The human is equivalent to "cargo" in Level 5. Currently, there are no driving systems at this level.
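To summarize the walkthrough above, here is a small lookup-table sketch. The short descriptions are paraphrased from this section, and the data structure is an illustration rather than the SAE J3016 text.

```python
# Compact summary of the SAE J3016 levels as described above (illustrative).
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    number: int
    name: str
    who_drives: str          # who performs steering/speed control
    driver_attention: str    # what is expected of the human

SAE_LEVELS = [
    SaeLevel(0, "No automation",          "human",              "fully attentive"),
    SaeLevel(1, "Driver assistance",      "human + system",     "fully attentive"),
    SaeLevel(2, "Partial automation",     "system",             "fully attentive, ready to correct"),
    SaeLevel(3, "Conditional automation", "system (in domain)", "may be partially distracted; must take over on request"),
    SaeLevel(4, "High automation",        "system (in domain)", "not needed within the domain"),
    SaeLevel(5, "Full automation",        "system",             "not needed ('cargo')"),
]

def describe(level: int) -> str:
    lvl = SAE_LEVELS[level]
    return f"Level {lvl.number} – {lvl.name}: {lvl.who_drives} drives; driver is {lvl.driver_attention}."

if __name__ == "__main__":
    for n in range(6):
        print(describe(n))
```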
Risks and liabilities:
See also: Computer security § Automobiles, and Autonomous car liability
Many automakers, such as Ford and Volvo, have announced plans to offer fully automated cars in the future. Extensive research and development is being put into automated driving systems, but the biggest problem automakers cannot control is how drivers will use the system.
Drivers are urged to stay attentive, and safety warnings are implemented to alert the driver when corrective action is needed. Tesla Motors has one recorded incident that resulted in a fatality involving the automated driving system in the Tesla Model S. The accident report reveals that the accident was a result of the driver being inattentive and the autopilot system not recognizing the obstruction ahead.
Another flaw with automated driving systems is that, in situations involving unpredictable events such as weather or the driving behavior of others, fatal accidents may occur because the sensors that monitor the vehicle's surroundings cannot provide corrective action.
To overcome some of the challenges for automated driving systems, novel methodologies based on virtual testing, traffic flow simulation, and digital prototypes have been proposed, especially when novel algorithms based on artificial intelligence approaches are employed, which require extensive training and validation data sets.
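As a sketch of what such virtual testing can look like in practice (the scenario parameters, the toy following policy, and the pass criterion are all assumptions for illustration), one can sample many synthetic lead-vehicle braking scenarios and count how often a policy avoids a collision:

```python
# Toy virtual-testing harness: sample synthetic lead-vehicle braking scenarios
# and check whether a placeholder following policy avoids a collision.
import random

def simulate(initial_gap_m, lead_decel, ego_speed, reaction_s=0.5, ego_decel=6.0, dt=0.05):
    """Return True if the ego vehicle stops without closing the gap to zero."""
    gap, lead_speed, t = initial_gap_m, ego_speed, 0.0
    while lead_speed > 0 or ego_speed > 0:
        t += dt
        lead_speed = max(0.0, lead_speed - lead_decel * dt)
        if t > reaction_s:                       # ego brakes after a reaction delay
            ego_speed = max(0.0, ego_speed - ego_decel * dt)
        gap += (lead_speed - ego_speed) * dt
        if gap <= 0:
            return False
    return True

random.seed(0)
results = [
    simulate(initial_gap_m=random.uniform(5, 40),
             lead_decel=random.uniform(2, 9),
             ego_speed=random.uniform(10, 30))
    for _ in range(10_000)
]
print(f"pass rate over 10,000 synthetic scenarios: {sum(results) / len(results):.1%}")
```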
According to World Health Organization figures, road traffic injuries are the leading cause of death among young people aged 15–29 years. More than 1.2 million people die each year worldwide as a result of traffic crashes.
Vehicle-to-Everything (V2X) technologies, starting with 802.11p and evolving to Cellular V2X (C-V2X), can help bring safer roads, more efficient travel, reduced air pollution, and better driving experiences.
V2X will serve as the foundation for the safe, connected vehicle of the future, giving vehicles the ability to "talk" to each other, pedestrians, roadway infrastructure, and the cloud.
It’s no wonder that the MIT Technology Review put V2X on its 2015 10 Breakthrough Technologies list, stating: “Car-to-car communication should also have a bigger impact than the advanced vehicle automation technologies that have been more widely heralded.”
V2X is a key technology for enabling fully autonomous transportation infrastructure. While advancements in radar, LiDAR (Light Detection and Ranging), and camera systems are encouraging and bring autonomous driving one step closer to reality, it should be known that these sensors are limited by their line of sight. V2X complements the capabilities of these sensors by providing 360 degree non-line-of sight awareness, extending a vehicle’s ability to “see” further down the road – even at blind intersections or in bad weather conditions.
So, how long do we have to wait for V2X to become a reality? Actually, V2X technology is here today. Wi-Fi-based 820.11p has established the foundation for latency-critical V2X communications.
To improve road safety for future light vehicles in the United States, the National Highway Safety Administration is expected to begin rulemaking for Dedicated Short Range Communications (DSRC) this year.
Beyond that, tomorrow’s autonomous vehicles require continued technology evolution to accommodate ever-expanding safety requirements and use cases. The path to 5G will deliver this evolution starting with the C-V2X part of 3GPP release 14 specifications, which is expected to be completed by the end of this year.
C-V2X will define two new transmission modes that work together to enable a broad range of automotive use cases:
- The first enables direct communication between vehicles and each other, pedestrians, and road infrastructure. We are building on LTE Direct device-to-device communications, evolving the technology with innovations to exchange real-time information between vehicles traveling at fast speeds, in high-density traffic, and even outside of mobile network coverage areas.
- The second transmission mode uses the ubiquitous coverage of existing LTE networks, so you can be alerted to an accident a few miles ahead, guided to an open parking space, and more. To enable this mode, we are optimizing LTE Broadcast technology for vehicular communications.
To accelerate technology evolution, Qualcomm is actively driving the C-V2X work in 3GPP, building on our leadership in LTE Direct and LTE Broadcast to pioneer C-V2X technologies.
The difference between a vehicle collision and a near miss comes down to milliseconds. With approximately twice the range of DSRC, C-V2X can provide the critical seconds of reaction time needed to avoid an accident. Beyond safety, C-V2X also enables a broad range of use cases – from better situational awareness, to enhanced traffic management, and connected cloud services.
C-V2X will provide a unified connectivity platform for safer “vehicles of tomorrow.” Building upon C-V2X, 5G will bring even more possibilities for the connected vehicle. The extreme throughput, low latency, and enhanced reliability of 5G will allow vehicles to share rich, real-time data, supporting fully autonomous driving experiences, for example:
- Cooperative-collision avoidance: For self-driving vehicles, individual actions by a vehicle to avoid collisions may create hazardous driving conditions for other vehicles. Cooperative-collision avoidance allows all involved vehicles to coordinate their actions to avoid collisions in a cooperative manner.
- High-density platooning: In a self-driving environment, vehicles communicate with each other to create a closely spaced multiple vehicle chains on a highway. High-density platooning will further reduce the current distance between vehicles down to one meter, resulting in better traffic efficiency, fuel savings, and safer roads.
- See through: In situations where small vehicles are behind larger vehicles (e.g., trucks), the smaller vehicles cannot "see" a pedestrian crossing the road in front of the larger vehicle. In such scenarios, a truck’s camera can detect the situation and share the image of the pedestrian with the vehicle behind it, which sends an alert to the driver and shows him the pedestrian in virtual reality on the windshield board.
Beyond pioneering C-V2X and helping define the path to 5G, we are delivering new levels of on-device intelligence and integration in the connected vehicle of tomorrow. Our innovations in cognitive technologies, such as always-on sensing, computer vision, and machine learning, will help make our vision of safer, more autonomous vehicles a reality.
To learn more about how we are pioneering Cellular V2X, join us for our upcoming webinar or visit our Cellular V2X web page.
Learn more about our automotive solutions here.
[End of Article]
___________________________________________________________________________
Vehicular automation
Vehicular automation involves the use of mechatronics, artificial intelligence, and multi-agent system to assist a vehicle's operator. These features and the vehicles employing them may be labeled as intelligent or smart.
A vehicle using automation for difficult tasks, especially navigation, may be referred to as semi-autonomous. A vehicle relying solely on automation is consequently referred to as robotic or autonomous. After the invention of the integrated circuit, the sophistication of automation technology increased. Manufacturers and researchers subsequently added a variety of automated functions to automobiles and other vehicles.
Autonomy levels:
Autonomy in vehicles is often categorized in six levels: The level system was developed by the Society of Automotive Engineers (SAE):
- Level 0: No automation.
- Level 1: Driver assistance - The vehicle can control either steering or speed autonomously in specific circumstances to assist the driver.
- Level 2: Partial automation - The vehicle can control both steering and speed autonomously in specific circumstances to assist the driver.
- Level 3: Conditional automation - The vehicle can control both steering and speed autonomously under normal environmental conditions, but requires driver oversight.
- Level 4: High automation - The vehicle can complete a travel autonomously under normal environmental conditions, not requiring driver oversight.
- Level 5: Full autonomy - The vehicle can complete a travel autonomously in any environmental conditions.
Ground vehicles:
Further information: Unmanned ground vehicles
Ground vehicles employing automation and teleoperation include shipyard gantries, mining trucks, bomb-disposal robots, robotic insects, and driverless tractors.
There are a lot of autonomous and semi-autonomous ground vehicles being made for the purpose of transporting passengers. One such example is the free-ranging on grid (FROG) technology which consists of autonomous vehicles, a magnetic track and a supervisory system.
The FROG system is deployed for industrial purposes in factory sites and has been in use since 1999 on the ParkShuttle, a PRT-style public transport system in the city of Capelle aan den IJssel to connect the Rivium business park with the neighboring city of Rotterdam (where the route terminates at the Kralingse Zoom metro station). The system experienced a crash in 2005 that proved to be caused by a human error.
Applications for automation in ground vehicles include the following:
- Vehicle tracking system system ESITrack, Lojack
- Rear-view alarm, to detect obstacles behind.
- Anti-lock braking system (ABS) (also Emergency Braking Assistance (EBA)), often coupled with Electronic brake force distribution (EBD), which prevents the brakes from locking and losing traction while braking. This shortens stopping distances in most cases and, more importantly, allows the driver to steer the vehicle while braking.
- Traction control system (TCS) actuates brakes or reduces throttle to restore traction if driven wheels begin to spin.
- Four wheel drive (AWD) with a center differential. Distributing power to all four wheels lessens the chances of wheel spin. It also suffers less from oversteer and understeer.
- Electronic Stability Control (ESC) (also known for Mercedes-Benz proprietary Electronic Stability Program (ESP), Acceleration Slip Regulation (ASR) and Electronic differential lock (EDL)). Uses various sensors to intervene when the car senses a possible loss of control. The car's control unit can reduce power from the engine and even apply the brakes on individual wheels to prevent the car from understeering or oversteering.
- Dynamic steering response (DSR) corrects the rate of power steering system to adapt it to vehicle's speed and road conditions.
Research is ongoing and prototypes of autonomous ground vehicles exist.
Cars:
See also: Autonomous car
Extensive automation for cars focuses on either introducing robotic cars or modifying modern car designs to be semi-autonomous.
Semi-autonomous designs could be implemented sooner as they rely less on technology that is still at the forefront of research. An example is the dual mode monorail. Groups such as RUF (Denmark) and TriTrack (USA) are working on projects consisting of specialized private cars that are driven manually on normal roads but also that dock onto a monorail/guideway along which they are driven autonomously.
As a method of automating cars without extensively modifying the cars as much as a robotic car, Automated highway systems (AHS) aims to construct lanes on highways that would be equipped with, for example, magnets to guide the vehicles. Automation vehicles have auto-brakes named as Auto Vehicles Braking System (AVBS). Highway computers would manage the traffic and direct the cars to avoid crashes.
The European Commission has established a smart car development program called the Intelligent Car Flagship Initiative. The goals of that program include:
There are plenty of further uses for automation in relation to cars. These include:
- Assured Clear Distance Ahead
- Adaptive headlamps
- Advanced Automatic Collision Notification, such as OnStar
- Intelligent Parking Assist System
- Automatic Parking
- Automotive night vision with pedestrian detection
- Blind spot monitoring
- Driver Monitoring System
- Robotic car or self-driving car which may result in less-stressed "drivers", higher efficiency (the driver can do something else), increased safety and less pollution (e.g. via completely automated fuel control)
- Precrash system
- Safe speed governing
- Traffic sign recognition
- Following another car on a motorway – "enhanced" or "adaptive" cruise control, as used by Ford and Vauxhall
- Distance control assist – as developed by Nissan
- Dead man's switch – there is a move to introduce deadman's braking into automotive application, primarily heavy vehicles, and there may also be a need to add penalty switches to cruise controls.
Singapore also announced a set of provisional national standards on January 31, 2019, to guide the autonomous vehicle industry. The standards, known as Technical Reference 68 (TR68), will promote the safe deployment of fully driverless vehicles in Singapore, according to a joint press release by Enterprise Singapore (ESG), Land Transport Authority (LTA), Standards Development Organisation and Singapore Standards Council (SSC).
Shared autonomous vehicles:
Following recent developments in autonomous cars, shared autonomous vehicles are now able to run in ordinary traffic without the need for embedded guidance markers. So far the focus has been on low speed, 20 miles per hour (32 km/h), with short, fixed routes for the "last mile" of journeys.
This means issues of collision avoidance and safety are significantly less challenging than those for automated cars, which seek to match the performance of conventional vehicles.
Aside from 2getthere ("ParkShuttle"), three companies - Ligier ("Easymile EZ10"), Navya ("ARMA" & "Autonom Cab") and RDM Group ("LUTZ Pathfinder") - are manufacturing and actively testing such vehicles. Two other companies have produced prototypes, Local Motors ("Olli") and the GATEway project.
Beside these efforts, Apple is reportedly developing an autonomous shuttle, based on a vehicle from an existing automaker, to transfer employees between its offices in Palo Alto and Infinite Loop, Cupertino. The project called "PAIL", after its destinations, was revealed in August 2017 when Apple announced it had abandoned development of autonomous cars.
Click on any of the following blue hyperlinks for more about Vehicular Automation: ___________________________________________________________________________
Automated driving system:
An automated driving system is a complex combination of various components that can be defined as systems where perception, decision making, and operation of the automobile are performed by electronics and machinery instead of a human driver, and as introduction of automation into road traffic.
This includes handling of the vehicle, destination, as well as awareness of surroundings.
While the automated system has control over the vehicle, it allows the human operator to leave all responsibilities to the system.
Overview:
The automated driving system is generally an integrated package of individual automated systems operating in concert. Automated driving implies that the driver have given up the ability to drive (i.e., all appropriate monitoring, agency, and action functions) to the vehicle automation system. Even though the driver may be alert and ready to take action at any moment, automation system controls all functions.
Automated driving systems are often conditional, which implies that the automation system is capable of automated driving, but not for all conditions encountered in the course of normal operation. Therefore, a human driver is functionally required to initiate the automated driving system, and may or may not do so when driving conditions are within the capability of the system.
When the vehicle automation system has assumed all driving functions, the human is no longer driving the vehicle but continues to assume responsibility for the vehicle's performance as the vehicle operator.
The automated vehicle operator is not functionally required to actively monitor the vehicle's performance while the automation system is engaged, but the operator must be available to resume driving within several seconds of being prompted to do so, as the system has limited conditions of automation.
While the automated driving system is engaged, certain conditions may prevent real-time human input, but for no more than a few seconds. The operator is able to resume driving at any time subject to this short delay. When the operator has resumed all driving functions, he or she reassumes the status of the vehicle's driver.
Success in the technology:
The success in the automated driving system has been known to be successful in situations like rural road settings.
Rural road settings would be a setting in which there is lower amounts of traffic and lower differentiation between driving abilities and types of drivers.
"The greatest challenge in the development of automated functions is still inner-city traffic, where an extremely wide range of road users must be considered from all directions."
This technology is progressing to a more reliable way of the automated driving cars to switch from auto-mode to driver mode. Auto-mode is the mode that is set in order for the automated actions to take over, while the driver mode is the mode set in order to have the operator controlling all functions of the car and taking the responsibilities of operating the vehicle (Automated driving system not engaged).
This definition would include vehicle automation systems that may be available in the near term—such as traffic-jam assist, or full-range automated cruise control—if such systems would be designed such that the human operator can reasonably divert attention (monitoring) away from the performance of the vehicle while the automation system is engaged. This definition would also include automated platooning (such as conceptualized by the SARTRE project).
The SARTRE Project:
The SARTRE project's main goal is to create platooning, a train of automated cars, that will provide comfort and have the ability for the driver of the vehicle to arrive safely to a destination.
Along with the ability to be along the train, drivers that are driving past these platoons, can join in with a simple activation of the automated driving system that correlates with a truck that leads the platoon. The SARTRE project is taking what we know as a train system and mixing it with automated driving technology. This is intended to allow for an easier transportation though cities and ultimately help with traffic flow through heavy automobile traffic.
SARTRE & modern day:
In some parts of the world the self-driving car has been tested in real life situations such as in Pittsburgh. The Self-driving Uber has been put to the test around the city, driving with different types of drivers as well as different traffic situations. Not only have there been testing and successful parts to the automated car, but there has also been extensive testing in California on automated busses.
The lateral control of the automated buses uses magnetic markers such as the platoon at San Diego, while the longitudinal control of the automated truck platoon uses millimeter wave radio and radar.
Current examples around today's society include the Google car and Tesla's models. Tesla has redesigned automated driving, they have created car models that allow drivers to put in the destination and let the car take over. These are two modern day examples of the automated driving system cars.
Levels of automation according to SAE:
The U.S. Department of Transportation's National Highway Traffic Safety Administration (NHTSA) provided a standard classification system in 2013 which defined five different levels of automation, ranging from level 0 (no automation) to level 4 (full automation).
Since then, the NHTSA has updated its standards to be in line with the classification system defined by SAE International. SAE International defines six levels of automation in its classification standard, document SAE J3016, ranging from 0 (no automation) to 5 (full automation).
Level 0 – No automation:
The driver is in complete control of the vehicle and the system does not interfere with driving. Systems that may fall into this category are forward collision warning systems and lane departure warning systems.
Level 1 – Driver assistance:
The driver is in control of the vehicle, but the system can modify the speed and steering direction of the vehicle. Systems that may fall into this category are adaptive cruise control and lane keep assist.
Level 2 – Partial automation:
The driver must be able to control the vehicle if corrections are needed, but the driver is no longer in control of the speed and steering of the vehicle. Parking assistance is an example of a system that falls into this category along with Tesla's autopilot feature.
A system that falls into this category is the DISTRONIC PLUS system created by Mercedes-Benz. It is important to note the driver must not be distracted in Level 0 to Level 2 modes.
Level 3 – Conditional automation:
The system is in complete control of vehicle functions such as speed, steering, and monitoring the environment under specific conditions. Such conditions may be met, for example, on a fenced-off highway with no intersections, at limited driving speeds, or in a boxed-in driving situation.
A human driver must be ready to intervene when requested by the system. If the driver does not respond within a predefined time, or if a failure occurs in the system, the system must perform a safety stop in its own lane (no lane change allowed). The driver is allowed to be only partially distracted, such as by checking text messages, but taking a nap is not allowed.
Level 4 – High automation:
The system is in complete control of the vehicle and human presence is no longer needed, but its applications are limited to specific conditions. An example of a system being developed that falls into this category is the Waymo self-driving car service.
If driving conditions exceed the system's performance boundaries, the system does not have to ask the human to intervene; it can instead abort the trip in a safe manner, e.g., by parking the car.
Level 5 – Full automation:
The system provides the same capabilities as Level 4, but it can operate in all driving conditions. The human is effectively "cargo" at Level 5. Currently, there are no driving systems at this level.
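To make the taxonomy above easier to reason about, here is a minimal Python sketch that encodes the six SAE levels, whether the driver must continuously supervise, and what fallback behavior applies when a takeover is requested. The names, fallback strings, and the single boolean for driver response are illustrative assumptions, not part of SAE J3016.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_must_supervise(level: SAELevel) -> bool:
    # Levels 0-2: the human must monitor the road at all times.
    # From Level 3 upward, monitoring is delegated to the system
    # within its specific operating conditions.
    return level <= SAELevel.PARTIAL_AUTOMATION

def fallback(level: SAELevel, driver_responded_in_time: bool) -> str:
    # Illustrative fallback behavior when operating conditions are exceeded.
    if level <= SAELevel.PARTIAL_AUTOMATION:
        return "driver is already driving"
    if level == SAELevel.CONDITIONAL_AUTOMATION:
        # Level 3: request a takeover; if the driver does not respond within
        # the predefined time, perform a safety stop in the current lane.
        return "driver takes over" if driver_responded_in_time else "safety stop in ego lane"
    # Levels 4-5: the system reaches a minimal-risk condition on its own,
    # e.g. by aborting the trip and parking the car.
    return "system aborts the trip and parks safely"

if __name__ == "__main__":
    print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))   # True
    print(fallback(SAELevel.CONDITIONAL_AUTOMATION, False))     # safety stop in ego lane
    print(fallback(SAELevel.HIGH_AUTOMATION, False))            # parks safely
```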
Risks and liabilities:
See also: Computer security § Automobiles, and Autonomous car liability
Many automakers, such as Ford and Volvo, have announced plans to offer fully automated cars in the future. Extensive research and development is being put into automated driving systems, but the biggest factor automakers cannot control is how drivers will use the system.
Drivers are urged to stay attentive, and safety warnings are implemented to alert the driver when corrective action is needed. Tesla Motors has one recorded incident that resulted in a fatality involving the automated driving system in the Tesla Model S. The accident report concluded that the accident was a result of the driver being inattentive and of the autopilot system not recognizing the obstruction ahead.
Another weakness of automated driving systems is that unpredictable events, such as weather or the driving behavior of others, can cause fatal accidents when the sensors that monitor the vehicle's surroundings are unable to trigger corrective action.
To overcome some of these challenges, novel methodologies based on virtual testing, traffic-flow simulation, and digital prototypes have been proposed, especially for novel algorithms based on artificial intelligence approaches, which require extensive training and validation data sets.
The AI Effect
Pictured below: 4 Positive Effects of AI Use in Email Marketing
- YouTube Video: Crypto Altcoin With Serious Potential! (Effect.Ai - EFX)
- YouTube Video: The Real Reason to be Afraid of Artificial Intelligence | Peter Haas | TEDxDirigo
- YouTube Video: 19 Industries The Blockchain* Will Disrupt
Pictured below: 4 Positive Effects of AI Use in Email Marketing
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.
Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."
AI researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"
"The AI effect" tries to redefine AI to mean:
AI is anything that has not been done yet:
A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.
Pamela McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the "failures", the tough nuts that couldn't yet be cracked."
When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used "brute force methods" and it wasn't real intelligence. Fred Reed writes:
"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright."
Douglas Hofstadter expresses the AI effect concisely by quoting Tesler's Theorem:
"AI is whatever hasn't been done yet."
When problems have not yet been formalized, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by the computer and the other part by the human. This formalization is referred to as a human-assisted Turing machine.
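As a rough illustration of that split, the sketch below routes the cases a program can formalize to a machine rule and leaves the rest to a human. The machine rule and the interactive "oracle" are hypothetical placeholders chosen only to make the idea concrete.

```python
# Minimal sketch of splitting a problem between a computer and a human,
# in the spirit of a human-assisted Turing machine. The machine rule and
# the interactive "oracle" are hypothetical placeholders.

def machine_solver(item: str):
    # The part of the problem the computer has formalized: a toy rule.
    if item.isdigit():
        return f"number:{item}"
    return None  # the machine cannot decide this case

def human_oracle(item: str) -> str:
    # Stand-in for the human part of the computation, e.g. an analyst
    # or crowd worker answering the cases the machine leaves open.
    return input(f"How should '{item}' be labeled? ")

def solve(item: str) -> str:
    answer = machine_solver(item)
    return answer if answer is not None else human_oracle(item)

if __name__ == "__main__":
    for item in ["42", "seahorse"]:
        print(item, "->", solve(item))
```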
AI applications become mainstream:
Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI.
Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."
According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"
Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"
Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."
Legacy of the AI winter:
Main article: AI winter
Many AI researchers find that they can procure more funding and sell more software if they avoid the tarnished name of "artificial intelligence" and instead pretend their work has nothing to do with intelligence at all. This was especially true in the early 1990s, during the second "AI winter".
Patty Tascarella writes "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding".
Saving a place for humanity at the top of the chain of being:
Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe". By discounting artificial intelligence people can continue to feel unique and special.
Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence.
A related effect has been noted in the history of animal cognition and in consciousness studies, where every time a capacity formerly thought to be uniquely human is discovered in animals (e.g., the ability to make tools, or passing the mirror test), the overall importance of that capacity is deprecated.
Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."
See also:
- "If It Works, It's Not AI: A Commercial Look at Artificial Intelligence startups"
- ELIZA effect
- Functionalism (philosophy of mind)
- Moravec's paradox
- Chinese room
Artificial Intelligence: Its History, Timeline and Progress
- YouTube Video: Artificial Intelligence, the History and Future - with Chris Bishop
- YouTube Video about Artificial Intelligence: Mankind's Last Invention?
- YouTube Video: Sam Harris on Artificial Intelligence
History of artificial intelligence
The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen.
The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.
Eventually, it became obvious that they had grossly underestimated the difficulty of the project.
In 1973, in response to the criticism from James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "AI winter".
Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s the investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.
Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to new methods, the application of powerful computer hardware, and the collection of immense data sets.
Click on any of the following blue hyperlinks for more about the History of Artificial Intelligence:
- AI in myth, fiction and speculation
- Automatons
- Formal reasoning
- Computer science
- The birth of artificial intelligence 1952–1956
- The golden years 1956–1974
- The first AI winter 1974–1980
- Boom 1980–1987
- Bust: the second AI winter 1987–1993
- AI 1993–2011
- Deep learning, big data and artificial general intelligence: 2011–present
- See also:
The following is the Timeline of Artificial Intelligence:
- To 1900
- 1901–1950
- 1950s
- 1960s
- 1970s
- 1980s
- 1990s
- 2000s
- 2010s
- See also:
- "Brief History (timeline)", AI Topics, Association for the Advancement of Artificial Intelligence
- "Timeline: Building Smarter Machines", New York Times, 24 June 2010
Progress in artificial intelligence:
Artificial intelligence applications have been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys.
However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry."
In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes.
Kaplan and Haenlein structure artificial intelligence along three evolutionary stages:
- 1) artificial narrow intelligence – applying AI only to specific tasks;
- 2) artificial general intelligence – applying AI to several areas and able to autonomously solve problems they were never even designed for;
- and 3) artificial super intelligence – applying AI to any area capable of scientific creativity, social skills, and general wisdom.
To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject matter expert Turing tests.
Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results.
Click on any of the following blue hyperlinks for more about the Progress of Artificial Intelligence:
(WEBMD) HEALTH NEWS: How AI is Transforming Health Care
Pictured below: Courtesy of WebMD
Jan. 2, 2020 -- Matthew Might’s son Bertrand was born with a devastating, ultra-rare genetic disorder.
Now 12, Bertrand has vibrant eyes, a quick smile, and a love of dolphins. But he nearly died last spring from a runaway infection. He can’t sit up on his own anymore, and he struggles to use his hands to play with toys. Puberty will likely wreak more havoc.
Might, who has a PhD in computer science, has been able to use his professional expertise to change the trajectory of his son’s life. With computational simulation and by digitally combing through vast amounts of data, Might discovered two therapies that have extended Bertrand’s life and improved its quality.
Similar artificial intelligence (AI) enabled Colin Hill to determine which blood cancer patients are likely to gain the most from bone marrow transplants. The company Hill runs, GNS Healthcare, found a genetic signature in some multiple myeloma patients that suggested they would benefit from a transplant. For others, the risky, painful, and expensive procedure would probably only provide false hope.
Hospitals and doctors’ offices collect reams of data on their patients—everything from blood pressure to mobility measures to genetic sequencing. Today, most of that data sits on a computer somewhere, helping no one. But that is slowly changing as computers get better at using AI to find patterns in vast amounts of data and as recording, storing, and analyzing information gets cheaper and easier.
“I think the average patient or future patient is already being touched by AI in health care. They’re just not necessarily aware of it,” says Chris Coburn, chief innovation officer for Partners HealthCare System, a hospital and physicians network based in Boston.
AI has already helped medical administrators and schedulers with billing and making better use of providers’ time—though it still drives many doctors (and patients) crazy because of the tediousness of the work and the time spent typing rather than interacting with the patient.
AI remains a little further from reality when it comes to patient care, but “I could not easily name a [health] field that doesn’t have some active work as it relates to AI,” says Coburn, who mentions pathology, radiology, spinal surgery, cardiac surgery, and dermatology, among others.
Might’s and Hill’s stories are forerunners of a coming transformation that will enable the full potential of AI in medicine, they and other experts say. Such digital technology has been transforming other industries for decades—think cell phone traffic apps for commuting or GPS apps that measure weather, wind, and wave action to develop the most fuel-efficient shipping routes across oceans. Though slowed by the need for patient privacy and other factors, AI is finally making real inroads into medical care.
Supporters say AI promises earlier cancer diagnoses and shorter timelines for developing and testing new medications. Natural language processing should allow doctors to detach themselves from their keyboards. Wearable sensors and data analysis will offer a richer view of patients’ health as it unfolds over time.
Concerns About Digital Care:
AI, however, does have its limitations.
Some worry that digitizing medicine will cost people their jobs—for instance, when computers can read medical scans more accurately than people. But there will always be a major role for humans to shape the diagnostic process, says Meera Hameed, MD, chief of the surgical pathology service at Memorial Sloan Kettering Cancer Center in New York City. She says algorithms that can read digital scans will help integrate medical information, such as diagnosis, lab tests, and genetics, so pathologists can decide what to do with that information. “We will be the ones interpreting that data and putting it all together,” she says.
The need for privacy and huge amounts of data remain a challenge for AI in medicine, says Mike Nohaile, PhD, senior vice president of strategy, commercialization, and innovation for pharmaceutical giant Amgen. In large data sets, names are removed, but people can be re-identified today by their genetic code. AI is also greedy for data. While a child can learn the difference between a cat and a dog by seeing a handful of examples, an algorithm might need 50,000 data points to make the same distinction.
Computer scientists who build digital algorithms can also unintentionally introduce racial and demographic bias. Heart attacks might be driven by one factor in one group of people, while in another population, a different factor might be the main cause, Nohaile says. “I don’t want a doctor saying to someone, ‘You’re at no or low risk’ and it’s wrong,” he says. “If it does go wrong, it probably will fall disproportionately on disadvantaged populations.” Also, he says, today, the algorithms used to run AI are often hard to understand and interpret. “I don’t want to trust a black box to make decisions because I don’t know if it’s been biased,” Nohaile says. “We think about that a lot.”
That said, recent advances in digital analysis have enabled computers to draw more meaningful conclusions from large data sets. And the quality of medical information has improved dramatically over the last six years, he says, thanks to the Affordable Care Act, the national insurance program championed by then-President Barack Obama that required providers to digitize their records in order to receive federal reimbursements.
Companies like Amgen are using AI to speed drug development and clinical trials, Nohaile says. Large patient data sets can help companies identify patients who are well suited for research trials, allowing those trials to proceed faster.
Researchers can also move more quickly when they have AI to filter and make sense of reams of scientific data. And improvements in natural language processing are boosting the quality of medical records, making analyses more accurate. This will soon help patients better understand their doctor’s instructions and their own condition, Nohaile says.
Preventing Medical Errors:
AI can also help prevent medical mistakes and flag those most at risk for problems, says Robbie Freeman, RN, MS, vice president of clinical innovation at the Mount Sinai Health System in New York. “We know that hospitals are still a place where a lot of avoidable harm can happen,” he says. Freeman’s team at Mount Sinai develops A.I.-powered tools to prevent some of those situations.
One algorithm they created combs through medical records to determine which patients are at increased risk of falling. Notifying the staff of this extra risk means they can take steps to prevent accidents. Freeman says the predictive model his team developed outperforms the previous model by 30% to 40%.
They’ve trained another system to identify patients at high risk for malnutrition who might benefit by spending time with a hospital nutritionist. That algorithm “learns” from new data, Freeman says, so if a dietitian visits a patient labeled at-risk and finds no problem, their conclusion refines the model. “This is where AI has tremendous potential,” Freeman says, “to really power the accuracy for the tools we have for keeping patients safe.”
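A feedback loop of this kind can be sketched in a few lines. The feature names, weights, threshold, and update rule below are hypothetical and are not based on Mount Sinai's actual models; they are only meant to show how a clinician's conclusion can refine a risk model.

```python
# Hypothetical sketch of a risk score refined by clinician feedback, in the
# spirit of the feedback loop described above. Features, weights, threshold
# and the update rule are illustrative only, not an actual hospital model.

weights = {"low_mobility": 2.0, "recent_weight_loss": 1.5, "poor_appetite": 1.0}
THRESHOLD = 2.5
LEARNING_RATE = 0.1

def risk_score(patient: dict) -> float:
    return sum(w for feature, w in weights.items() if patient.get(feature))

def flag_at_risk(patient: dict) -> bool:
    return risk_score(patient) >= THRESHOLD

def incorporate_feedback(patient: dict, clinician_says_at_risk: bool) -> None:
    # If the model's flag disagrees with the dietitian's conclusion, nudge
    # the weights of the features present in this patient accordingly.
    error = float(clinician_says_at_risk) - float(flag_at_risk(patient))
    for feature in weights:
        if patient.get(feature):
            weights[feature] += LEARNING_RATE * error

patient = {"low_mobility": True, "recent_weight_loss": True, "poor_appetite": False}
print(flag_at_risk(patient))                                  # model flags the patient
incorporate_feedback(patient, clinician_says_at_risk=False)   # dietitian found no problem
print(weights)                                                # weights adjusted by the feedback
```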
While much of the information in these algorithms was already being collected, it would often go unnoticed. Freeman says that during his six years as a nurse, he frequently felt like he was “documenting into a black hole.” Now, algorithms can evaluate how a patient is changing over time and can reveal a composite picture, rather than identifying 100 different categories of information. “The data was always there, but the algorithms make it actionable,” he says.
Managing such enormous quantities of data remains one of the biggest challenges for AI. At Mount Sinai, Freeman has access to billions of data points—going back to 2010 for 50,000 inpatients a year. Improvements in computing technology have allowed his group to make better use of these data points when designing algorithms. “Every year it gets a little easier and a little less expensive to do,” he says. “I don’t think we could have done it five years ago.”
But because algorithms require so much data to make accurate predictions, smaller health systems that don’t have access to this level of data might end up with unreliable or useless results, he warns.
Big Benefits — But A Ways To Go:
The improvements in data are beginning to yield benefits to patients, says Hill, chairman, CEO, and co-founder of GNS Healthcare, which is based in Cambridge, Massachusetts. Hill thinks AI algorithms like the one that suggests which patients will benefit from bone marrow transplants have the potential to save millions or more in health care spending by matching patients with the therapies most likely to help them.
Over the next 3 to 5 years, the quality of data will improve even more, he predicts, allowing the seamless combination of information, such as disease treatment with clinical data, such as a patient’s response to medication.
At the moment, Nohaile says the biggest problem with AI in medicine is that people overestimate what it can do. AI is much closer to a spreadsheet than to human intelligence, he says, laughing at the idea that it will rival a doctor or nurse’s abilities anytime soon: “You use a spreadsheet to help the human do a much more efficient and effective job at what they do.” And that, he says, is how people should view AI.
List of Artificial Intelligence Projects
- YouTube Video: Jeff Dean Talks Google Brain and Brain Residency
- YouTube Video: Cognitive Architectures (MIT)
- YouTube Video: Siri vs Alexa vs Google vs Cortana: news question challenge. TechNewOld - AI Arena #1.
The following is a list of current and past, non-classified notable artificial intelligence projects.
Specialized projects:
Brain-inspired:
Multipurpose projects:
Software libraries:
See also:
Specialized projects:
Brain-inspired:
- Blue Brain Project, an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level.
- Google Brain, a deep learning project and part of Google X, attempting to achieve intelligence similar or equal to human level.
- Human Brain Project
- NuPIC, an open source implementation by Numenta of its cortical learning algorithm.
- 4CAPS, developed at Carnegie Mellon University under Marcel A. Just
- ACT-R, developed at Carnegie Mellon University under John R. Anderson.
- AIXI, Universal Artificial Intelligence developed by Marcus Hutter at IDSIA and ANU.
- CALO, a DARPA-funded, 25-institution effort to integrate many artificial intelligence approaches (natural language processing, speech recognition, machine vision, probabilistic logic, planning, reasoning, and many forms of machine learning) into an AI assistant that learns to help manage your office environment.
- CHREST, developed under Fernand Gobet at Brunel University and Peter C. Lane at the University of Hertfordshire.
- CLARION, developed under Ron Sun at Rensselaer Polytechnic Institute and University of Missouri.
- CoJACK, an ACT-R inspired extension to the JACK multi-agent system that adds a cognitive architecture to the agents for eliciting more realistic (human-like) behaviors in virtual environments.
- Copycat, by Douglas Hofstadter and Melanie Mitchell at the Indiana University.
- DUAL, developed at the New Bulgarian University under Boicho Kokinov.
- FORR developed by Susan L. Epstein at The City University of New York.
- IDA and LIDA, implementing Global Workspace Theory, developed under Stan Franklin at the University of Memphis.
- OpenCog Prime, developed using the OpenCog Framework.
- Procedural Reasoning System (PRS), developed by Michael Georgeff and Amy L. Lansky at SRI International.
- Psi-Theory developed under Dietrich Dörner at the Otto-Friedrich University in Bamberg, Germany.
- R-CAST, developed at the Pennsylvania State University.
- Soar, developed under Allen Newell and John Laird at Carnegie Mellon University and the University of Michigan.
- Society of mind and its successor the Emotion machine proposed by Marvin Minsky.
- Subsumption architectures, developed e.g. by Rodney Brooks (though it could be argued whether they are cognitive).
- AlphaGo, software developed by Google that plays the Chinese board game Go.
- Chinook, a computer program that plays English draughts; the first to win the world champion title in the competition against humans.
- Deep Blue, a chess-playing computer developed by IBM which beat Garry Kasparov in 1997.
- FreeHAL, a self-learning conversation simulator (chatterbot) that uses semantic nets to organize its knowledge in order to imitate human behavior closely within conversations.
- Halite, an artificial intelligence programming competition created by Two Sigma.
- Libratus, a poker AI that beat world-class poker players in 2017, intended to be generalisable to other applications.
- Quick, Draw!, an online game developed by Google that challenges players to draw a picture of an object or idea and then uses a neural network to guess what the drawing is.
- Stockfish AI, an open source chess engine currently ranked the highest in many computer chess rankings.
- TD-Gammon, a program that learned to play world-class backgammon partly by playing against itself (temporal difference learning with neural networks).
- Serenata de Amor, a project for the analysis of public expenditures and the detection of discrepancies.
- Braina, an intelligent personal assistant application with a voice interface for Windows OS.
- Cyc, an attempt to assemble an ontology and database of everyday knowledge, enabling human-like reasoning.
- Eurisko, a language by Douglas Lenat for solving problems which consists of heuristics, including some for how to use and change its heuristics.
- Google Now, an intelligent personal assistant with a voice interface in Google's Android and Apple Inc.'s iOS, as well as Google Chrome web browser on personal computers.
- Holmes, a new AI created by Wipro.
- Microsoft Cortana, an intelligent personal assistant with a voice interface in Microsoft's various Windows 10 editions.
- Mycin, an early medical expert system.
- Open Mind Common Sense, a project based at the MIT Media Lab to build a large common sense knowledge base from online contributions.
- P.A.N., a publicly available text analyzer.
- Siri, an intelligent personal assistant and knowledge navigator with a voice-interface in Apple Inc.'s iOS and macOS.
- SNePS, simultaneously a logic-based, frame-based, and network-based knowledge representation, reasoning, and acting system.
- Viv (software), a new AI by the creators of Siri.
- Wolfram Alpha, an online service that answers queries by computing the answer from structured data.
- AIBO, the robot pet for the home, grew out of Sony's Computer Science Laboratory (CSL).
- Cog, a robot developed by MIT to study theories of cognitive science and artificial intelligence, now discontinued.
- Melomics, a bio-inspired technology for music composition and synthesis, in which computers develop their own style rather than mimic musicians.
- AIML, an XML dialect for creating natural language software agents.
- Apache Lucene, a high-performance, full-featured text search engine library written entirely in Java.
- Apache OpenNLP, a machine learning based toolkit for the processing of natural language text. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking and parsing.
- Artificial Linguistic Internet Computer Entity (A.L.I.C.E.), an award-winning natural language processing chatterbot.
- Cleverbot, successor to Jabberwacky, now with 170m lines of conversation, Deep Context, fuzziness and parallel processing. Cleverbot learns from around 2 million user interactions per month.
- ELIZA, a famous 1966 computer program by Joseph Weizenbaum, which parodied person-centered therapy.
- Jabberwacky, a chatterbot by Rollo Carpenter, aiming to simulate natural human chat.
- Mycroft, a free and open-source intelligent personal assistant that uses a natural language user interface.
- PARRY, another early chatterbot, written in 1972 by Kenneth Colby, attempting to simulate a paranoid schizophrenic.
- SHRDLU, an early natural language processing computer program developed by Terry Winograd at MIT from 1968 to 1970.
- SYSTRAN, a machine translation technology by the company of the same name, used by Yahoo!, AltaVista and Google, among others.
- 1 the Road, the first novel marketed by an AI.
- Synthetic Environment for Analysis and Simulations (SEAS), a model of the real world used by Homeland security and the United States Department of Defense that uses simulation and AI to predict and evaluate future events and courses of action.
Multipurpose projects:
Software libraries:
- Apache Mahout, a library of scalable machine learning algorithms.
- Deeplearning4j, an open-source, distributed deep learning framework written for the JVM.
- Keras, a high level open-source software library for machine learning (works on top of other libraries).
- Microsoft Cognitive Toolkit (previously known as CNTK), an open source toolkit for building artificial neural networks.
- OpenNN, a comprehensive C++ library implementing neural networks.
- PyTorch, an open-source machine learning library for Python providing tensor computation and dynamic neural networks.
- TensorFlow, an open-source software library for machine learning.
- Theano, a Python library and optimizing compiler for manipulating and evaluating mathematical expressions, especially matrix-valued ones.
- Neural Designer, a commercial deep learning tool for predictive analytics.
- Neuroph, a Java neural network framework.
- OpenCog, a GPL-licensed framework for artificial intelligence written in C++, Python and Scheme.
- RapidMiner, an environment for machine learning and data mining, now developed commercially.
- Weka, a free implementation of many machine learning algorithms in Java.
- Data Applied, a web based data mining environment.
- Grok, a service that ingests data streams and creates actionable predictions in real time.
- Watson, a pilot service by IBM to uncover and share data-driven insights, and to spur cognitive applications.
See also:
Machine Learning as a Sub-set of AI including PEW Research Article and Video
- YouTube Video: Machine Learning Basics | What Is Machine Learning? | Introduction To Machine Learning | Simplilearn
- YouTube Video: MIT Introduction to Deep Learning
- YouTube Video: Introduction to Machine Learning vs. Deep Learning
What is machine learning, and how does it work?
At Pew Research Center, we collect and analyze data in a variety of ways. Besides asking people what they think through surveys, we also regularly study things like images, videos and even the text of religious sermons.
In a digital world full of ever-expanding datasets like these, it’s not always possible for humans to analyze such vast troves of information themselves. That’s why our researchers have increasingly made use of a method called machine learning. Broadly speaking, machine learning uses computer programs to identify patterns across thousands or even millions of data points. In many ways, these techniques automate tasks that researchers have done by hand for years.
Our latest video explainer – part of our Methods 101 series – explains the basics of machine learning and how it allows researchers at the Center to analyze data on a large scale. To learn more about how we’ve used machine learning and other computational methods in our research, including the analysis mentioned in this video, you can explore recent reports from our Data Labs team.
[End of Article]
___________________________________________________________________________
Machine Learning (Wikipedia)
Machine learning (ML) is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.
Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.
Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning.
Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning. In its application across business problems, machine learning is also referred to as predictive analytics.
Overview:
Machine learning involves computers discovering how they can perform tasks without being explicitly programmed to do so. For simple tasks assigned to computers, it is possible to program algorithms telling the machine how to execute all steps required to solve the problem at hand; on the computer's part, no learning is needed.
For more advanced tasks, it can be challenging for a human to manually create the needed algorithms. In practice, it can turn out to be more effective to help the machine develop its own algorithm, rather than have human programmers specify every needed step.
The discipline of machine learning employs various approaches to help computers learn to accomplish tasks where no fully satisfactory algorithm is available. In cases where vast numbers of potential answers exist, one approach is to label some of the correct answers as valid. This can then be used as training data for the computer to improve the algorithm(s) it uses to determine correct answers. For example, to train a system for the task of digital character recognition, the MNIST dataset has often been used.
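For readers who want to see what training on labeled examples looks like in practice, here is a minimal sketch using scikit-learn's bundled digits dataset (a small, MNIST-like collection of 8x8 handwritten digits). The choice of library and classifier is an assumption made for illustration; the text above does not prescribe any particular tool.

```python
# Minimal sketch of supervised learning from labeled examples, assuming
# scikit-learn is installed. load_digits() is a small MNIST-like dataset
# of 8x8 handwritten digit images bundled with the library.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)   # images flattened to 64 features, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=5000)   # a simple classifier; many others would do
clf.fit(X_train, y_train)                 # learn from the labeled training data

print("accuracy on unseen digits:", clf.score(X_test, y_test))
```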
Machine learning approaches:
Early classifications for machine learning approaches sometimes divided them into three broad categories, depending on the nature of the "signal" or "feedback" available to the learning system: supervised learning, unsupervised learning, and reinforcement learning.
History and relationships to other fields:
See also: Timeline of machine learning
The term machine learning was coined in 1959 by Arthur Samuel, an American IBMer and pioneer in the fields of computer gaming and artificial intelligence. A representative book of machine learning research during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification.
Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that a neural network learns to recognize 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.
Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."
This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".
Relation to artificial intelligence:
As a scientific endeavor, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis.
However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favor.
Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.
Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including Hopfield, Rumelhart and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.
Machine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and probability theory.
As of 2019, many sources continue to assert that machine learning remains a sub field of AI. Yet some practitioners, for example Dr Daniel Hulme, who both teaches AI and runs a company operating in the field, argues that machine learning and AI are separate.
Relation to data mining:
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases).
Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy.
Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge.
Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.
Relation to optimization:
Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples).
The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.
Relation to statistics:
Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns.
According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field.
Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein "algorithmic model" means more or less the machine learning algorithms like Random forest.
Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.
Theory:
Main articles: Computational learning theory and Statistical learning theory
A core objective of a learner is to generalize from its experience. Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set.
The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.
The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms.
Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error.
For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has under fitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer.
In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results. Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.
Click on any of the following blue hyperlinks for more about Machine Learning:
At Pew Research Center, we collect and analyze data in a variety of ways. Besides asking people what they think through surveys, we also regularly study things like images, videos and even the text of religious sermons.
In a digital world full of ever-expanding datasets like these, it’s not always possible for humans to analyze such vast troves of information themselves. That’s why our researchers have increasingly made use of a method called machine learning. Broadly speaking, machine learning uses computer programs to identify patterns across thousands or even millions of data points. In many ways, these techniques automate tasks that researchers have done by hand for years.
Our latest video explainer – part of our Methods 101 series – explains the basics of machine learning and how it allows researchers at the Center to analyze data on a large scale. To learn more about how we’ve used machine learning and other computational methods in our research, including the analysis mentioned in this video, you can explore recent reports from our Data Labs team.
[End of Article]
___________________________________________________________________________
Machine Learning (Wikipedia)
Machine learning (ML) is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.
Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.
Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning.
Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning. In its application across business problems, machine learning is also referred to as predictive analytics.
Overview:
Machine learning involves computers discovering how they can perform tasks without being explicitly programmed to do so. For simple tasks assigned to computers, it is possible to program algorithms telling the machine how to execute all steps required to solve the problem at hand; on the computer's part, no learning is needed.
For more advanced tasks, it can be challenging for a human to manually create the needed algorithms. In practice, it can turn out to be more effective to help the machine develop its own algorithm, rather than have human programmers specify every needed step.
The discipline of machine learning employs various approaches to help computers learn to accomplish tasks where no fully satisfactory algorithm is available. In cases where vast numbers of potential answers exist, one approach is to label some of the correct answers as valid. This can then be used as training data for the computer to improve the algorithm(s) it uses to determine correct answers. For example, to train a system for the task of digital character recognition, the MNIST dataset has often been used.
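To make this concrete, the following is a minimal illustrative sketch, assuming Python with the scikit-learn library. It uses scikit-learn's small built-in 8x8 "digits" dataset as a stand-in for the full MNIST set mentioned above, and logistic regression is only one of many classifiers that could be substituted.

# Minimal sketch (assumption: scikit-learn is installed; the small 8x8
# "digits" dataset stands in for the full MNIST dataset mentioned above).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                                  # labeled images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)                 # one of many possible classifiers
clf.fit(X_train, y_train)                               # "learning" from the labeled training data
print("accuracy on unseen digits:", clf.score(X_test, y_test))

The labeled examples act as the training data described above: the program is never told the recognition rules explicitly, yet its accuracy on digits it has not previously seen improves as a result of the examples.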
Machine learning approaches:
Early classifications for machine learning approaches sometimes divided them into three broad categories, depending on the nature of the "signal" or "feedback" available to the learning system. These were:
- Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
- Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
- Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that is analogous to rewards, which it tries to maximize.
- Other approaches or processes have since been developed that don't fit neatly into this three-fold categorization, and sometimes more than one is used by the same machine learning system, for example topic modeling, dimensionality reduction or meta-learning. As of 2020, deep learning has become the dominant approach for much ongoing work in the field of machine learning. (A brief illustrative sketch contrasting these categories follows this list.)
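The sketch below, assuming Python with NumPy and scikit-learn and using arbitrary synthetic data, contrasts the first two categories; reinforcement learning is noted only in a comment, since it requires an interactive environment rather than a fixed dataset.

# Illustrative sketch only; the data and the particular models are arbitrary choices.
import numpy as np
from sklearn.linear_model import LinearRegression    # supervised: example inputs and desired outputs
from sklearn.cluster import KMeans                   # unsupervised: inputs only, no labels

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

supervised = LinearRegression().fit(X, y)              # learns a general rule mapping inputs to outputs
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)  # finds structure in the inputs on its own

print(supervised.coef_)           # approximately [3, -2]: the recovered mapping
print(unsupervised.labels_[:10])  # cluster assignments discovered without any labels
# Reinforcement learning differs again: an agent acts in an environment and
# improves from reward feedback rather than from a fixed labeled dataset.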
History and relationships to other fields:
See also: Timeline of machine learning
The term machine learning was coined in 1959 by Arthur Samuel, an American IBMer and pioneer in the field of computer gaming and artificial intelligence. A representative book of machine learning research during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification.
Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that a neural network learns to recognize 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.
Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." For instance, for a program that learns to play checkers, T is playing checkers, P is the fraction of games it wins, and E is the experience gained by playing practice games.
This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".
Relation to artificial intelligence:
As a scientific endeavor, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis.
However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favor.
Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.
Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including Hopfield, Rumelhart and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.
Machine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and probability theory.
As of 2019, many sources continue to assert that machine learning remains a subfield of AI. Yet some practitioners, such as Dr. Daniel Hulme, who both teaches AI and runs a company operating in the field, argue that machine learning and AI are separate.
Relation to data mining:
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases).
Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy.
Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge.
Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.
Relation to optimization:
Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples).
The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.
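A minimal sketch of this distinction, assuming only Python with NumPy and synthetic data, is shown below: gradient descent drives down the loss on the training set, while the quantity machine learning ultimately cares about is the loss measured on samples that were held out from training.

# Sketch under stated assumptions (NumPy only; the data and step size are arbitrary).
import numpy as np

rng = np.random.default_rng(1)
X_train, X_test = rng.normal(size=(80, 3)), rng.normal(size=(40, 3))
true_w = np.array([1.5, -2.0, 0.5])
y_train = X_train @ true_w + rng.normal(scale=0.1, size=80)
y_test = X_test @ true_w + rng.normal(scale=0.1, size=40)

w = np.zeros(3)
for _ in range(500):                                   # plain gradient descent on the training loss
    grad = 2 * X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= 0.05 * grad

train_loss = np.mean((X_train @ w - y_train) ** 2)     # what the optimizer minimizes
test_loss = np.mean((X_test @ w - y_test) ** 2)        # what generalization is concerned with
print("training loss:", train_loss, "held-out loss:", test_loss)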
Relation to statistics:
Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns.
According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field.
Leo Breiman distinguished two statistical modeling paradigms: the data model and the algorithmic model, wherein "algorithmic model" refers, more or less, to machine learning algorithms such as random forests.
Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.
Theory:
Main articles: Computational learning theory and Statistical learning theory
A core objective of a learner is to generalize from its experience. Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set.
The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.
The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms.
Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error.
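For squared-error loss, and under the usual assumption that observations satisfy y = f(x) + ε with zero-mean noise of variance σ², one standard form of this decomposition (written here in LaTeX notation) is:

\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible error}}

Here the expectation is taken over possible training sets (and noise), f is the underlying function and \hat{f} the learned predictor; more flexible models typically lower the bias term while raising the variance term.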
For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer.
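The following sketch, assuming Python with NumPy and scikit-learn and using an arbitrary noisy sine curve as the underlying function, illustrates this pattern: as the polynomial degree grows, the training error keeps shrinking, but the error on fresh points first falls and then rises once the model overfits.

# Sketch under stated assumptions; the degrees and sample sizes are arbitrary illustrations.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(-1, 1, size=40))[:, None]
y = np.sin(3 * x).ravel() + rng.normal(scale=0.1, size=40)   # underlying function plus noise
x_new = np.linspace(-1, 1, 200)[:, None]                     # fresh, unseen inputs
y_new = np.sin(3 * x_new).ravel()

for degree in (1, 4, 25):                                    # too simple, about right, too complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(x, y)
    print(degree,
          mean_squared_error(y, model.predict(x)),           # training error keeps decreasing...
          mean_squared_error(y_new, model.predict(x_new)))   # ...error on new points eventually rises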
In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results. Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.
Click on any of the following blue hyperlinks for more about Machine Learning:
- Approaches
- Applications
- Limitations
- Model assessments
- Ethics
- Software
- Journals
- Conferences
- See also:
- Automated machine learning (AutoML) – the process of automating the end-to-end process of applying machine learning
- Big data – Information assets characterized by such a high volume, velocity, and variety to require specific technology and analytical methods for its transformation into value
- Explanation-based learning
- Important publications in machine learning – Wikimedia list article
- List of datasets for machine learning research
- Predictive analytics – Statistical techniques analyzing facts to make predictions about unknown events
- Quantum machine learning
- Machine-learning applications in bioinformatics
- Seq2seq
- Fairness (machine learning)
- International Machine Learning Society
- mloss is an academic database of open-source machine learning software.
- Machine Learning Crash Course by Google. This is a free course on machine learning through the use of TensorFlow.
Competitions and Prizes in Artificial Intelligence
- YouTube Video: the David E. Rumelhart Prize
- YouTube Video: International Rank 1 holder in Olympiad shared his experience of studies with askIITians
- YouTube Video: 2020 Vision Product of the Year Award Winner Video: Morpho (AI Software and Algorithms)
There are a number of competitions and prizes to promote research in artificial intelligence.
General machine intelligence:
The David E. Rumelhart prize is an annual award for making a "significant contemporary contribution to the theoretical foundations of human cognition". The prize is $100,000.
The Human-Competitive Award is an annual challenge started in 2004 to reward results "competitive with the work of creative and inventive humans". The prize is $10,000. Entries are required to use evolutionary computing.
The IJCAI Award for Research Excellence is a biannual award given at the IJCAI conference to a researcher in artificial intelligence in recognition of the excellence of their career.
The 2011 Federal Virtual World Challenge, advertised by The White House and sponsored by the U.S. Army Research Laboratory's Simulation and Training Technology Center, held a competition offering a total of $52,000 USD in cash prize awards for general artificial intelligence applications, including "adaptive learning systems, intelligent conversational bots, adaptive behavior (objects or processes)" and more.
The Machine Intelligence Prize is awarded annually by the British Computer Society for progress towards machine intelligence.
The Kaggle - "the world's largest community of data scientists compete to solve most valuable problems".
Conversational behavior:
The Loebner Prize is an annual competition to determine the best Turing test competitors. The winner is the computer system that, in the judges' opinions, demonstrates the "most human" conversational behavior; there is an additional prize for a system that, in the judges' opinion, passes a Turing test. This second prize has not yet been awarded.
Automatic control:
Pilotless aircraft:
The International Aerial Robotics Competition is a long-running event begun in 1991 to advance the state of the art in fully autonomous air vehicles. This competition is restricted to university teams (although industry and governmental sponsorship of teams is allowed).
Key to this event is the creation of flying robots which must complete complex missions without any human intervention. Successful entries are able to interpret their environment and make real-time decisions based only on a high-level mission directive (e.g., "find a particular target inside a building having certain characteristics which is among a group of buildings 3 kilometers from the aerial robot launch point").
In 2000, a $30,000 prize was awarded during the 3rd Mission (search and rescue), and in 2008, $80,000 in prize money was awarded at the conclusion of the 4th Mission (urban reconnaissance).
Driverless cars:
The DARPA Grand Challenge is a series of competitions to promote driverless car technology, created in response to a congressional mandate stating that by 2015 one-third of the operational ground combat vehicles of the US Armed Forces should be unmanned.
While the first race had no winner, the second awarded a $2 million prize for the autonomous navigation of a hundred-mile trail, using GPS, computers and a sophisticated array of sensors.
In November 2007, DARPA introduced the DARPA Urban Challenge, a sixty-mile urban area race requiring vehicles to navigate through traffic.
In November 2010 the US Armed Forces extended the competition with the $1.6 million prize Multi Autonomous Ground-robotic International Challenge to consider cooperation between multiple vehicles in a simulated-combat situation.
Roborace will be a global motorsport championship with autonomously driving, electrically powered vehicles. The series will be run as a support series during the Formula E championship for electric vehicles. This will be the first global championship for driverless cars.
Data-mining and prediction:
The Netflix Prize was a competition for the best collaborative filtering algorithm that predicts user ratings for films, based on previous ratings. The competition was held by Netflix, an online DVD-rental service. The prize was $1,000,000.
The Pittsburgh Brain Activity Interpretation Competition will reward analysis of fMRI data "to predict what individuals perceive and how they act and feel in a novel Virtual Reality world involving searching for and collecting objects, interpreting changing instructions, and avoiding a threatening dog." The prize in 2007 was $22,000.
The Face Recognition Grand Challenge (May 2004 to March 2006) aimed to promote and advance face recognition technology.
The American Meteorological Society's artificial intelligence competition involves learning a classifier to characterize precipitation based on meteorological analyses of environmental conditions and polarimetric radar data.
Cooperation and coordination:
Robot football:
The RoboCup and FIRA are annual international robot soccer competitions. The International RoboCup Federation's stated challenge is that by 2050 "a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup."
Logic, reasoning and knowledge representation:
The Herbrand Award is a prize given by CADE Inc. to honour persons or groups for important contributions to the field of automated deduction. The prize is $1000.
The CADE ATP System Competition (CASC) is a yearly competition of fully automated theorem provers for classical first order logic associated with the CADE and IJCAR conferences. The competition was part of the Alan Turing Centenary Conference in 2012, with total prizes of 9000 GBP given by Google.
The SUMO prize is an annual prize for the best open source ontology extension of the Suggested Upper Merged Ontology (SUMO), a formal theory of terms and logical definitions describing the world. The prize is $3000.
The Hutter Prize for Lossless Compression of Human Knowledge is a cash prize which rewards compression improvements on a specific 100 MB English text file. The prize awards 500 euros for each one percent improvement, up to €50,000. The organizers believe that text compression and AI are equivalent problems; three prizes, of around €2,000 each, have already been awarded.
The Cyc TPTP Challenge is a competition to develop reasoning methods for the Cyc comprehensive ontology and database of everyday common sense knowledge. The prize is 100 euros for "each winner of two related challenges".
The Eternity II challenge was a constraint satisfaction problem very similar to the Tetravex game. The objective is to lay 256 tiles on a 16x16 grid while satisfying a number of constraints. The problem is known to be NP-complete. The prize was US$2,000,000. The competition ended in December 2010.
Games:
The World Computer Chess Championship has been held since 1970. The International Computer Games Association continues to hold an annual Computer Olympiad which includes this event plus computer competitions for many other games.
The Ing Prize was a substantial money prize attached to the World Computer Go Congress, starting from 1985 and expiring in 2000. It was a graduated set of handicap challenges against young professional players with increasing prizes as the handicap was lowered. At the time it expired in 2000, the unclaimed prize was 400,000 NT dollars for winning a 9-stone handicap match.
The AAAI General Game Playing Competition is a competition to develop programs that are effective at general game playing. Given a definition of a game, the program must play it effectively without human intervention. Since the game is not known in advance, the competitors cannot tailor their programs to a particular scenario. The prize in 2006 and 2007 was $10,000.
The General Video Game AI Competition (GVGAI) poses the problem of creating artificial intelligence that can play a wide, and in principle unlimited, range of games.
Concretely, it tackles the problem of devising an algorithm that is able to play any game it is given, even if the game is not known a priori. Additionally, the contest poses the challenge of creating level and rule generators for any game it is given.
This area of study can be seen as an approximation of General Artificial Intelligence, with very little room for game-dependent heuristics. The competition runs yearly in different tracks:
- single player planning,
- two-player planning,
- single player learning,
- level and rule generation,
- and each track awards prizes ranging from 200 to 500 US dollars to winners and runners-up.
The 2007 Ultimate Computer Chess Challenge was a competition organized by the World Chess Federation that pitted Deep Fritz against Deep Junior. The prize was $100,000.
The annual Arimaa Challenge offered a $10,000 prize until the year 2020 to develop a program that plays the board game Arimaa and defeats a group of selected human opponents. In 2015, David Wu's bot bot_sharp beat the humans, losing only 2 games out of 9. As a result, the Arimaa Challenge was declared over and David Wu received the prize of $12,000 ($2,000 being offered by third-parties for 2015's championship).
2K Australia offered a prize worth A$10,000 for a game-playing bot that plays a first-person shooter. The aim was to convince a panel of judges that it was actually a human player. The competition started in 2008 and was won in 2012; a new competition was planned for 2014.
The Google AI Challenge was a bi-annual online contest organized by the University of Waterloo Computer Science Club and sponsored by Google that ran from 2009 to 2011. Each year a game was chosen and contestants submitted specialized automated bots to play against other competing bots.
Cloudball had its first round in spring 2012, finishing on June 15. It is an international artificial intelligence programming contest in which users continuously submit, in simple, high-level C# code, the actions their soccer teams will take at each time step.
See also:
Nuance Communications and AI
- YouTube Video: Microsoft Acquires Nuance
- YouTube Video: Nuance AI Expertise: The Future of Voice Technology
- YouTube Video: Wheelhouse CIO discusses Microsoft's Nuance acquisition
Click Here for Nuance Web Site
Nuance Offers AI Solutions for the following:
___________________________________________________________________________
Nuance (Wikipedia)
Nuance is an American multinational computer software technology corporation, headquartered in Burlington, Massachusetts, on the outskirts of Boston, that provides speech recognition and artificial intelligence.
Nuance merged with its competitor in the commercial large-scale speech application business, ScanSoft, in October 2005. ScanSoft was a Xerox spin-off that was bought in 1999 by Visioneer, a hardware and software scanner company, which adopted ScanSoft as the new merged company name. The original ScanSoft had its roots in Kurzweil Computer Products.
In April 2021, Microsoft announced it would buy Nuance Communications. The deal is an all-cash transaction of $19.7 billion, including the company's debt, or $56 a share.
History:
The company that would become Nuance was incorporated in 1992 as Visioneer. In 1999, Visioneer acquired ScanSoft, Inc. (SSFT), and the combined company became known as ScanSoft.
In September 2005, ScanSoft Inc. acquired and merged with Nuance Communications, a natural language spinoff from SRI International. The resulting company adopted the Nuance name. During the prior decade, the two companies competed in the commercial large-scale speech application business.
ScanSoft origins:
In 1974, Raymond Kurzweil founded Kurzweil Computer Products, Inc. to develop the first omni-font optical character-recognition system – a computer program capable of recognizing text written in any normal font.
In 1980, Kurzweil sold his company to Xerox. The company became known as Xerox Imaging Systems (XIS), and later ScanSoft.
In March 1992, a new company called Visioneer, Inc. was founded to develop scanner hardware and software products, such as a sheetfed scanner called PaperMax and the document management software PaperPort.
Visioneer eventually sold its hardware division to Primax Electronics, Ltd. in January 1999. Two months later, in March, Visioneer acquired ScanSoft from Xerox to form a new public company with ScanSoft as the new company-wide name.
Prior to 2001, ScanSoft focused primarily on desktop imaging software such as TextBridge, PaperPort and OmniPage. Beginning with the December 2001 acquisition of Lernout & Hauspie, the company moved into the speech recognition business and began to compete with Nuance.
Lernout & Hauspie had acquired speech recognition company Dragon Systems in June 2001, shortly before becoming bankrupt in October.
Partnership with Siri and Apple Inc.
Siri is an application that combines speech recognition with advanced natural-language processing. The artificial intelligence, which required advances both in the underlying algorithms and in processing power on mobile devices and the servers that share the workload, allows software to understand words and their intentions.
Acquisitions:
Prior to the 2005 merger, ScanSoft acquired other companies to expand its business. Unlike ScanSoft, Nuance did not actively acquire companies prior to their merger other than the notable acquisition of Rhetorical Systems in November 2004 for $6.7 million. After the merger, the company continued to grow through acquisition.
ScanSoft merges with Nuance; changes company-wide name to Nuance Communications, Inc.:
- September 15, 2005 — ScanSoft acquired and merged with Nuance Communications, of Menlo Park, California, for $221 million.
- October 18, 2005 — the company changed its name to Nuance Communications, Inc.
- March 31, 2006 — Dictaphone Corporation, of Stratford, Connecticut, for $357 million.
- December 29, 2006 — Mobile Voice Control, Inc. of Mason, Ohio.
- March 2007 — Focus Informatics, Inc. Woburn, Massachusetts.
- March 26, 2007 — Bluestar Resources Ltd.
- April 24, 2007 — BeVocal, Inc. of Mountain View, California, for $140 million.
- August 24, 2007 — VoiceSignal Technologies, Inc. of Woburn, Massachusetts.
- August 24, 2007 — Tegic Communications, Inc. of Seattle, Washington, for $265 million. Tegic developed and was the patent owner of T9 technology.
- September 28, 2007 — Commissure, Inc. of New York City, New York, for 217,975 shares of common stock.
- November 2, 2007 — Vocada, Inc. of Dallas, Texas.
- November 26, 2007 — Viecore, Inc. of Mahwah, New Jersey.
- November 26, 2007 — Viecore, FSD. of Eatontown, New Jersey. It was sold to EOIR in 2013.
- May 20, 2008 — eScription, Inc. of Needham, Massachusetts, for $340 million plus 1,294,844 shares of common stock.
- July 31, 2008 — MultiVision Communications Inc. of Markham, Ontario.
- September 26, 2008 — Philips Speech Recognition Systems GmbH (PSRS), a business unit of Royal Philips Electronics based in Vienna, Austria, for about €66 million (US$96.1 million). The acquisition sparked an antitrust investigation by the US Department of Justice, focused on medical transcription services; the investigation was closed in December 2009.
- October 1, 2008 — SNAPin Software, Inc. of Bellevue, Washington — $180 million in shares of common stock.
- January 15, 2009 — Nuance acquired rights to IBM's speech technology patents.
- April 10, 2009 — Zi Corporation of Calgary, Alberta, Canada for approximately $35 million in cash and common stock.
- May 2009 — the speech technology department of Harman International Industries.
- July 14, 2009 — Jott Networks Inc. of Seattle, Washington.
- September 18, 2009 — nCore Ltd. of Oulu, Finland.
- October 5, 2009 — eCopy of Nashua, New Hampshire. Under the terms of the agreement, net consideration was approximately $54 million in Nuance common stock.
- December 30, 2009 — Spinvox of Marlow, UK for $102.5m comprising $66m in cash and $36.5m in stock.
- February 16, 2010 — Nuance announced they acquired MacSpeech for an undisclosed amount.
- February 2010 — Nuance acquired Language and Computing, Inc., a provider of natural language processing and natural language understanding technology solutions, from Gimv NV, a Belgium-based private equity firm.
- July 2010 — Nuance acquired iTa P/L, an Australian IVR and speech services company.
- November 2010 — Nuance acquired PerSay, a voice biometrics-based authentication company for $12.6 million.
- February 2011 — Nuance acquired Noterize, an Australian company producing software for the Apple iPad.
- June 2011 — Nuance acquired Equitrac, the world leader in print management and cost recovery software.
- June 2011 — Nuance acquired SVOX, a speech technology company specializing in the automotive, mobile, and consumer electronics markets.
- July 2011 — Nuance acquired Webmedx, a provider of medical transcription and editing services. Financial terms of the deal were not disclosed.
- August 2011 — Nuance acquired Loquendo for 53 million euros. Loquendo provided a range of speech technologies for telephony, mobile, automotive, embedded and desktop solutions, including text-to-speech (TTS), automatic speech recognition (ASR) and voice biometrics.
- October, 2011 — Nuance acquired Swype, a company that produces input software for touchscreen displays, for more than $100 million.
- December 2011 — Nuance acquired Vlingo, after repeatedly suing Vlingo over patent infringement. The Cambridge-based Vlingo was trying to make voice enabling applications easier, by using their own speech-to-text J2ME/Brew application API.
- April 2012 — Nuance acquired Transcend Services. Transcend utilizes a combination of its proprietary Internet-based voice and data distribution technology, customer based technology, and home-based medical language specialists to convert physicians' voice recordings into electronic documents. It also provides outsourcing transcription and editing services on the customer's platform.
- June 2012 — Nuance acquired SafeCom, a provider of print management and cost recovery software noted for their integration with Hewlett-Packard printing devices.
- September 2012 — Nuance acquired Ditech Networks for $22.5 million.
- September 2012 — Nuance acquired Quantim, QuadraMed's HIM Business — a provider of information technology solutions for the healthcare industry.
- October 2012 — Nuance acquired J.A. Thomas and Associates (JATA) — a provider of physician-oriented, clinical documentation improvement (CDI) programs for the healthcare industry.
- November 2012 — Nuance acquired Accentus.
- January 2013 — Nuance acquired VirtuOz.
- April 2013 — Nuance acquired Copitrak.
- May 2013 — Nuance acquired Tweddle Connect business for $80 million from Tweddle Group.
- July 2013 — Nuance acquired Cognition Technologies Inc.
- October 2013 — Nuance acquired Varolii (formerly Par3 Communications).
- July 2014 — Nuance acquired Accelarad (formerly known as Neurostar Solutions), makers of SeeMyRadiology, a cloud-based medical image and report exchange network. Accelarad was based in Atlanta, Georgia, with a sales operations office in Birmingham, Alabama.
- June, 2016 — Nuance acquired TouchCommerce, a leader in digital customer service and intelligent engagement solutions with a specialization in live chat.
- August, 2016 — Nuance acquired Montage Healthcare Solutions.
- February 2017 — Nuance acquired mCarbon, a mobile value-added services provider, for $36 million.
- January 2018 — Nuance acquired iScribes, a medical documentation solutions provider.
- May 2018 — Nuance acquired Voicebox, an early leader in speech recognition and natural language technologies, for $82 million.
- February 8, 2021 — Nuance acquired Saykara.
Acquisition of Nuance Document Imaging by Kofax Inc.:
On February 1, 2019, Kofax Inc. announced the closing of its acquisition of Nuance Communications' Document Imaging Division. By means of this acquisition, Kofax gained Nuance's Power PDF, PaperPort document management, and OmniPage optical character recognition software applications.
Kofax also acquired Copitrak in the closing.
Acquisition by Microsoft:
On April 12, 2021, Microsoft announced that it would buy Nuance Communications for $19.7 billion, or $56 a share, a 22% increase over the previous closing price. Nuance's CEO, Mark Benjamin, will stay with the company. This will be Microsoft's second-biggest deal ever, after its purchase of LinkedIn for $26.2 billion in 2016.
Artificial intelligence in government
- YouTube Video: Artificial Intelligence in Government
- YouTube Video: Artificial intelligence: Should AI be regulated by governments?
- YouTube Video: The Power of Artificial Intelligence for Government
Artificial intelligence (AI) has a range of uses in government. It can be used to further public policy objectives (in areas such as emergency services, health and welfare), as well as assist the public to interact with the government (through the use of virtual assistants, for example).
According to the Harvard Business Review, "Applications of artificial intelligence to the public sector are broad and growing, with early experiments taking place around the world."
Hila Mehr from the Ash Center for Democratic Governance and Innovation at Harvard University notes that AI in government is not new, with postal services using machine methods in the late 1990s to recognize handwriting on envelopes to automatically route letters.
The use of AI in government comes with significant benefits, including efficiencies resulting in cost savings (for instance by reducing the number of front office staff), and reducing the opportunities for corruption. However, it also carries risks.
Uses of AI in government:
The potential uses of AI in government are wide and varied, with Deloitte considering that "Cognitive technologies could eventually revolutionize every facet of government operations". Mehr suggests that six types of government problems are appropriate for AI applications:
- Resource allocation - such as where administrative support is required to complete tasks more quickly.
- Large datasets - where these are too large for employees to work efficiently and multiple datasets could be combined to provide greater insights.
- Expert shortage - where basic questions could be answered automatically and niche issues can be learned from existing material.
- Predictable scenario - historical data makes the situation predictable.
- Procedural - repetitive tasks where inputs or outputs have a binary answer.
- Diverse data - where data takes a variety of forms (such as visual and linguistic) and needs to be summarized regularly.
Mehr states that "While applications of AI in government work have not kept pace with the rapid expansion of AI in the private sector, the potential use cases in the public sector mirror common applications in the private sector."
Potential and actual uses of AI in government can be divided into three broad categories: those that contribute to public policy objectives; those that assist public interactions with the government; and other uses.
Contributing to public policy objectives:
There is a range of examples of how AI can contribute to public policy objectives. These include:
- Receiving benefits at job loss, retirement, bereavement and childbirth almost immediately, in an automated way (without requiring any action from citizens)
- Social insurance service provision
- Classifying emergency calls based on their urgency (like the system used by the Cincinnati Fire Department in the United States)
- Detecting and preventing the spread of diseases
- Assisting public servants in making welfare payments and immigration decisions
- Adjudicating bail hearings
- Triaging health care cases
- Monitoring social media for public feedback on policies
- Monitoring social media to identify emergency situations
- Identifying fraudulent benefits claims
- Predicting a crime and recommending optimal police presence
- Predicting traffic congestion and car accidents
- Anticipating road maintenance requirements
- Identifying breaches of health regulations
- Providing personalised education to students
- Marking exam papers
- Assisting with defence and national security (see Artificial intelligence § Military and Applications of artificial intelligence § Other respectively).
- Providing symptom-based health chatbots for preliminary diagnosis
Assisting public interactions with government:
AI can be used to assist members of the public to interact with government and access government services, for example by:
- Answering questions using virtual assistants or chatbots (see below)
- Directing requests to the appropriate area within government
- Filling out forms
- Assisting with searching documents (e.g. IP Australia’s trade mark search)
- Scheduling appointments
Examples of virtual assistants or chatbots being used by government include the following:
- Launched in February 2016, the Australian Taxation Office has a virtual assistant on its website called "Alex". As at 30 June 2017, Alex could respond to more than 500 questions, had engaged in 1.5 million conversations and resolved over 81% of enquiries at first contact.
- Australia's National Disability Insurance Scheme (NDIS) is developing a virtual assistant called "Nadia" which takes the form of an avatar using the voice of actor Cate Blanchett. Nadia is intended to assist users of the NDIS to navigate the service. Costing some $4.5 million, the project has been postponed following a number of issues. Nadia was developed using IBM Watson; however, the Australian Government is considering other platforms, such as Microsoft Cortana, for its further development.
- The Australian Government's Department of Human Services uses virtual assistants on parts of its website to answer questions and encourage users to stay in the digital channel. As at December 2018, a virtual assistant called "Sam" could answer general questions about family, job seeker and student payments and related information. The Department also introduced an internally-facing virtual assistant called "MelissHR" to make it easier for departmental staff to access human resources information.
- Estonia is building a virtual assistant which will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, unemployment, ...). One example is the automated registering of babies when they are born.
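These assistants generally work by matching a user's question to a known intent and returning a prepared answer, escalating to a human when no intent fits well enough. The sketch below illustrates that pattern in Python with a hypothetical three-entry FAQ and simple string similarity; it is not based on any agency's actual system.

```python
# Minimal sketch of the intent-matching pattern behind simple government
# virtual assistants (hypothetical questions and answers, not any agency's
# actual system).
from difflib import SequenceMatcher

FAQ = {
    "how do i lodge my tax return": "You can lodge online through the tax office portal.",
    "how do i update my address": "Sign in to your account and edit your contact details.",
    "what payments am i eligible for": "Use the payment finder to check eligibility.",
}

def best_answer(question: str, threshold: float = 0.5) -> str:
    """Return the stored answer whose question is most similar to the input."""
    question = question.lower().strip("?! .")
    scored = [
        (SequenceMatcher(None, question, known).ratio(), answer)
        for known, answer in FAQ.items()
    ]
    score, answer = max(scored)
    # Fall back to a human channel when no intent matches well enough.
    return answer if score >= threshold else "Let me transfer you to a staff member."

print(best_answer("How do I update my address?"))
```

Production assistants replace the string matching with trained language models and dialogue management, but the escalate-to-a-human fallback is a common design choice.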
Other uses:
Other uses of AI in government include:
- Translation
- Drafting documents
Potential benefits:
AI offers potential efficiencies and cost savings for government. For example, Deloitte has estimated that automation could save US Government employees between 96.7 million and 1.2 billion hours a year, resulting in potential savings of between $3.3 billion and $41.1 billion a year.
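A quick back-of-the-envelope check shows the two ends of that estimate are internally consistent: dividing the projected dollar savings by the projected hours saved implies a value of roughly $34 per employee-hour in both the low and high cases.

```python
# Back-of-the-envelope check of the Deloitte estimate quoted above:
# dividing projected dollar savings by projected hours saved gives the
# implied value of one employee-hour.
low_hours, high_hours = 96.7e6, 1.2e9        # hours saved per year
low_savings, high_savings = 3.3e9, 41.1e9    # dollars saved per year

print(f"Implied $/hour (low case):  {low_savings / low_hours:.2f}")   # ~34.13
print(f"Implied $/hour (high case): {high_savings / high_hours:.2f}") # ~34.25
```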
The Harvard Business Review has stated that while this may lead a government to reduce employee numbers, "Governments could instead choose to invest in the quality of its services. They can re-employ workers’ time towards more rewarding work that requires lateral thinking, empathy, and creativity — all things at which humans continue to outperform even the most sophisticated AI program."
Potential risks:
Potential risks associated with the use of AI in government include AI becoming susceptible to bias, a lack of transparency in how an AI application may make decisions, and unclear accountability for any such decisions.
See also:
- AI for Good
- Applications of artificial intelligence
- Artificial intelligence
- Government by algorithm
- Lawbot
- Regulation of algorithms
- Regulation of artificial intelligence
Artificial Intelligence in Heavy Industry
- YouTube Video: Power Industry 4.0 with Artificial Intelligence
- YouTube Video: Applications of AI across Industries
- YouTube Video: Artificial Intelligence (AI) | Robotics | Robots | Machine Learning | March of the Machines
Artificial intelligence, in modern terms, generally refers to computer systems that mimic human cognitive functions such as independent learning and problem-solving. While this type of general artificial intelligence has not yet been achieved, most contemporary artificial intelligence projects are better understood as machine-learning algorithms that can be applied to existing data to understand, categorize, and adapt sets of data without the need for explicit programming.
AI-driven systems can identify patterns and trends, uncover inefficiencies, and predict future outcomes based on historical data, which ultimately enables informed decision-making. As such, they are potentially beneficial for many industries, notably heavy industry.
While the application of artificial intelligence in heavy industry is still in its early stages, applications are likely to include optimization of asset management and operational performance, as well as identifying efficiencies and decreasing downtime.
Potential benefits:
AI-driven machines can make manufacturing easier, along with many other benefits, at each new stage of advancement. The technology creates new potential for task automation while making interaction between humans and machines more intelligent. Some benefits of AI include directed automation, 24/7 production, safer operational environments, and reduced operating costs.
Directed automation:
AI and robots can execute repetitive actions with very low error rates and enable more efficient production models through automation. They can also reduce human error and deliver higher levels of quality assurance with little supervision.
24/7 production:
While humans must work in shifts to accommodate sleep and mealtimes, robots can keep a production line running continuously. Businesses can expand their production capabilities and meet higher demands for products from global customers due to boosted production from this round-the-clock work performance.
Safer operational environment:
More AI means fewer human laborers performing dangerous and strenuous work. Logically speaking, with fewer humans and more robots performing activities associated with risk, the number of workplace accidents should dramatically decrease. It also offers a great opportunity for exploration because companies do not have to risk human life.
Condensed operating costs:
With AI taking over day-to-day activities, a business can have considerably lower operating costs: rather than employing humans to work in shifts, it could invest in AI.
After the machinery is purchased and commissioned, the main ongoing cost is maintenance.
Environmental impacts:
Self-driving cars are potentially beneficial to the environment. They can be programmed to navigate the most efficient route and reduce idle time, which could result in less fossil fuel consumption and greenhouse gas (GHG) emissions. The same could be said for heavy machinery used in heavy industry. AI can accurately follow a sequence of procedures repeatedly, whereas humans are prone to occasional errors.
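The routing claim ultimately comes down to a shortest-path computation. The toy sketch below, with hypothetical site names and distances, shows the kind of calculation an automated vehicle or dispatch system might run to minimise distance travelled.

```python
# Minimal shortest-route sketch (Dijkstra's algorithm) over a toy road
# network with hypothetical distances in kilometres; efficient routing is
# one way automated vehicles and machinery can cut fuel use.
import heapq

roads = {
    "depot": {"site_a": 4, "site_b": 9},
    "site_a": {"site_b": 3, "quarry": 8},
    "site_b": {"quarry": 2},
    "quarry": {},
}

def shortest_distance(graph, start, goal):
    queue = [(0, start)]
    best = {start: 0}
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist
        for neighbour, weight in graph[node].items():
            new_dist = dist + weight
            if new_dist < best.get(neighbour, float("inf")):
                best[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return float("inf")

print(shortest_distance(roads, "depot", "quarry"))  # 9, via site_a -> site_b
```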
Additional benefits of AI:
AI and industrial automation have advanced considerably over the years. Many new techniques and innovations have emerged, such as advances in sensors and increases in computing capability.
AI helps machines gather and extract data, identify patterns, adapt to new trends through machine intelligence, learning, and speech recognition. It also helps to make quick data-driven decisions, advance process effectiveness, minimize operational costs, facilitate product development, and enable extensive scalability.
Potential negatives:
High cost:
Though the cost has been decreasing in the past few years, individual development expenditures can still be as high as $300,000 for basic AI. Small businesses with a low capital investment may have difficulty generating the funds necessary to leverage AI.
For larger companies, the price of AI may be higher, depending on how much AI is involved in the process. Because of higher costs, the feasibility of leveraging AI becomes a challenge for many companies. Nevertheless, the cost of utilizing AI can be cheaper for companies with the advent of open-source artificial intelligence software.
Reduced employment opportunities:
Job opportunities will grow with the advent of AI; however, some jobs might be lost because AI would replace them. Any job that involves repetitive tasks is at risk of being replaced.
In 2017, Gartner predicted that 500,000 jobs would be created because of AI, but also that up to 900,000 jobs could be lost to it. These figures apply to jobs within the United States only.
AI decision-making:
AI is only as intelligent as the individuals responsible for its initial programming. In 2014, during an active shooter situation, people called Uber to escape the shooting and surrounding area. Instead of recognizing a dangerous situation, Uber's algorithm saw a rise in demand and increased its prices. This type of failure can be dangerous in heavy industry, where one mistake can cost lives or cause injury.
Environmental impacts:
Only 20 percent of electronic waste was recycled in 2016, despite 67 nations having enacted e-waste legislation. Electronic waste was expected to reach 52.2 million tons in 2021. The manufacture of digital devices and other electronics goes hand-in-hand with AI development, and the resulting waste is poised to damage the environment.
In September 2015, the German car company Volkswagen was embroiled in an international scandal. Software in its cars activated full controls on nitrogen oxide (NOx) emissions only when the cars were undergoing a sample test. Once the cars were on the road, the emission controls deactivated and NOx emissions increased by up to 40 times.
NOx gases are harmful because they cause significant health problems, including respiratory problems and asthma. Further studies have shown that additional emissions could cause over 1,200 premature deaths in Europe and result in $2.4 million worth of lost productivity.
AI trained to act on environmental variables might have erroneous algorithms, which can lead to potentially negative effects on the environment.
Algorithms trained on biased data will produce biased results. The COMPAS judicial decision support system is one such example of biased data producing unfair outcomes. When machines develop learning and decision-making ability that is not coded by a programmer, the mistakes can be hard to trace and see. As such, the management and scrutiny of AI-based processes are essential.
Effects of AI in the manufacturing industry:
Landing.ai, a startup founded by Andrew Ng, developed machine-vision tools that detect microscopic defects in products at resolutions well beyond human vision. The tools use a machine-learning algorithm trained on relatively small volumes of sample images. The computer not only 'sees' the defects but processes the information and learns from what it observes.
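The sketch below shows what a minimal binary "defect / no defect" image classifier can look like; it assumes TensorFlow/Keras and 128x128 grayscale product images, and is only illustrative of the general approach, not Landing.ai's actual tooling.

```python
# Minimal sketch of a binary defect classifier in the spirit of the
# machine-vision tools described above (assumes TensorFlow/Keras and
# 128x128 grayscale images; purely illustrative).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of a defect
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use a (possibly small) labelled set of product images, e.g.:
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```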
In 2014, China, Japan, the United States, the Republic of Korea and Germany together contributed to 70 percent of the total sales volume of robots. In the automotive industry, a sector with a particularly high degree of automation, Japan had the highest density of industrial robots in the world at 1,414 per 10,000 employees.
Generative design is a new process born from artificial intelligence. Designers or engineers specify design goals (as well as material parameters, manufacturing methods, and cost constraints) in generative design software. The software explores the space of feasible permutations and generates design alternatives, using machine learning to learn from each iteration which designs work and which fail. The process has been described as effectively renting 50,000 computers in the cloud for an hour.
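Commercial generative design tools are proprietary, but the core explore-and-score loop can be sketched in a few lines. The toy example below randomly samples hypothetical beam dimensions, scores each candidate with a deliberately simplified stiffness-versus-material model, and keeps the best feasible design.

```python
# Toy sketch of the explore-and-score loop behind generative design
# (hypothetical beam dimensions and a deliberately simplified cost/strength
# model; real tools use physics simulation and ML-guided search).
import random

def score(width_mm, height_mm):
    """Higher is better: reward stiffness, penalise material use."""
    stiffness = width_mm * height_mm ** 3          # crude stiffness proxy
    material = width_mm * height_mm                # cross-section area
    if material > 5000:                            # manufacturing constraint
        return float("-inf")
    return stiffness - 40 * material

best_design, best_score = None, float("-inf")
for _ in range(10_000):                            # explore many permutations
    candidate = (random.uniform(10, 200), random.uniform(10, 200))
    candidate_score = score(*candidate)
    if candidate_score > best_score:
        best_design, best_score = candidate, candidate_score

print(f"Best design found: width={best_design[0]:.1f} mm, height={best_design[1]:.1f} mm")
```

Real systems replace the random sampling and crude scoring with simulation and learned surrogate models, but the loop structure is the same.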
Artificial intelligence has gradually become widely adopted in the modern world. The technology behind AI personal assistants such as Siri has roots in military research dating back to 2003.
Artificial Intelligence (AI) in Healthcare
- YouTube Video: Adoption of AI and Machine Learning in Healthcare (GE)
- YouTube Video: The Advent of AI in Healthcare (Cleveland Clinic)
- YouTube Video: Is this the future of health? | The Economist
Artificial intelligence in healthcare is an overarching term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to mimic human cognition in the analysis, presentation, and comprehension of complex medical and health care data. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data.
What distinguishes AI technology from traditional technologies in health care is the ability to gather data, process it and give a well-defined output to the end-user. AI does this through machine learning algorithms and deep learning. These algorithms can recognize patterns in behavior and create their own logic.
To gain useful insights and predictions, machine learning models must be trained using extensive amounts of input data. AI algorithms behave differently from humans in two ways:
- algorithms are literal: once a goal is set, the algorithm learns exclusively from the input data and can only understand what it has been programmed to do,
- and some deep learning algorithms are black boxes: they can predict with extreme precision, but offer little to no comprehensible explanation for their decisions aside from the data and type of algorithm used.
The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes.
AI programs are applied to practices such as:
- diagnosis processes,
- treatment protocol development,
- drug development,
- personalized medicine,
- and patient monitoring and care.
AI algorithms can also be used to analyze large amounts of data through electronic health records for disease prevention and diagnosis. Medical institutions such as the following have developed AI algorithms for their departments:
- The Mayo Clinic,
- Memorial Sloan Kettering Cancer Center,
- and the British National Health Service.
Large technology companies such as IBM and Google have also developed AI algorithms for healthcare. Additionally, hospitals are looking to AI software to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy their staffing and workforce needs.
Currently, the United States government is investing billions of dollars to progress the development of AI in healthcare. Companies are developing technologies that help healthcare managers improve business operations through increasing utilization, decreasing patient boarding, reducing length of stay and optimizing staffing levels.
As widespread use of AI in healthcare is relatively new, there are several unprecedented ethical concerns related to its practice such as data privacy, automation of jobs, and representation biases.
History:
Research in the 1960s and 1970s produced the first problem-solving program, or expert system, known as Dendral. While it was designed for applications in organic chemistry, it provided the basis for a subsequent system MYCIN, considered one of the most significant early uses of artificial intelligence in medicine. MYCIN and other systems such as INTERNIST-1 and CASNET did not achieve routine use by practitioners, however.
The 1980s and 1990s brought the proliferation of the microcomputer and new levels of network connectivity. During this time, there was a recognition by researchers and developers that AI systems in healthcare must be designed to accommodate the absence of perfect data and build on the expertise of physicians. Approaches involving fuzzy set theory, Bayesian networks, and artificial neural networks, have been applied to intelligent computing systems in healthcare.
Medical and technological advancements occurring over this half-century period that have enabled the growth of healthcare-related applications of AI include:
- Improvements in computing power resulting in faster data collection and data processing
- Growth of genomic sequencing databases
- Widespread implementation of electronic health record systems
- Improvements in natural language processing and computer vision, enabling machines to replicate human perceptual processes
- Enhanced precision of robot-assisted surgery
- Improvements in deep learning techniques and data logs in rare diseases
Current research:
Various specialties in medicine have shown an increase in research regarding AI. During the novel coronavirus pandemic, the United States was estimated to invest more than $2 billion in AI-related healthcare research over five years, more than four times the amount spent in 2019 ($463 million).
Dermatology:
Dermatology is an imaging-abundant specialty, and the development of deep learning has been strongly tied to image processing. There is therefore a natural fit between dermatology and deep learning.
There are three main imaging types in dermatology: contextual images, macro images, and micro images. For each modality, deep learning has shown great progress:
- Han et al. showed keratinocytic skin cancer detection from face photographs.
- Esteva et al. demonstrated dermatologist-level classification of skin cancer from lesion images.
- Noyan et al. demonstrated a convolutional neural network that achieved 94% accuracy at identifying skin cells from microscopic Tzanck smear images.
Radiology:
AI is being studied within the radiology field to detect and diagnose diseases in patients through computerized tomography (CT) and magnetic resonance (MR) imaging. According to the Radiological Society of North America, the focus on artificial intelligence in radiology has rapidly increased in recent years, with AI-related work growing from essentially none to roughly 10% of total publications between 2015 and 2018.
A study at Stanford created an algorithm that could detect pneumonia in patients with a better average F1 score (a statistical metric based on precision and recall) than the radiologists involved in the trial. Through imaging in oncology, AI has been able to serve well for detecting abnormalities and monitoring change over time, two key factors in oncological health.
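For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall, as the short example below shows using made-up counts of true positives, false positives, and false negatives.

```python
# The F1 score is the harmonic mean of precision and recall, computed here
# from illustrative (made-up) counts of a pneumonia detector's predictions.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)        # recall is also called sensitivity
    return 2 * precision * recall / (precision + recall)

print(f1_score(tp=80, fp=20, fn=10))  # precision 0.80, recall ~0.89, F1 ~0.84
```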
Many companies and vendor neutral systems such as icometrix, QUIBIM, Robovision, and UMC Utrecht’s IMAGRT have become available to provide a trainable machine learning platform to detect a wide range of diseases.
The Radiological Society of North America has implemented presentations on AI in imaging during its annual conference. Many professionals are optimistic about the future of AI processing in radiology, as it will cut down on needed interaction time and allow doctors to see more patients.
Although not always as good as a trained eye at distinguishing malignant from benign growths, the history of medical imaging shows a trend toward rapid advancement in both capability and reliability of new systems. The emergence of AI technology in radiology is perceived as a threat by some specialists, as it can improve on certain statistical metrics in isolated cases where specialists cannot.
Screening:
Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.
In 2018, a paper published in the journal Annals of Oncology mentioned that skin cancer could be detected more accurately by an artificial intelligence system (which used a deep learning convolutional neural network) than by dermatologists. On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the CNN machine.
In January 2020, researchers demonstrated an AI system, based on a Google DeepMind algorithm, capable of surpassing human experts in breast cancer detection.
In July 2020, it was reported that an AI algorithm developed by the University of Pittsburgh achieved the highest accuracy to date in identifying prostate cancer, with 98% sensitivity and 97% specificity.
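To make those two figures concrete, the sketch below works through what 98% sensitivity and 97% specificity would mean for 1,000 screened patients under a hypothetical 10% prevalence (the prevalence is assumed purely for illustration).

```python
# What 98% sensitivity and 97% specificity imply for a screened group,
# assuming a hypothetical prevalence of 10% (for illustration only).
patients = 1000
prevalence = 0.10
sensitivity, specificity = 0.98, 0.97

with_cancer = patients * prevalence
without_cancer = patients - with_cancer

missed_cancers = with_cancer * (1 - sensitivity)      # false negatives
false_alarms = without_cancer * (1 - specificity)     # false positives

print(f"Missed cancers per 1000 screened: {missed_cancers:.0f}")  # 2
print(f"False alarms per 1000 screened:   {false_alarms:.0f}")    # 27
```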
Psychiatry:
In psychiatry, AI applications are still in a phase of proof-of-concept. Areas where the evidence is widening quickly include chatbots, conversational agents that imitate human behavior and which have been studied for anxiety and depression.
Challenges include the fact that many applications in the field are developed and proposed by private corporations, such as the screening for suicidal ideation implemented by Facebook in 2017. Such applications outside the healthcare system raise various professional, ethical and regulatory questions.
Primary care:
Primary care has become one key development area for AI technologies. AI in primary care has been used for supporting decision making, predictive modelling, and business analytics. Despite the rapid advances in AI technologies, general practitioners' view of the role of AI in primary care remains limited, mainly focused on administrative and routine documentation tasks.
Disease diagnosis:
An article by Jiang et al. (2017) demonstrated that several types of AI techniques have been used for a variety of different diseases, including support vector machines, neural networks, and decision trees. Each of these techniques is described as having a "training goal" so that "classifications agree with the outcomes as much as possible…".
As a specific example of disease diagnosis and classification, two techniques that have been compared are artificial neural networks (ANN) and Bayesian networks (BN). It was found that ANN performed better and could more accurately classify diabetes and cardiovascular disease (CVD).
Through the use of medical learning classifiers (MLCs), artificial intelligence has been able to substantially aid doctors in patient diagnosis through the analysis of mass electronic health records (EHRs).
Medical conditions have grown more complex, and with a vast history of electronic medical records building, the likelihood of case duplication is high. Although someone today with a rare illness is less likely to be the only person to have suffered from any given disease, the inability to access cases from similarly symptomatic origins is a major roadblock for physicians.
Implementing AI not only to help find similar cases and treatments, but also to factor in chief symptoms and help physicians ask the most appropriate questions, helps the patient receive the most accurate diagnosis and treatment possible.
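One simple way to find similar cases is to compare symptom sets, for example with Jaccard similarity, as in the toy sketch below (the case records are invented; real systems operate over structured EHR data and far richer features).

```python
# Toy sketch of retrieving similar past cases by comparing symptom sets
# (hypothetical records; real systems work over structured EHR data).
past_cases = {
    "case_001": {"fever", "cough", "fatigue"},
    "case_002": {"rash", "joint pain", "fatigue"},
    "case_003": {"fever", "cough", "shortness of breath"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two symptom sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b)

def most_similar(symptoms: set):
    return max(past_cases.items(), key=lambda item: jaccard(symptoms, item[1]))

case_id, case_symptoms = most_similar({"fever", "cough", "headache"})
print(case_id, case_symptoms)  # case_001 (ties with case_003 at similarity 0.5)
```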
Telemedicine:
The growth of telemedicine, the remote treatment of patients, has opened the door to a range of possible AI applications. AI can assist in caring for patients remotely by monitoring their information through sensors.
A wearable device may allow for constant monitoring of a patient and the ability to notice changes that may be less distinguishable by humans. The information can be compared to other data that has already been collected using artificial intelligence algorithms that alert physicians if there are any issues to be aware of.
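A minimal version of such an alert is a rolling-baseline check: flag a reading that sits several standard deviations away from the patient's own recent history. The sketch below uses made-up heart-rate values and an assumed z-score threshold.

```python
# Minimal sketch of flagging abnormal wearable readings: compare each new
# heart-rate sample against the patient's recent baseline (made-up numbers).
from statistics import mean, stdev

def is_abnormal(history, new_reading, z_threshold=3.0):
    """Flag a reading that sits far outside the recent baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False
    return abs(new_reading - mu) / sigma > z_threshold

recent_heart_rate = [72, 75, 70, 74, 73, 71, 76, 74]
print(is_abnormal(recent_heart_rate, 78))   # False - within normal variation
print(is_abnormal(recent_heart_rate, 130))  # True  - alert a clinician
```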
Another application of artificial intelligence is chat-bot therapy. However, some researchers charge that reliance on chat-bots for mental healthcare does not offer the reciprocity and accountability of care that should exist in the relationship between the consumer of mental healthcare and the care provider (be it a chat-bot or psychologist).
Since the average age has risen due to a longer life expectancy, artificial intelligence could be useful in helping take care of older populations. Tools such as environment and personal sensors can identify a person’s regular activities and alert a caretaker if a behavior or a measured vital is abnormal.
Although the technology is useful, there are also discussions about limitations of monitoring in order to respect a person’s privacy since there are technologies that are designed to map out home layouts and detect human interactions.
Electronic health records:
Electronic health records (EHR) are crucial to the digitalization and information spread of the healthcare industry. Now that around 80% of medical practices use EHR, the next step is to use artificial intelligence to interpret the records and provide new information to physicians.
One application uses natural language processing (NLP) to make more succinct reports that limit the variation between medical terms by matching similar medical terms. For example, the terms heart attack and myocardial infarction mean the same thing, but physicians may use one over the other based on personal preference.
NLP algorithms consolidate these differences so that larger datasets can be analyzed. Another use of NLP identifies phrases that are redundant due to repetition in a physician’s notes and keeps the relevant information to make it easier to read.
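At its simplest, this kind of consolidation is a mapping from synonymous terms to a canonical one, as in the toy sketch below; production systems map text to standard clinical vocabularies rather than a hand-made dictionary.

```python
# Toy sketch of consolidating synonymous clinical terms before analysis
# (real systems map text to standard vocabularies rather than a hand-made
# dictionary like this one).
SYNONYMS = {
    "heart attack": "myocardial infarction",
    "high blood pressure": "hypertension",
}

def normalize(note: str) -> str:
    note = note.lower()
    for term, canonical in SYNONYMS.items():
        note = note.replace(term, canonical)
    return note

print(normalize("Patient had a heart attack in 2019; history of high blood pressure."))
# -> "patient had a myocardial infarction in 2019; history of hypertension."
```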
Beyond making content edits to an EHR, there are AI algorithms that evaluate an individual patient’s record and predict a risk for a disease based on their previous information and family history. One general algorithm is a rule-based system that makes decisions similarly to how humans use flow charts.
This system takes in large amounts of data and creates a set of rules that connect specific observations to concluded diagnoses. Thus, the algorithm can take in a new patient’s data and try to predict the likeliness that they will have a certain condition or disease. Since the algorithms can evaluate a patient’s information based on collective data, they can find any outstanding issues to bring to a physician’s attention and save time.
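A minimal flow-chart-style rule set might look like the sketch below; the fields and thresholds are hypothetical and for illustration only, not clinical guidance.

```python
# Minimal sketch of a flow-chart-style, rule-based risk check over an EHR
# record (hypothetical fields and thresholds; not clinical guidance).
def diabetes_risk(record: dict) -> str:
    if record.get("fasting_glucose_mg_dl", 0) >= 126:
        return "high"
    if record.get("bmi", 0) >= 30 and record.get("family_history", False):
        return "elevated"
    if record.get("age", 0) >= 45:
        return "moderate"
    return "low"

patient = {"age": 52, "bmi": 31, "family_history": True, "fasting_glucose_mg_dl": 110}
print(diabetes_risk(patient))  # "elevated" - flag for physician review
```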
One study conducted by the Centerstone research institute found that predictive modeling of EHR data has achieved 70–72% accuracy in predicting individualized treatment response. These methods are helpful due to the fact that the amount of online health records doubles every five years.
Physicians do not have the bandwidth to process all this data manually, and AI can leverage this data to assist physicians in treating their patients.
Drug Interactions:
Improvements in natural language processing led to the development of algorithms to identify drug-drug interactions in medical literature. Drug-drug interactions pose a threat to those taking multiple medications simultaneously, and the danger increases with the number of medications being taken.
To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature.
Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms. Competitors were tested on their ability to accurately determine, from the text, which drugs were shown to interact and what the characteristics of their interactions were.
Researchers continue to use this corpus to standardize the measurement of the effectiveness of their algorithms.
Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and/or adverse event reports. Organizations such as the FDA Adverse Event Reporting System (FAERS) and the World Health Organization's VigiBase allow doctors to submit reports of possible negative reactions to medications. Deep learning algorithms have been developed to parse these reports and detect patterns that imply drug-drug interactions.
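A toy version of this extraction idea is to flag sentences that mention two known drugs together with an interaction cue, as below; real systems are trained on annotated corpora such as the DDIExtraction data and learn far richer patterns.

```python
# Toy sketch of pattern-based extraction of interacting drug pairs from
# text (tiny hand-made lexicon and cue list; purely illustrative).
import re

DRUGS = ["warfarin", "aspirin", "ibuprofen", "simvastatin"]
INTERACTION_CUES = re.compile(r"\b(interacts? with|increases the effect of|inhibits)\b")

def extract_pairs(sentence: str):
    text = sentence.lower()
    mentioned = [drug for drug in DRUGS if drug in text]
    if len(mentioned) >= 2 and INTERACTION_CUES.search(text):
        return [(mentioned[0], mentioned[1])]
    return []

print(extract_pairs("Aspirin increases the effect of warfarin in some patients."))
# -> [('warfarin', 'aspirin')]
```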
Creation of new drugs:
DSP-1181, a drug molecule for the treatment of obsessive-compulsive disorder (OCD), was invented using artificial intelligence through the joint efforts of Exscientia (a British start-up) and Sumitomo Dainippon Pharma (a Japanese pharmaceutical firm). The drug development took a single year, while pharmaceutical companies usually spend about five years on similar projects. DSP-1181 was accepted for a human trial.
In September 2019, Insilico Medicine reported the creation, via artificial intelligence, of six novel inhibitors of the DDR1 gene, a kinase target implicated in fibrosis and other diseases. The system, known as Generative Tensorial Reinforcement Learning (GENTRL), designed the new compounds in 21 days, with a lead candidate tested and showing positive results in mice.
The same month, the Canadian company Deep Genomics announced that its AI-based drug discovery platform had identified a target and drug candidate for Wilson's disease. The candidate, DG12P1, is designed to correct the exon-skipping effect of Met645Arg, a genetic mutation affecting the ATP7B copper-binding protein.
Industry:
The trend of large health companies merging allows for greater health data accessibility. Greater health data lays the groundwork for implementation of AI algorithms.
A large part of industry focus of implementation of AI in the healthcare sector is in the clinical decision support systems. As more data is collected, machine learning algorithms adapt and allow for more robust responses and solutions.
Numerous companies are exploring the possibilities of the incorporation of big data in the healthcare industry. Many companies investigate the market opportunities through the realms of “data assessment, storage, management, and analysis technologies” which are all crucial parts of the healthcare industry.
The following are examples of large companies that have contributed to AI algorithms for use in healthcare:
Digital consultant apps like Babylon Health's GP at Hand, Ada Health, AliHealth Doctor You, KareXpert and Your.MD use AI to give medical consultation based on personal medical history and common medical knowledge.
Users report their symptoms into the app, which uses speech recognition to compare against a database of illnesses.
Babylon then offers a recommended action, taking into account the user's medical history.
Entrepreneurs in healthcare have been effectively using seven business model archetypes to take AI solutions to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and value-capturing mechanisms (e.g. providing information or connecting stakeholders).
iFlytek launched a service robot, "Xiao Man", which integrates artificial intelligence technology to identify registered customers and provide personalized recommendations in medical areas. It also works in the field of medical imaging. Similar robots are also being made by companies such as UBTECH ("Cruzr") and SoftBank Robotics ("Pepper").
The Indian startup Haptik recently developed a WhatsApp chatbot which answers questions associated with the deadly coronavirus in India.
With the market for AI expanding constantly, large tech companies such as Apple, Google, Amazon, and Baidu all have their own AI research divisions, as well as millions of dollars allocated for acquisition of smaller AI based companies. Many automobile manufacturers are beginning to use machine learning healthcare in their cars as well.
Companies such as BMW, GE, Tesla, Toyota, and Volvo all have new research campaigns to find ways of learning a driver's vital statistics to ensure they are awake, paying attention to the road, and not under the influence of substances or in emotional distress.
Implications:
The use of AI is predicted to decrease medical costs through more accurate diagnoses, better predictions of treatment plans, and greater prevention of disease.
Other future uses for AI include brain-computer interfaces (BCIs), which are predicted to help those who have trouble moving or speaking, or who have a spinal cord injury. The BCIs will use AI to help these patients move and communicate by decoding neural activity.
Artificial intelligence has led to significant improvements in areas of healthcare such as medical imaging, automated clinical decision-making, diagnosis, prognosis, and more.
Although AI possesses the capability to revolutionize several fields of medicine, it still has limitations and cannot replace a bedside physician.
Healthcare is a complicated science that is bound by legal, ethical, regulatory, economical, and social constraints. In order to fully implement AI within healthcare, there must be "parallel changes in the global environment, with numerous stakeholders, including citizen and society."
Expanding care to developing nations:
Artificial intelligence continues to expand in its abilities to diagnose more people accurately in nations where fewer doctors are accessible to the public. Many new technology companies such as SpaceX and the Raspberry Pi Foundation have enabled more developing countries to have access to computers and the internet than ever before.
With the increasing capabilities of AI over the internet, advanced machine learning algorithms can allow patients to get accurately diagnosed when they would previously have no way of knowing if they had a life threatening disease or not.
Using AI in developing nations that do not have the resources will diminish the need for outsourcing and can improve patient care. AI can allow not only for diagnosis of patients in areas where healthcare is scarce, but can also provide a good patient experience by drawing on case files to find the best treatment for a patient.
The ability of AI to adjust course as it goes also allows the patient to have their treatment modified based on what works for them; a level of individualized care that is nearly non-existent in developing countries.
Regulation:
While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before its broader adoption, its use may nonetheless introduce several new types of risk to patients and healthcare providers, such as algorithmic bias, "do not resuscitate" implications, and other machine morality issues. These challenges of the clinical use of AI have prompted a potential need for regulation.
Currently, there are regulations pertaining to the collection of patient data. These include policies such as the Health Insurance Portability and Accountability Act (HIPAA) and the European General Data Protection Regulation (GDPR).
The GDPR pertains to patients within the EU and details the consent requirements for patient data use when entities collect patient healthcare data. Similarly, HIPAA protects healthcare data from patient records in the United States.
In May 2016, the White House announced its plan to host a series of workshops and formation of the National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence.
In October 2016, the group published The National Artificial Intelligence Research and Development Strategic Plan, outlining its proposed priorities for Federally-funded AI research and development (within government and academia). The report notes a strategic R&D plan for the subfield of health information technology is in development stages.
The only agency that has expressed concern is the FDA. Bakul Patel, the Associate Center Director for Digital Health of the FDA, is quoted saying in May 2017:
“We're trying to get people who have hands-on development experience with a product's full life cycle. We already have some scientists who know artificial intelligence and machine learning, but we want complementary people who can look forward and see how this technology will evolve.”
The joint ITU-WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) has built a platform for the testing and benchmarking of AI applications in health domain. As of November 2018, eight use cases are being benchmarked, including assessing breast cancer risk from histopathological imagery, guiding anti-venom selection from snake images, and diagnosing skin lesions.
Ethical concerns:
Data collection:
In order to effectively train Machine Learning and use AI in healthcare, massive amounts of data must be gathered. Acquiring this data, however, comes at the cost of patient privacy in most cases and is not well received publicly. For example, a survey conducted in the UK estimated that 63% of the population is uncomfortable with sharing their personal data in order to improve artificial intelligence technology.
The scarcity of real, accessible patient data is a hindrance that deters the progress of developing and deploying more artificial intelligence in healthcare.
Automation:
According to a recent study, AI could replace up to 35% of jobs in the UK within the next 10 to 20 years. However, the same study concluded that AI has not eliminated any healthcare jobs so far. If AI were to automate healthcare-related jobs, the jobs most susceptible to automation would be those dealing with digital information, radiology, and pathology, rather than those involving doctor-to-patient interaction.
Automation can provide benefits alongside doctors as well. It is expected that doctors who take advantage of AI in healthcare will provide greater quality of care than doctors and medical establishments that do not. AI will likely not replace healthcare workers entirely, but rather give them more time to attend to their patients. AI may also avert healthcare worker burnout and cognitive overload.
AI will ultimately help contribute to progression of societal goals which include better communication, improved quality of healthcare, and autonomy.
Bias:
Since AI makes decisions solely on the data it receives as input, it is important that this data represents accurate patient demographics. In a hospital setting, patients do not have full knowledge of how predictive algorithms are created or calibrated. Therefore, these medical establishments can unfairly code their algorithms to discriminate against minorities and prioritize profits rather than providing optimal care.
There can also be unintended bias in these algorithms that can exacerbate social and healthcare inequities. Since AI’s decisions are a direct reflection of its input data, the data it receives must have accurate representation of patient demographics. White males are overly represented in medical data sets. Therefore, having minimal patient data on minorities can lead to AI making more accurate predictions for majority populations, leading to unintended worse medical outcomes for minority populations.
Collecting data from minority communities can also lead to medical discrimination. For instance, HIV is a prevalent virus among minority communities and HIV status can be used to discriminate against patients. However, these biases can be mitigated through careful implementation and methodical collection of representative data.
See also:
What distinguishes AI technology from traditional technologies in health care is the ability to gather data, process it and give a well-defined output to the end-user. AI does this through machine learning algorithms and deep learning. These algorithms can recognize patterns in behavior and create their own logic.
To gain useful insights and predictions, machine learning models must be trained using extensive amounts of input data. AI algorithms behave differently from humans in two ways:
- algorithms are literal: once a goal is set, the algorithm learns exclusively from the input data and can only understand what it has been programmed to do,
- and some deep learning algorithms are black boxes; algorithms can predict with extreme precision, but offer little to no comprehensible explanation to the logic behind its decisions aside from the data and type of algorithm used.
The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes.
AI programs are applied to practices such as:
- diagnosis processes,
- treatment protocol development,
- drug development,
- personalized medicine,
- and patient monitoring and care.
AI algorithms can also be used to analyze large amounts of data through electronic health records for disease prevention and diagnosis. Medical institutions such as the following have developed AI algorithms for their departments:
- The Mayo Clinic,
- Memorial Sloan Kettering Cancer Center,
- and the British National Health Service,
Large technology companies such as IBM and Google, have also developed AI algorithms for healthcare. Additionally, hospitals are looking to AI software to support operational initiatives that increase cost saving, improve patient satisfaction, and satisfy their staffing and workforce needs.
Currently, the United States government is investing billions of dollars to progress the development of AI in healthcare. Companies are developing technologies that help healthcare managers improve business operations through increasing utilization, decreasing patient boarding, reducing length of stay and optimizing staffing levels.
As widespread use of AI in healthcare is relatively new, there are several unprecedented ethical concerns related to its practice such as data privacy, automation of jobs, and representation biases.
History:
Research in the 1960s and 1970s produced the first problem-solving program, or expert system, known as Dendral. While it was designed for applications in organic chemistry, it provided the basis for a subsequent system MYCIN, considered one of the most significant early uses of artificial intelligence in medicine. MYCIN and other systems such as INTERNIST-1 and CASNET did not achieve routine use by practitioners, however.
The 1980s and 1990s brought the proliferation of the microcomputer and new levels of network connectivity. During this time, there was a recognition by researchers and developers that AI systems in healthcare must be designed to accommodate the absence of perfect data and build on the expertise of physicians. Approaches involving fuzzy set theory, Bayesian networks, and artificial neural networks, have been applied to intelligent computing systems in healthcare.
Medical and technological advancements occurring over this half-century period that have enabled the growth healthcare-related applications of AI include:
- Improvements in computing power resulting in faster data collection and data processing.
- Growth of genomic sequencing databases
- Widespread implementation of electronic health record systems
- Improvements in natural language processing and computer vision, enabling machines to replicate human perceptual processes
- Enhanced the precision of robot-assisted surgery
- Improvements in deep learning techniques and data logs in rare diseases
Current research:
Various specialties in medicine have shown an increase in research regarding AI. As the novel coronavirus ravages through the globe, the United States is estimated to invest more than $2 billion in AI related healthcare research over the next 5 years, more than 4 times the amount spent in 2019 ($463 million).
Dermatology:
Dermatology is an imaging abundant specialty and the development of deep learning has been strongly tied to image processing. Therefore there is a natural fit between the dermatology and deep learning.
There are 3 main imaging types in dermatology: contextual images, macro images, micro images. For each modality, deep learning showed great progress.
- Han et. al. showed keratinocytic skin cancer detection from face photographs.
- Esteva et al. demonstrated dermatologist-level classification of skin cancer from lesion images.
- Noyan et. al. demonstrated a convolutional neural network that achieved 94% accuracy at identifying skin cells from microscopic Tzanck smear images.
Radiology:
AI is being studied within the radiology field to detect and diagnose diseases within patients through Computerized Tomography (CT) and Magnetic Resonance (MR) Imaging. The focus on Artificial Intelligence in radiology has rapidly increased in recent years according to the Radiology Society of North America, where they have seen growth from 0 to 3, 17, and overall 10% of total publications from 2015-2018 respectively.
A study at Stanford created an algorithm that could detect pneumonia in patients with a better average F1 metric (a statistical metric based on accuracy and recall), than radiologists involved in the trial. Through imaging in oncology, AI has been able to serve well for detecting abnormalities and monitoring change over time; two key factors in oncological health.
Many companies and vendor neutral systems such as icometrix, QUIBIM, Robovision, and UMC Utrecht’s IMAGRT have become available to provide a trainable machine learning platform to detect a wide range of diseases.
The Radiological Society of North America has implemented presentations on AI in imaging during its annual conference. Many professionals are optimistic about the future of AI processing in radiology, as it will cut down on needed interaction time and allow doctors to see more patients.
Although not always as good as a trained eye at deciphering malicious or benign growths, the history of medical imaging shows a trend toward rapid advancement in both capability and reliability of new systems. The emergence of AI technology in radiology is perceived as a threat by some specialists, as it can improve by certain statistical metrics in isolated cases, where specialists cannot.
Screening:
Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.
In 2018, a paper published in the journal Annals of Oncology mentioned that skin cancer could be detected more accurately by an artificial intelligence system (which used a deep learning convolutional neural network) than by dermatologists. On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the CNN machine.
In January 2020 researchers demonstrate an AI system, based on a Google DeepMind algorithm, that is capable of surpassing human experts in breast cancer detection.
In July 2020 it was reported that an AI algorithm by the University of Pittsburgh achieves the highest accuracy to date in identifying prostate cancer, with 98% sensitivity and 97% specificity.
Psychiatry:
In psychiatry, AI applications are still in a phase of proof-of-concept. Areas where the evidence is widening quickly include chatbots, conversational agents that imitate human behavior and which have been studied for anxiety and depression.
Challenges include the fact that many applications in the field are developed and proposed by private corporations, such as the screening for suicidal ideation implemented by Facebook in 2017. Such applications outside the healthcare system raise various professional, ethical and regulatory questions.
Primary care:
Primary care has become one key development area for AI technologies, which have been used there to support decision making, predictive modelling, and business analytics. Despite the rapid advances in the technology, general practitioners' views on the role of AI in primary care remain limited, focused mainly on administrative and routine documentation tasks.
Disease diagnosis:
An article by Jiang, et al. (2017) demonstrated that there are several types of AI techniques that have been used for a variety of different diseases, such as support vector machines, neural networks, and decision trees. Each of these techniques is described as having a “training goal” so “classifications agree with the outcomes as much as possible…”.
As a specific example for disease diagnosis and classification, two techniques in use are artificial neural networks (ANN) and Bayesian networks (BN); one comparison found that ANNs could classify diabetes and cardiovascular disease (CVD) more accurately.
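A minimal sketch of the kind of ANN classifier referred to above, trained on synthetic, made-up patient features rather than data from any cited study (assumes scikit-learn and NumPy are available):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical features: age, BMI, fasting glucose; hypothetical label: diabetes yes/no.
X = rng.normal(loc=[55.0, 28.0, 110.0], scale=[12.0, 5.0, 25.0], size=(500, 3))
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(0.0, 10.0, 500) > 130.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
print("held-out accuracy:", round(ann.score(X_test, y_test), 2))
```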
Through the use of medical learning classifiers (MLCs), artificial intelligence has been able to substantially aid doctors in patient diagnosis by analysing large volumes of electronic health records (EHRs).
Medical conditions have grown more complex, and as the store of electronic medical records grows, the likelihood that a similar case already exists somewhere is high. Yet although a patient with a rare illness today is unlikely to be the only person ever to have suffered from it, the inability to access similar cases remains a major roadblock for physicians.
AI that helps find similar cases and treatments, factors in chief symptoms, and helps physicians ask the most appropriate questions can help patients receive the most accurate diagnosis and treatment possible.
Telemedicine:
The growth of telemedicine, the remote treatment of patients, has opened up a range of possible AI applications. AI can assist in caring for patients remotely by monitoring their information through sensors.
A wearable device may allow for constant monitoring of a patient and the ability to notice changes that may be less distinguishable by humans. The information can be compared to other data that has already been collected using artificial intelligence algorithms that alert physicians if there are any issues to be aware of.
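A deliberately simple sketch of the alerting idea described above (not any vendor's actual algorithm): compare each reading against the patient's own baseline and flag large deviations. The data series and threshold are made up for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(readings, z_threshold=2.5):
    """Return (index, value) pairs that deviate sharply from the series' own baseline."""
    baseline, spread = mean(readings), stdev(readings)
    return [(i, r) for i, r in enumerate(readings)
            if spread and abs(r - baseline) / spread > z_threshold]

heart_rate = [72, 70, 75, 74, 71, 73, 69, 120, 72, 71]  # hypothetical resting bpm series
print(flag_anomalies(heart_rate))  # the 120 bpm spike is the only reading flagged
```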
Another application of artificial intelligence is chat-bot therapy. However, some researchers charge that relying on chat-bots for mental healthcare fails to offer the reciprocity and accountability of care that should exist in the relationship between the consumer of mental healthcare and the care provider, be it a chat-bot or a psychologist.
Since the average age has risen due to a longer life expectancy, artificial intelligence could be useful in helping take care of older populations. Tools such as environment and personal sensors can identify a person’s regular activities and alert a caretaker if a behavior or a measured vital is abnormal.
Although the technology is useful, there are also discussions about limitations of monitoring in order to respect a person’s privacy since there are technologies that are designed to map out home layouts and detect human interactions.
Electronic health records:
Electronic health records (EHR) are crucial to the digitalization and information spread of the healthcare industry. Now that around 80% of medical practices use EHR, the next step is to use artificial intelligence to interpret the records and provide new information to physicians.
One application uses natural language processing (NLP) to make reports more succinct and to limit the variation between medical terms by matching synonymous ones. For example, "heart attack" and "myocardial infarction" mean the same thing, but physicians may use one over the other based on personal preference.
NLP algorithms consolidate these differences so that larger datasets can be analyzed. Another use of NLP identifies phrases that are redundant due to repetition in a physician’s notes and keeps the relevant information to make it easier to read.
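A toy sketch of the consolidation idea (real systems rely on NLP pipelines and clinical ontologies such as SNOMED CT or UMLS; the mapping below is invented purely for illustration):

```python
# Map synonymous clinical phrases onto one canonical term so records can be pooled.
SYNONYMS = {
    "heart attack": "myocardial infarction",
    "high blood pressure": "hypertension",
    "sugar diabetes": "diabetes mellitus",
}

def normalize(note: str) -> str:
    text = note.lower()
    for phrase, canonical in SYNONYMS.items():
        text = text.replace(phrase, canonical)
    return text

print(normalize("Heart attack in 2019; high blood pressure noted on admission."))
# -> "myocardial infarction in 2019; hypertension noted on admission."
```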
Beyond making content edits to an EHR, there are AI algorithms that evaluate an individual patient’s record and predict a risk for a disease based on their previous information and family history. One general algorithm is a rule-based system that makes decisions similarly to how humans use flow charts.
This system takes in large amounts of data and creates a set of rules that connect specific observations to concluded diagnoses. Thus, the algorithm can take in a new patient’s data and try to predict the likeliness that they will have a certain condition or disease. Since the algorithms can evaluate a patient’s information based on collective data, they can find any outstanding issues to bring to a physician’s attention and save time.
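A made-up, minimal rule-based sketch in the flow-chart spirit described above; the fields and thresholds are illustrative only and not clinical guidance.

```python
def diabetes_risk(patient: dict) -> str:
    """Crude hand-written rules mapping record fields to a risk label."""
    score = 0
    if patient.get("fasting_glucose_mg_dl", 0) > 125:
        score += 2
    if patient.get("bmi", 0) > 30:
        score += 1
    if patient.get("family_history", False):
        score += 1
    return "high" if score >= 3 else "moderate" if score == 2 else "low"

record = {"fasting_glucose_mg_dl": 131, "bmi": 32, "family_history": True}
print(diabetes_risk(record))  # -> high
```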
One study conducted by the Centerstone Research Institute found that predictive modelling of EHR data has achieved 70–72% accuracy in predicting individualized treatment response. These methods are helpful because the amount of online health records doubles every five years.
Physicians do not have the bandwidth to process all this data manually, and AI can leverage this data to assist physicians in treating their patients.
Drug Interactions:
Improvements in natural language processing led to the development of algorithms to identify drug-drug interactions in medical literature. Drug-drug interactions pose a threat to those taking multiple medications simultaneously, and the danger increases with the number of medications being taken.
To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature.
Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms. Competitors were tested on their ability to accurately determine, from the text, which drugs were shown to interact and what the characteristics of their interactions were.
Researchers continue to use this corpus to standardize the measurement of the effectiveness of their algorithms.
Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and adverse event reports. Reporting systems such as the FDA Adverse Event Reporting System (FAERS) and the World Health Organization's VigiBase allow doctors to submit reports of possible negative reactions to medications. Deep learning algorithms have been developed to parse these reports and detect patterns that imply drug-drug interactions.
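The sketch below is only a toy restatement of the literature-mining task (not the DDIExtraction systems themselves): look for sentences that mention two known drugs together with an interaction cue. The drug list and cue words are invented for illustration.

```python
import re

DRUGS = {"warfarin", "aspirin", "ibuprofen", "metformin"}
CUES = ("interact", "increases the risk", "potentiates", "contraindicated")

def find_interactions(sentence: str):
    lowered = sentence.lower()
    mentioned = sorted(DRUGS & set(re.findall(r"[a-z]+", lowered)))
    if len(mentioned) >= 2 and any(cue in lowered for cue in CUES):
        return [(mentioned[i], mentioned[j])
                for i in range(len(mentioned)) for j in range(i + 1, len(mentioned))]
    return []

print(find_interactions("Aspirin potentiates the anticoagulant effect of warfarin."))
# -> [('aspirin', 'warfarin')]
```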
Creation of new drugs:
DSP-1181, a drug molecule for the treatment of obsessive-compulsive disorder (OCD), was invented with artificial intelligence through the joint efforts of Exscientia (a British start-up) and Sumitomo Dainippon Pharma (a Japanese pharmaceutical firm). The drug's development took a single year, whereas pharmaceutical companies usually spend about five years on similar projects. DSP-1181 was accepted for a human trial.
In September 2019, Insilico Medicine reported the creation, via artificial intelligence, of six novel inhibitors of DDR1, a kinase target implicated in fibrosis and other diseases. The system, known as Generative Tensorial Reinforcement Learning (GENTRL), designed the new compounds in 21 days, with a lead candidate tested and showing positive results in mice.
The same month, the Canadian company Deep Genomics announced that its AI-based drug discovery platform had identified a target and drug candidate for Wilson's disease. The candidate, DG12P1, is designed to correct the exon-skipping effect of Met645Arg, a genetic mutation affecting the ATP7B copper-binding protein.
Industry:
The trend of large health companies merging allows for greater health data accessibility. Greater health data lays the groundwork for implementation of AI algorithms.
A large part of industry focus of implementation of AI in the healthcare sector is in the clinical decision support systems. As more data is collected, machine learning algorithms adapt and allow for more robust responses and solutions.
Numerous companies are exploring the possibilities of the incorporation of big data in the healthcare industry. Many companies investigate the market opportunities through the realms of “data assessment, storage, management, and analysis technologies” which are all crucial parts of the healthcare industry.
The following are examples of large companies that have contributed to AI algorithms for use in healthcare:
- IBM's Watson Oncology is in development at Memorial Sloan Kettering Cancer Center and Cleveland Clinic. IBM is also working with CVS Health on AI applications in chronic disease treatment and with Johnson & Johnson on analysis of scientific papers to find new connections for drug development. In May 2017, IBM and Rensselaer Polytechnic Institute began a joint project entitled Health Empowerment by Analytics, Learning and Semantics (HEALS), to explore using AI technology to enhance healthcare.
- Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients. Other projects include medical image analysis of tumor progression and the development of programmable cells.
- Google's DeepMind platform is being used by the UK National Health Service to detect certain health risks through data collected via a mobile app. A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.
- Tencent is working on several medical systems and services. These include AI Medical Innovation System (AIMIS), an AI-powered diagnostic medical imaging service; WeChat Intelligent Healthcare; and Tencent Doctorwork.
- Intel's venture capital arm Intel Capital recently invested in startup Lumiata which uses AI to identify at-risk patients and develop care options.
- Kheiron Medical developed deep learning software to detect breast cancers in mammograms.
- Fractal Analytics has incubated Qure.ai which focuses on using deep learning and AI to improve radiology and speed up the analysis of diagnostic x-rays.
- Neuralink has developed a next-generation neuroprosthetic that intricately interfaces with thousands of neural pathways in the brain. Its process allows a chip, roughly the size of a quarter, to be inserted in place of a chunk of skull by a precision surgical robot to avoid accidental injury.
Digital consultant apps like Babylon Health's GP at Hand, Ada Health, AliHealth Doctor You, KareXpert and Your.MD use AI to give medical consultation based on personal medical history and common medical knowledge.
Users report their symptoms into the app, which uses speech recognition to compare against a database of illnesses.
Babylon then offers a recommended action, taking into account the user's medical history.
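As a rough illustration of the matching step (none of these apps' actual logic or medical data), a checker might score conditions by their overlap with the reported symptoms:

```python
# Invented, non-clinical symptom lists used purely to illustrate overlap scoring.
CONDITIONS = {
    "common cold": {"cough", "sore throat", "runny nose", "sneezing"},
    "influenza": {"fever", "cough", "muscle aches", "fatigue"},
    "migraine": {"headache", "nausea", "light sensitivity"},
}

def rank_conditions(symptoms):
    scores = {name: len(symptoms & signs) / len(signs) for name, signs in CONDITIONS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_conditions({"fever", "cough", "fatigue"}))  # influenza ranks first
```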
Entrepreneurs in healthcare have been effectively using seven business model archetypes to take AI solutions to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and value capturing mechanisms (e.g. providing information or connecting stakeholders).
IFlytek launched a service robot “Xiao Man”, which integrated artificial intelligence technology to identify the registered customer and provide personalized recommendations in medical areas. It also works in the field of medical imaging. Similar robots are also being made by companies such as UBTECH ("Cruzr") and Softbank Robotics ("Pepper").
The Indian startup Haptik recently developed a WhatsApp chatbot which answers questions associated with the deadly coronavirus in India.
With the market for AI expanding constantly, large tech companies such as Apple, Google, Amazon, and Baidu all have their own AI research divisions, as well as millions of dollars allocated for acquisition of smaller AI based companies. Many automobile manufacturers are beginning to use machine learning healthcare in their cars as well.
Companies such as BMW, GE, Tesla, Toyota, and Volvo all have new research campaigns to find ways of learning a driver's vital statistics to ensure they are awake, paying attention to the road, and not under the influence of substances or in emotional distress.
Implications:
The use of AI is predicted to decrease medical costs as there will be more accuracy in diagnosis and better predictions in the treatment plan as well as more prevention of disease.
Other future uses for AI include brain-computer interfaces (BCIs), which are predicted to help those who have trouble moving or speaking or who have a spinal cord injury. BCIs will use AI to help these patients move and communicate by decoding neural activity.
Artificial intelligence has led to significant improvements in areas of healthcare such as medical imaging, automated clinical decision-making, diagnosis, prognosis, and more.
Although AI possesses the capability to revolutionize several fields of medicine, it still has limitations and cannot replace a bedside physician.
Healthcare is a complicated science that is bound by legal, ethical, regulatory, economical, and social constraints. In order to fully implement AI within healthcare, there must be "parallel changes in the global environment, with numerous stakeholders, including citizen and society."
Expanding care to developing nations:
Artificial intelligence continues to expand in its abilities to diagnose more people accurately in nations where fewer doctors are accessible to the public. Many new technology companies such as SpaceX and the Raspberry Pi Foundation have enabled more developing countries to have access to computers and the internet than ever before.
With the increasing capabilities of AI over the internet, advanced machine learning algorithms can allow patients to get accurately diagnosed when they would previously have no way of knowing if they had a life threatening disease or not.
Using AI in developing nations that do not have the resources will diminish the need for outsourcing and can improve patient care. AI can allow not only for diagnosis of patients in areas where healthcare is scarce, but also for a good patient experience by drawing on patient files to find the best treatment.
The ability of AI to adjust course as it goes also allows the patient to have their treatment modified based on what works for them; a level of individualized care that is nearly non-existent in developing countries.
Regulation:
While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before broader adoption, its use may nonetheless introduce several new types of risk to patients and healthcare providers, such as algorithmic bias, do-not-resuscitate implications, and other machine morality issues. These challenges of the clinical use of AI have created a potential need for regulation.
Currently, there are regulations pertaining to the collection of patient data, including the Health Insurance Portability and Accountability Act (HIPAA) and the European General Data Protection Regulation (GDPR).
The GDPR pertains to patients within the EU and details the consent requirements for the use of patient data when entities collect healthcare data. Similarly, HIPAA protects healthcare data from patient records in the United States.
In May 2016, the White House announced its plan to host a series of workshops and formation of the National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence.
In October 2016, the group published The National Artificial Intelligence Research and Development Strategic Plan, outlining its proposed priorities for Federally-funded AI research and development (within government and academia). The report notes a strategic R&D plan for the subfield of health information technology is in development stages.
The only agency that has expressed concern is the FDA. Bakul Patel, the Associate Center Director for Digital Health of the FDA, is quoted saying in May 2017:
“We're trying to get people who have hands-on development experience with a product's full life cycle. We already have some scientists who know artificial intelligence and machine learning, but we want complementary people who can look forward and see how this technology will evolve.”
The joint ITU-WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) has built a platform for the testing and benchmarking of AI applications in the health domain. As of November 2018, eight use cases are being benchmarked, including assessing breast cancer risk from histopathological imagery, guiding anti-venom selection from snake images, and diagnosing skin lesions.
Ethical concerns:
Data collection:
In order to effectively train Machine Learning and use AI in healthcare, massive amounts of data must be gathered. Acquiring this data, however, comes at the cost of patient privacy in most cases and is not well received publicly. For example, a survey conducted in the UK estimated that 63% of the population is uncomfortable with sharing their personal data in order to improve artificial intelligence technology.
The scarcity of real, accessible patient data is a hindrance that deters the progress of developing and deploying more artificial intelligence in healthcare.
Automation:
According to one study, AI could replace up to 35% of jobs in the UK within the next 10 to 20 years. However, the study concluded that AI has so far not eliminated any healthcare jobs. If AI were to automate healthcare-related jobs, those most susceptible would be jobs dealing with digital information, radiology, and pathology, rather than those centred on doctor-patient interaction.
Automation can also provide benefits alongside doctors. Doctors who take advantage of AI in healthcare are expected to provide higher-quality care than doctors and medical establishments that do not. AI will likely not replace healthcare workers outright but rather give them more time to attend to their patients, and it may help avert healthcare worker burnout and cognitive overload.
AI will ultimately help contribute to progression of societal goals which include better communication, improved quality of healthcare, and autonomy.
Bias:
Since AI makes decisions solely on the data it receives as input, it is important that this data represents accurate patient demographics. In a hospital setting, patients do not have full knowledge of how predictive algorithms are created or calibrated. Therefore, these medical establishments can unfairly code their algorithms to discriminate against minorities and prioritize profits rather than providing optimal care.
There can also be unintended bias in these algorithms that exacerbates social and healthcare inequities. Because an AI system's decisions directly reflect its input data, that data must accurately represent patient demographics; white males, however, are over-represented in medical data sets. Having minimal patient data on minorities can therefore lead AI to make more accurate predictions for majority populations, with unintentionally worse medical outcomes for minority populations.
Collecting data from minority communities can also lead to medical discrimination: HIV, for instance, is prevalent in some minority communities, and HIV status can be used to discriminate against patients. These biases can, however, be reduced through careful implementation and the methodical collection of representative data.
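The representation problem can be reproduced on purely synthetic data: a model trained mostly on one group can look accurate overall while performing worse on an under-represented group whose patterns differ. The sketch below assumes scikit-learn and NumPy and uses artificial data only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Two features; the true decision rule differs with the group-specific shift.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X_major, y_major = make_group(1000, shift=0.0)  # well-represented group
X_minor, y_minor = make_group(50, shift=1.5)    # under-represented group

model = LogisticRegression().fit(np.vstack([X_major, X_minor]),
                                 np.concatenate([y_major, y_minor]))
print("majority-group accuracy:", round(model.score(X_major, y_major), 2))
print("minority-group accuracy:", round(model.score(X_minor, y_minor), 2))
```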
See also:
- Artificial intelligence
- Glossary of artificial intelligence
- Full body scanner (ie Dermascanner, ...)
- BlueDot
- Clinical decision support system
- Computer-aided diagnosis
- Computer-aided simple triage
- Google DeepMind
- IBM Watson Health
- Medical image computing
- Michal Rosen-Zvi
- Speech recognition software in healthcare
- The MICCAI Society
Regulation of artificial intelligence
- YouTube Video: How Should Artificial Intelligence Be Regulated?
- YouTube Video: Artificial intelligence and algorithms: pros and cons | DW Documentary (AI documentary)
- YouTube Video: the European Union seeks to protect human rights by regulating AI
[Your WebHost: I have included the following invitation to the February 5, 2020 symposium held by the U.S. Copyright Office and WIPO, since the agenda covers the major applications of AI to which this topic about regulating AI development is relevant]:
On February 5, 2020, the Copyright Office and the World Intellectual Property Organization (WIPO) held a symposium that took an in-depth look at how the creative community currently is using artificial intelligence (AI) to create original works.
Panelists’ discussions included the relationship between AI and copyright; what level of human input is sufficient for the resulting work to be eligible for copyright protection; the challenges and considerations for using copyright-protected works to train a machine or to examine large data sets; and the future of AI and copyright policy.
The Relationship between AI and Copyright (9:50 – 10:20 am): This discussion will involve an introductory look at what AI is and why copyright is implicated. Explaining these issues is an expert in AI technology, who will discuss the technological issues, and the U.S. Copyright Office’s Director of Registration Policy and Practice, who will explain the copyright legal foundation for AI issues.
Speakers:
- Ahmed Elgammal (Professor at the Department of Computer Science, Rutgers University, and Director of the Art & Artificial Intelligence Lab)
- Rob Kasunic (Associate Register of Copyrights and Director of Registration Policy and Practice, U.S. Copyright Office)
AI and the Administration of International Copyright Systems (10:20 – 11:00 am)
Countries throughout the world are looking at AI and how different laws should handle questions such as copyrightability and using AI to help administer copyright systems. This panel will discuss the international copyright dimensions of the rise of AI.
Moderator: Maria Strong (Acting Register of Copyrights and Director, U.S. Copyright Office)
Speakers:
- Ros Lynch (Director, Copyright & IP Enforcement, U.K. Intellectual Property Office (UKIPO))
- Ulrike Till (Division of Artificial Intelligence Policy, WIPO)
- Michele Woods (Director, Copyright Law Division, WIPO)
Break (11:00 – 11:10 am)
AI and the Visual Arts (11:10 – 11:55 am) Creators are already experimenting with AI to create new visual works, including paintings and more.
Moderator: John Ashley (Chief, Visual Arts Division, U.S. Copyright Office)
Speakers:
- Sandra Aistars (Clinical Professor and Senior Scholar and Director of Copyright Research and Policy of CPIP, Antonin Scalia Law School, George Mason University)
- Ahmed Elgammal (Professor at the Department of Computer Science, Rutgers University, and Director of the Art & Artificial Intelligence Lab)
- Andres Guadamuz (Senior Lecturer in Intellectual Property Law, University of Sussex and Editor in Chief of the Journal of World Intellectual Property)
AI and Creating a World of Other Works: (11:55 am – 12:40 pm) Creators are using AI to develop a wide variety of works beyond music and visual works. AI also is implicated in the creation and distribution of works such as video games, books, news articles, and more.
Moderator: Katie Alvarez (Counsel for Policy and International Affairs, U.S. Copyright Office)
Speakers:
- Jason Boog (West Coast correspondent for Publishers Weekly)
- Kayla Page (Senior Counsel, Epic Games)
- Mary Rasenberger (Executive Director, the Authors Guild and Authors Guild Foundation)
- Meredith Rose (Policy Counsel, Public Knowledge)
AI and Creating Music (1:40 – 2:40 pm) Music is a dynamic field and authors use AI in interesting ways to develop new works and explore new market possibilities.
Moderator: Regan Smith (General Counsel and Associate Register of Copyrights, U.S. Copyright Office)
Speakers:
- Joel Douek (Cofounder of EccoVR, West Coast creative director and chief scientist for Man Made Music, and board member of the Society of Composers & Lyricists)
- E. Michael Harrington (Composer, Musician, Consultant, and Professor in Music Copyright and Intellectual Property Matters at Berklee Online)
- David Hughes (Chief Technology Officer, Recording Industry Association of America (RIAA))
- Alex Mitchell (Founder and CEO, Boomy)
Bias and Artificial Intelligence: Works created by AI depend on what creators choose to include as source material. As a result of the selection process and building algorithms, AI can often reflect intentional and unintentional bias. Acknowledging this issue and learning how it happens can help make AI-created works more representative of our culture.
Moderator: Whitney Levandusky (Attorney-Advisor, Office of Public Information and Education, U.S. Copyright Office)
Speakers:
- Amanda Levendowski (Associate Professor of Law and founding Director of the Intellectual Property and Information Policy (iPIP) Clinic, Georgetown Law)
- Miriam Vogel (Executive Director, EqualAI)
AI and the Consumer Marketplace (3:20 – 4:05 pm): Companies have recognized that AI can itself be a product. In recent years, there has been a wave of development in this sector, including by creating products like driverless cars. Find out how many AI-centered products are already out there, what is on the horizon, and how copyright is involved.
Moderator: Mark Gray (Attorney-Advisor, Office of the General Counsel, U.S. Copyright Office)
Speakers:
- Julie Babayan (Senior Manager, Government Relations and Public Policy, Adobe)
- Vanessa Bailey (Global Director of Intellectual Property Policy, Intel Corporation)
- Melody Drummond Hansen (Partner and Chair, Automated & Connected Vehicles, O’Melveny & Myers LLP)
Digital Avatars in Audiovisual Works (4:05 – 4:50 pm): How is the motion picture industry using AI, and how does that impact performers? This session will review how AI is being used, including advantages and challenges.
Moderator: Catherine Zaller Rowland (Associate Register of Copyrights and Director of Public Information and Education, U.S. Copyright Office)
Speakers:
- Sarah Howes (Director and Counsel, Government Affairs and Public Policy, SAG-AFTRA)
- Ian Slotin (SVP, Intellectual Property, NBCUniversal)
Regulation of Artificial Intelligence (Wikipedia)
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms.
The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks. Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.
Perspectives:
Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems, although regulation of artificial superintelligences is also considered.
The basic approach to regulation focuses on the risks and biases of AI's underlying technology, i.e., machine-learning algorithms, at the level of the input data, algorithm testing, and the decision model, as well as whether explanations of biases in the code can be understandable for prospective recipients of the technology, and technically feasible for producers to convey.
AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.
A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human to machine interaction.
The development of public sector strategies for management and regulation of AI is deemed necessary at the local, national, and international levels and in a variety of fields, from public service management and accountability to law enforcement, healthcare (especially the concept of a Human Guarantee), the financial sector, robotics, autonomous vehicles, the military and national security, and international law.
In 2017 Elon Musk called for regulation of AI development. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization."
In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that AI is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest developing common norms, including requirements for the testing and transparency of algorithms, possibly combined with some form of warranty.
As a response to the AI control problem:
Main article: AI control problem
Regulation of AI can be seen as a positive social means to manage the AI control problem, i.e., the need to ensure long-term beneficial AI, with other social responses such as doing nothing or banning AI being seen as impractical, and approaches such as enhancing human capabilities through transhumanist approaches like brain-computer interfaces being seen as potentially complementary.
Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporation to international levels, and on encouraging research into safe AI, together with the possibility of differential intellectual progress (prioritizing risk-reducing strategies over risk-taking strategies in AI development) or conducting international mass surveillance to perform AGI arms control.
For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as addressing other major threats to human well-being, such as subversion of the global financial system, until a superintelligence can be safely created.
It entails the creation of a smarter-than-human, but not superintelligent, artificial general intelligence system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Regulation of conscious, ethically aware AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. Regulation of AI has been seen as restrictive, with a risk of preventing the development of AGI.
Global guidance:
The development of a global governance board to regulate AI development was suggested at least as early as 2017. In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development.
In 2019 the Panel was renamed the Global Partnership on AI, but it is yet to be endorsed by the United States.
The OECD Recommendations on AI were adopted in May 2019, and the G20 AI Principles in June 2019. In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'. In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.
At the United Nations, several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics. At the 40th session of UNESCO's General Conference in November 2019, the organization commenced a two-year process to achieve a "global standard-setting instrument on ethics of artificial intelligence".
In pursuit of this goal, UNESCO forums and conferences on AI have taken place to gather stakeholder views. The most recent draft text of a recommendation on the ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and includes a call for legislative gaps to be filled.
Regional and national regulation:
The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and Russia. Since early 2016, many national, regional and international authorities have begun adopting strategies, actions plans and policy papers on AI.
These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.
China:
Further information: Artificial intelligence industry in China
The regulation of AI in China is mainly governed by the State Council of the PRC's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Communist Party of China and the State Council of the People's Republic of China urged the governing bodies of China to promote the development of AI.
Regulation of the issues of ethical and legal support for the development of AI is nascent, but policy ensures state control of Chinese companies and over valuable data, including storage of data on Chinese users within the country and the mandatory use of People's Republic of China's national standards for AI, including over big data, cloud computing, and industrial software.
Council of Europe:
The Council of Europe (CoE) is an international organization which promotes human rights, democracy and the rule of law, and comprises 47 member states, including all 29 signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence.
The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe’s aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions".
The large number of relevant documents identified by the CoE include guidelines, charters, papers, reports and strategies. The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states.
European Union:
Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent. The European Union is guided by a European Strategy on Artificial Intelligence, supported by a High-Level Expert Group on Artificial Intelligence.
In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI), following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.
The EU Commission’s High Level Expert Group on Artificial Intelligence carries out work on Trustworthy AI, and the Commission has issued reports on the Safety and Liability Aspects of AI and on the Ethics of Automated Vehicles. In 2020 the EU Commission sought views on a proposal for AI specific legislation, and that process is ongoing.
On February 2, 2020, the European Commission published its White Paper on Artificial Intelligence - A European approach to excellence and trust. The White Paper consists of two main building blocks, an ‘ecosystem of excellence’ and an ‘ecosystem of trust’.
The latter outlines the EU's approach for a regulatory framework for AI. In its proposed approach, the Commission differentiates between 'high-risk' and 'non-high-risk' AI applications. Only the former should be in the scope of a future EU regulatory framework.
Whether this would be the case could in principle be determined by two cumulative criteria, concerning critical sectors and critical use. The following key requirements are considered for high-risk AI applications: requirements for training data; data and record-keeping; informational duties; requirements for robustness and accuracy; human oversight; and specific requirements for certain AI applications, such as those used for purposes of remote biometric identification.
AI applications that do not qualify as ‘high-risk’ could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI in the form of a framework for cooperation of national competent authorities could facilitate the implementation of the regulatory framework.
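Read literally, the two cumulative criteria amount to a simple conjunction, as the hypothetical restatement below shows; this is an illustration of the White Paper's wording, not an official or legal test.

```python
def in_scope_of_proposed_framework(in_critical_sector: bool, critical_use: bool) -> bool:
    # Both cumulative criteria must hold for an application to count as 'high-risk'.
    return in_critical_sector and critical_use

# A hypothetical application in a critical sector but without a critical use:
print(in_scope_of_proposed_framework(True, False))  # -> False (voluntary labeling instead)
```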
United Kingdom:
The UK supported the application and development of AI in business via the Digital Economy Strategy 2015-2018, introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy. In the public sector, guidance has been provided by the Department for Digital, Culture, Media and Sport, on data ethics and the Alan Turing Institute, on responsible design and implementation of AI systems.
In terms of cyber security, the National Cyber Security Centre has issued guidance on ‘Intelligent Security Tools’.
United States:
Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts.
As early as 2016, the Obama administration had begun to focus on the risks and regulations for artificial intelligence. In a report titled Preparing For the Future of Artificial Intelligence, the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI technologies with few restrictions.
It is stated within the report that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk....". These risks would be the principal reason to create any form of regulation, granted that any existing regulation would not apply to AI technology.
The first main report was the National Strategic Research and Development Plan for Artificial Intelligence. On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."
Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence. The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States.
On January 7, 2019, following an Executive Order on Maintaining American Leadership in Artificial Intelligence, the White House’s Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI.
In response, the National Institute of Standards and Technology has released a position paper, the National Security Commission on Artificial Intelligence has published an interim report, and the Defense Innovation Board has issued recommendations on the ethical use of AI. A year later, the administration called for comments on further deregulation in another draft of its Guidance for Regulation of Artificial Intelligence Applications.
Other specific agencies working on the regulation of AI include the Food and Drug Administration, which has created pathways to regulate the incorporation of AI in medical imaging.
Regulation of fully autonomous weapons:
Main article: Lethal autonomous weapon
Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons.
Notably, informal meetings of experts took place in 2014, 2015 and 2016 and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS affirmed by the GGE on LAWS were adopted in 2018.
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation.
The possibility of a moratorium or preemptive ban of the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated for by the Campaign to Stop Killer Robots - a coalition of non-governmental organizations.
Regulation of Artificial Intelligence (Wikipedia)
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms.
The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks. Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.
Perspectives:
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI. Regulation is considered necessary to both encourage AI and manage associated risks.
Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems, although regulation of artificial superintelligences is also considered.
The basic approach to regulation focuses on the risks and biases of AI's underlying technology, i.e., machine-learning algorithms, at the level of the input data, algorithm testing, and the decision model, as well as whether explanations of biases in the code can be understandable for prospective recipients of the technology, and technically feasible for producers to convey.
AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.
A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human to machine interaction.
The development of public sector strategies for management and regulation of AI is deemed necessary at the local, national, and international levels and in a variety of fields, from public service management and accountability to law enforcement, healthcare (especially the concept of a Human Guarantee), the financial sector, robotics, autonomous vehicles, the military and national security, and international law.
In 2017 Elon Musk called for regulation of AI development. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization."
In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that AI is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest to rather develop common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.
As a response to the AI control problem:
Main article: AI control problem
Regulation of AI can be seen as positive social means to manage the AI control problem, i.e., the need to insure long-term beneficial AI, with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanism approaches such as brain-computer interfaces being seen as potentially complementary.
Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporation to international levels, and on encouraging research into safe AI, together with the possibility of differential intellectual progress (prioritizing risk-reducing strategies over risk-taking strategies in AI development) or conducting international mass surveillance to perform AGI arms control.
For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as addressing other major threats to human well-being, such as subversion of the global financial system, until a superintelligence can be safely created.
It entails the creation of a smarter-than-human, but not superintelligent, artificial general intelligence system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger." Regulation of conscious, ethically aware AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. Regulation of AI has been seen as restrictive, with a risk of preventing the development of AGI.
Global guidance:
The development of a global governance board to regulate AI development was suggested at least as early as 2017. In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the International Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development.
In 2019 the Panel was renamed the Global Partnership on AI, but it is yet to be endorsed by the United States.
The OECD Recommendations on AI were adopted in May 2019, and the G20 AI Principles in June 2019. In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'. In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.
At the United Nations, several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics. At UNESCO’s Scientific 40th session in November 2019, the organization commenced a two year process to achieve a "global standard-setting instrument on ethics of artificial intelligence".
In pursuit of this goal, UNESCO forums and conferences on AI have taken place to gather stakeholder views. The most recent draft text of a recommendation on the ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and includes a call for legislative gaps to be filled.
Regional and national regulation:
The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and Russia. Since early 2016, many national, regional and international authorities have begun adopting strategies, actions plans and policy papers on AI.
These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.
China:
Further information: Artificial intelligence industry in China
The regulation of AI in China is mainly governed by the State Council of the PRC's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Communist Party of China and the State Council of the People's Republic of China urged the governing bodies of China to promote the development of AI.
Regulation of the issues of ethical and legal support for the development of AI is nascent, but policy ensures state control of Chinese companies and over valuable data, including storage of data on Chinese users within the country and the mandatory use of People's Republic of China's national standards for AI, including over big data, cloud computing, and industrial software.
Council of Europe:
The Council of Europe (CoE) is an international organization which promotes human rights democracy and the rule of law and comprises 47 member states, including all 29 Signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence.
The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe’s aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions".
The large number of relevant documents identified by the CoE include guidelines, charters, papers, reports and strategies. The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states.
European Union:
Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent. The European Union is guided by a European Strategy on Artificial Intelligence, supported by a High-Level Expert Group on Artificial Intelligence.
In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI), following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.
The EU Commission’s High Level Expert Group on Artificial Intelligence carries out work on Trustworthy AI, and the Commission has issued reports on the Safety and Liability Aspects of AI and on the Ethics of Automated Vehicles. In 2020 the EU Commission sought views on a proposal for AI specific legislation, and that process is ongoing.
On February 2, 2020, the European Commission published its White Paper on Artificial Intelligence - A European approach to excellence and trust. The White Paper consists of two main building blocks, an ‘ecosystem of excellence’ and a ‘ecosystem of trust’.
The latter outlines the EU's approach for a regulatory framework for AI. In its proposed approach, the Commission differentiates between 'high-risk' and 'non-high-risk' AI applications. Only the former should be in the scope of a future EU regulatory framework.
Whether this would be the case could in principle be determined by two cumulative criteria, concerning critical sectors and critical use. Following key requirements are considered for high-risk AI applications: requirements for training data; data and record-keeping; informational duties; requirements for robustness and accuracy; human oversight; and specific requirements for specific AI applications, such as those used for purposes of remote biometric identification.
AI applications that do not qualify as ‘high-risk’ could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI in the form of a framework for cooperation of national competent authorities could facilitate the implementation of the regulatory framework.
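To make the proposed logic concrete, the short Python sketch below expresses the two cumulative criteria and the associated requirements as a simple classification routine. It is only an illustrative reading of the White Paper proposal summarized above; the sector names are assumed examples, and nothing here reflects enacted EU law.

```python
# Illustrative sketch of the White Paper's proposed risk-based scoping logic.
# Sector names are assumed examples, not an official list.

CRITICAL_SECTORS = {"healthcare", "transport", "energy"}  # assumed examples

# Key requirements proposed for high-risk applications.
HIGH_RISK_REQUIREMENTS = [
    "requirements for training data",
    "data and record-keeping",
    "informational duties",
    "robustness and accuracy",
    "human oversight",
    # plus specific requirements for specific applications,
    # e.g. remote biometric identification
]

def is_high_risk(sector: str, critical_use: bool) -> bool:
    """Both criteria are cumulative: the application must be deployed in a
    critical sector AND used in a manner that creates significant risk."""
    return sector in CRITICAL_SECTORS and critical_use

def applicable_regime(sector: str, critical_use: bool) -> list:
    """High-risk applications attract the key requirements; non-high-risk
    applications could instead join a voluntary labeling scheme."""
    if is_high_risk(sector, critical_use):
        return HIGH_RISK_REQUIREMENTS
    return ["voluntary labeling scheme (optional)"]

# Example: a risk-significant AI application deployed in healthcare.
print(applicable_regime("healthcare", critical_use=True))
```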
United Kingdom:
The UK supported the application and development of AI in business via the Digital Economy Strategy 2015-2018, introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy. In the public sector, guidance has been provided by the Department for Digital, Culture, Media and Sport on data ethics, and by the Alan Turing Institute on the responsible design and implementation of AI systems.
In terms of cyber security, the National Cyber Security Centre has issued guidance on ‘Intelligent Security Tools’.
United States:
Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts.
As early as 2016, the Obama administration had begun to focus on the risks and regulations for artificial intelligence. In a report titled Preparing For the Future of Artificial Intelligence, the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI technologies with few restrictions.
It is stated within the report that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk....". Such risks would be the principal justification for creating any new regulation, to the extent that existing regulation does not already apply to AI technology.
The first main report was the National Strategic Research and Development Plan for Artificial Intelligence. On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."
Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence. The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States.
On January 7, 2019, following an Executive Order on Maintaining American Leadership in Artificial Intelligence, the White House’s Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI.
In response, the National Institute of Standards and Technology has released a position paper, the National Security Commission on Artificial Intelligence has published an interim report, and the Defense Innovation Board has issued recommendations on the ethical use of AI. A year later, the administration called for comments on further deregulation in another draft of its Guidance for Regulation of Artificial Intelligence Applications.
Other specific agencies working on the regulation of AI include the Food and Drug Administration, which has created pathways to regulate the incorporation of AI in medical imaging.
Regulation of fully autonomous weapons:
Main article: Lethal autonomous weapon
Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons.
Notably, informal meetings of experts took place in 2014, 2015 and 2016, and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS affirmed by the GGE was adopted in 2018.
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation.
The possibility of a moratorium or preemptive ban on the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated by the Campaign to Stop Killer Robots, a coalition of non-governmental organizations.
See also:
- Artificial intelligence arms race
- Artificial intelligence control problem
- Artificial intelligence in government
- Ethics of artificial intelligence
- Government by algorithm
- Regulation of algorithms
Architectural Engineering
YouTube Video: What Do Architectural Engineers Do?
Pictured below: Intelligent Mechanical Engineering
Architectural engineering, also known as building engineering or architecture engineering, is an engineering discipline that deals with the technological aspects and multi-disciplinary approach to the planning, design, construction and operation of buildings, such as analysis and integrated design of environmental systems (energy conservation, HVAC, plumbing, lighting, fire protection, acoustics, vertical and horizontal transportation, electrical power systems), structural systems, behavior and properties of building components and materials, and construction management.[1][2]
From reduction of greenhouse gas emissions to the construction of resilient buildings, architectural engineers are at the forefront of addressing several major challenges of the 21st century. They apply the latest scientific knowledge and technologies to the design of buildings.
Architectural engineering emerged as a relatively new licensed profession in the 20th century as a result of rapid technological developments. Architectural engineers are at the forefront of two major historical opportunities that today's world is immersed in: (1) that of rapidly advancing computer technology, and (2) the parallel revolution arising from the need to create a sustainable planet.[3][4]
Distinguished from architecture as an art of design, architectural engineering is the art and science of engineering and construction as practiced in respect of buildings.
Related engineering and design fields:
Structural engineering:
Main article: Structural engineering
Structural engineering involves the analysis and design of the built environment (buildings, bridges, equipment supports, towers and walls).
Those concentrating on buildings are sometimes informally referred to as "building engineers". Structural engineers require expertise in strength of materials, structural analysis, and in predicting structural loads such as the weight of the building, its occupants and contents, and extreme events such as wind, rain, ice, and earthquakes; the seismic design of structures is referred to as earthquake engineering.
Architectural engineers sometimes incorporate structural design as one aspect of their work; the structural discipline, when practiced as a specialty, works closely with architects and other engineering specialists.
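As a simple illustration of the load prediction mentioned above, the following Python sketch combines an assumed dead load and live load using the common factored gravity combination 1.2D + 1.6L. The load values and factors are generic textbook examples, not requirements of any particular building code.

```python
# Generic illustration of combining gravity loads for structural design.
# The 1.2D + 1.6L combination and the sample loads are common textbook
# values used only as an example, not a code citation.

def factored_floor_load(dead_kpa: float, live_kpa: float) -> float:
    """Return a factored design load (kPa) using the basic LRFD-style
    gravity combination 1.2*Dead + 1.6*Live."""
    return 1.2 * dead_kpa + 1.6 * live_kpa

# Assumed example values: a concrete floor slab (dead load) carrying
# office occupancy (live load).
dead_load = 4.0   # kPa, slab self-weight plus finishes (assumed)
live_load = 2.4   # kPa, office occupancy (assumed)

design_load = factored_floor_load(dead_load, live_load)
print(f"Factored design load: {design_load:.2f} kPa")  # 1.2*4.0 + 1.6*2.4 = 8.64 kPa
```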
Mechanical, electrical, and plumbing (MEP):
Mechanical and electrical engineers act as specialists when engaged in the building design fields. This work is known as mechanical, electrical, and plumbing (MEP) throughout the United States, or building services engineering in the United Kingdom, Canada, and Australia.[6] Mechanical engineers often design and oversee the heating, ventilation and air conditioning (HVAC), plumbing, and rainwater systems.
Plumbing designers often include design specifications for simple active fire protection systems, but for more complicated projects, fire protection engineers are often separately retained.
Electrical engineers are responsible for the building's power distribution, telecommunication, fire alarm, signalization, lightning protection and control systems, as well as lighting systems.
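To give a flavor of the calculations behind HVAC design, the sketch below estimates steady-state conductive heat loss through a wall using Q = U·A·ΔT. The U-value, area and design temperatures are assumed illustrative figures; real HVAC sizing also accounts for ventilation, infiltration, solar gains and safety margins.

```python
# Simplified steady-state heat-loss estimate of the kind that feeds
# HVAC equipment sizing.  All numeric inputs are assumed for illustration.

def conductive_heat_loss(u_value: float, area_m2: float,
                         indoor_c: float, outdoor_c: float) -> float:
    """Return heat loss in watts through a building element: Q = U * A * dT."""
    return u_value * area_m2 * (indoor_c - outdoor_c)

# Assumed example: an insulated exterior wall on a cold design day.
q_wall = conductive_heat_loss(
    u_value=0.3,     # W/(m^2*K), assumed insulated wall
    area_m2=25.0,    # m^2 of wall area
    indoor_c=21.0,   # design indoor temperature, deg C
    outdoor_c=-5.0,  # design outdoor temperature, deg C
)
print(f"Wall heat loss: {q_wall:.0f} W")  # 0.3 * 25 * 26 = 195 W
```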
The architectural engineer (PE) in the United States:
Main article: Architectural engineer (PE)
In many jurisdictions of the United States, the architectural engineer is a licensed engineering professional.[7]
Such an engineer is usually a graduate of an EAC/ABET-accredited architectural engineering university program that prepares students to perform whole-building design in competition with architect-engineer teams, or to practice in one of the structural, mechanical or electrical fields of building design, but with an appreciation of integrated architectural requirements.
Although some states require, without exception, a BS degree from an EAC/ABET-accredited engineering program, about two-thirds of the states also accept BS degrees from ETAC/ABET-accredited architectural engineering technology programs for licensure as engineering professionals.
Architectural engineering technology graduates, with applied engineering skills, often pursue further learning through an MS degree in engineering and/or an NAAB-accredited Master of Architecture in order to become licensed as both an engineer and an architect. This path requires the individual to pass state licensing exams in both disciplines.
States differ in how they treat experience gained working under a licensed engineer and/or registered architect prior to taking the examinations. This education model is more in line with the educational system in the United Kingdom, where an accredited MEng or MS degree in engineering is required by the Engineering Council for registration as a Chartered Engineer.
The National Council of Architectural Registration Boards (NCARB) facilitates the licensure and credentialing of architects, but requirements for registration often vary between states.
In the state of New Jersey, a registered architect is allowed to sit for the PE exam and a professional engineer is allowed to take the design portions of the Architectural Registration Exam (ARE), to become a registered architect. It is becoming more common for highly educated architectural engineers in the United States to become licensed as both engineer and architect.
Formal architectural engineering education, following the engineering model of earlier disciplines, developed in the late 19th century, and became widespread in the United States by the mid-20th century.
With the establishment of a specific "architectural engineering" NCEES Professional Engineering registration examination in the 1990s, and its first offering in April 2003, architectural engineering became recognized as a distinct engineering discipline in the United States. An up-to-date NCEES account allows engineers to apply for a PE license in other states "by comity".
In most license-regulated jurisdictions, architectural engineers are not entitled to practice architecture unless they are also licensed as architects. Practice of structural engineering in high-risk locations, e.g., due to strong earthquakes, or on specific types of higher importance buildings such as hospitals, may require separate licensing as well. Regulations and customary practice vary widely by state or city.
The architect as architectural engineer:
See also: Architect § Professional requirements
In some countries, the practice of architecture includes planning, designing and overseeing the building's construction, and architecture, as a profession providing architectural services, is referred to as "architectural engineering".
In Japan, a "first-class architect" plays the dual role of architect and building engineer, although the services of a licensed "structural design first-class architect"(構造設計一級建築士) are required for buildings over a certain scale.[8]
In some languages, such as Korean and Arabic, "architect" is literally translated as "architectural engineer". In some countries, an "architectural engineer" (such as the ingegnere edile in Italy) is entitled to practice architecture and is often referred to as an architect.[citation needed] These individuals are often also structural engineers.
In other countries, such as Germany, Austria, Iran, and most of the Arab countries, architecture graduates receive an engineering degree (Dipl.-Ing. – Diplom-Ingenieur).[9]
In Spain, an "architect" has a technical university education and legal powers to carry out building structure and facility projects.[10]
In Brazil, architects and engineers used to share the same accreditation process (Conselho Federal de Engenheiros, Arquitetos e Agrônomos (CONFEA) – Federal Council of Engineering, Architecture and Agronomy). Now Brazilian architects and urbanists have their own accreditation process (CAU – Architecture and Urbanism Council). Besides traditional architectural design training,
Brazilian architecture courses also offer complementary training in engineering disciplines such as structural, electrical, hydraulic and mechanical engineering. After graduation, architects focus on architectural planning, yet they can be responsible for the whole building in the case of small buildings (except for the electrical wiring, where the architect's autonomy is limited to systems up to 30 kVA; beyond that, an electrical engineer is required). Their work applies to buildings, the urban environment, built cultural heritage, landscape planning, interiorscape planning and regional planning.[11][12]
In Greece, licensed architectural engineers are graduates of architecture faculties that belong to polytechnic universities,[13] obtaining an "Engineering Diploma". They graduate after five years of study and are fully entitled architects once they become members of the Technical Chamber of Greece (TEE – Τεχνικό Επιμελητήριο Ελλάδος).[14][15] The Technical Chamber of Greece has more than 100,000 members encompassing all the engineering disciplines as well as architecture.
A prerequisite for membership is to be licensed as a qualified engineer or architect and to be a graduate of an engineering or architecture school of a Greek university, or of an equivalent school abroad. The Technical Chamber of Greece is the authorized body that provides work licenses to engineers of all disciplines, as well as architects, who graduated in Greece or abroad. The license is awarded after examinations, which take place three to four times a year. The Engineering Diploma equals a master's degree in ECTS units (300) according to the Bologna Accords.[16]
Education:
Further information: Engineer's degree
The architectural, structural, mechanical and electrical engineering branches each have well established educational requirements that are usually fulfilled by completion of a university program.
Architectural engineering as a single integrated field of study:
Main article: Building engineering education
Its multi-disciplinary engineering approach is what differentiates architectural engineering from architecture (the field of the architect): architectural engineering is an integrated, separate and single field of study when compared to other engineering disciplines.
Through training in and appreciation of architecture, the field seeks integration of building systems within its overall building design. Architectural engineering includes the design of building systems including heating, ventilation and air conditioning (HVAC), plumbing, fire protection, electrical, lighting, architectural acoustics, and structural systems.
In some university programs, students are required to concentrate on one of the systems; in others, they can receive a generalist architectural or building engineering degree.
See also:
- Architectural drawing
- Architectural technologist
- Architectural technology
- Building engineer
- Building officials
- Civil engineering
- Construction engineering
- Contour crafting
- History of architectural engineering
- International Building Code
- Mechanical, electrical, and plumbing
- Outline of architecture