Copyright © 2015 Bert N. Langford (Images may be subject to copyright. Please send feedback)
Welcome to Our Generation USA!
Innovations (& Their Innovators)
found in smart electronics, communications, military, transportation, science, engineering, and other fields and disciplines.
For a List of Medical Breakthroughs topics Click Here
For Computer Advancements, Click Here
For the Internet, Click Here
For Smartphones, Video and Online Games, Click Here
Innovations and Inventions
YouTube Video: Top 10 Inventions of the 20th Century by WatchMojo
Pictured below: Innovations in Technology that Humanity Will Reach by the Year 2030
Innovation can be defined simply as a "new idea, device or method". However, innovation is often also viewed as the application of better solutions that meet new requirements, unarticulated needs, or existing market needs.
Such innovation takes place through the provision of more-effective products, processes, services, technologies, or business models that are made available to markets, governments and society. The term "innovation" can be defined as something original and more effective and, as a consequence, new, that "breaks into" the market or society.
Innovation is related to, but not the same as, invention (see next topic below), as innovation is more apt to involve the practical implementation of an invention (i.e. new/improved ability) to make a meaningful impact in the market or society, and not all innovations require an invention. Innovation often manifests itself via the engineering process, when the problem being solved is of a technical or scientific nature. The opposite of innovation is exnovation.
While a novel device is often described as an innovation, in economics, management science, and other fields of practice and analysis, innovation is generally considered to be the result of a process that brings together various novel ideas in such a way that they affect society.
In industrial economics, innovations are found, empirically, to arise from services created to meet growing consumer demand.
A 2014 survey of the literature on innovation found over 40 definitions. In an industrial survey of how the software industry defines innovation, the following definition, given by Crossan and Apaydin and building on the Organisation for Economic Co-operation and Development (OECD) manual's definition, was considered the most complete:
Innovation is:
- production or adoption, assimilation, and exploitation of a value-added novelty in economic and social spheres;
- renewal and enlargement of products, services, and markets;
- development of new methods of production;
- and establishment of new management systems.
It is both a process and an outcome.
According to Kanter, innovation includes original invention as well as creative use; she defines innovation as the generation, admission, and realization of new ideas, products, services, and processes.
Two main dimensions of innovation are the degree of novelty (i.e. whether an innovation is new to the firm, new to the market, new to the industry, or new to the world) and the type of innovation (i.e. whether it is process or product-service system innovation). In recent organizational scholarship, researchers of workplaces have also distinguished innovation from creativity by providing an updated definition of these two related but distinct constructs:
Workplace creativity concerns the cognitive and behavioral processes applied when attempting to generate novel ideas. Workplace innovation concerns the processes applied when attempting to implement new ideas.
Specifically, innovation involves some combination of problem/opportunity identification, the introduction, adoption or modification of new ideas germane to organizational needs, the promotion of these ideas, and the practical implementation of these ideas.
Click on any of the following blue hyperlinks for more about Innovation:
- Inter-disciplinary views
- Diffusion
- Measures
- Government policies
- See also:
- Communities of innovation
- Creative competitive intelligence
- Creative problem solving
- Creativity
- Diffusion of innovations
- Deployment
- Disruptive innovation
- Diffusion (anthropology)
- Ecoinnovation
- Global Innovation Index (Boston Consulting Group)
- Global Innovation Index (INSEAD)
- Greatness
- Hype cycle
- Individual capital
- Induced innovation
- Information revolution
- Ingenuity
- Innovation leadership
- Innovation management
- Innovation system
- Knowledge economy
- List of countries by research and development spending
- List of emerging technologies
- List of Russian inventors
- Multiple discovery
- Obsolescence
- Open Innovation
- Open Innovations (Forum and Technology Show)
- Outcome-Driven Innovation
- Paradigm shift
- Participatory design
- Pro-innovation bias
- Public domain
- Research
- State of the art
- Sustainable Development Goals (Agenda 9)
- Technology Life Cycle
- Technological innovation system
- Theories of technology
- Timeline of historic inventions
- Toolkits for User Innovation
- UNDP Innovation Facility
- Value network
- Virtual product development
An invention is a unique or novel device, method, composition or process. The invention process is a process within an overall engineering and product development process. It may be an improvement upon a machine or product or a new process for creating an object or a result.
An invention that achieves a completely unique function or result may be a radical breakthrough. Such works are novel and not obvious to others skilled in the same field. An inventor may be taking a big step toward success or failure.
Some inventions can be patented. A patent protects the intellectual property rights of the inventor and legally recognizes that a claimed invention is actually an invention. The rules and requirements for patenting an invention vary from country to country, and the process of obtaining a patent is often expensive.
Another meaning of invention is cultural invention, which is an innovative set of useful social behaviors adopted by people and passed on to others. The Institute for Social Inventions collected many such ideas in magazines and books. Invention is also an important component of artistic and design creativity.
Inventions often extend the boundaries of human knowledge, experience or capability.
Inventions are of three kinds:
- scientific-technological (including medicine),
- sociopolitical (including economics and law),
- and humanistic, or cultural.
Scientific-technological inventions include:
- railroads,
- aviation,
- vaccination,
- hybridization,
- antibiotics,
- astronautics,
- holography,
- the atomic bomb,
- computing,
- the Internet,
- and the smartphone.
Sociopolitical inventions comprise new laws, institutions, and procedures that change modes of social behavior and establish new forms of human interaction and organization. Examples include:
- the British Parliament,
- the US Constitution,
- the Manchester (UK) General Union of Trades,
- the Boy Scouts,
- the Red Cross,
- the Olympic Games,
- the United Nations,
- the European Union,
- and the Universal Declaration of Human Rights,
- as well as movements such as:
- socialism,
- Zionism,
- suffragism,
- feminism,
- and animal-rights veganism.
Humanistic inventions encompass culture in its entirety and are as transformative and important as any in the sciences, although people tend to take them for granted. In the domain of linguistics, for example, many alphabets have been inventions, as are all neologisms (Shakespeare invented about 1,700 words).
Literary inventions include:
- the epic,
- tragedy,
- comedy,
- the novel,
- the sonnet,
- the Renaissance,
- neoclassicism,
- Romanticism,
- Symbolism,
- Aestheticism,
- Socialist Realism,
- Surrealism,
- postmodernism,
- and (according to Freud) psychoanalysis.
Among the inventions of artists and musicians are:
- oil painting,
- printmaking,
- photography,
- cinema,
- musical tonality,
- atonality,
- jazz,
- rock,
- opera,
- and the symphony orchestra.
Philosophers have invented:
- logic (several times),
- dialectics,
- idealism,
- materialism,
- utopia,
- anarchism,
- semiotics,
- phenomenology,
- behaviorism,
- positivism,
- pragmatism,
- and deconstruction.
Religious thinkers are responsible for such inventions as:
- monotheism,
- pantheism,
- Methodism,
- Mormonism,
- iconoclasm,
- puritanism,
- deism,
- secularism,
- ecumenism,
- and Baha’i.
Some of these disciplines, genres, and trends may seem to have existed eternally or to have emerged spontaneously of their own accord, but most of them have had inventors.
For more about Inventions, click on any of the following blue hyperlinks:
- Process of invention
- Invention vs. innovation
- Purposes of invention
- Invention as defined by patent law
- Invention in the arts
- See also:
- Bayh-Dole Act
- Chindōgu
- Creativity techniques
- Directive on the legal protection of biotechnological inventions
- Discovery (observation)
- Edisonian approach
- Heroic theory of invention and scientific development
- Independent inventor
- Ingenuity
- INPEX (invention show)
- International Innovation Index
- Invention promotion firm
- Inventors' Day
- Kranzberg's laws of technology
- Lemelson-MIT Prize
- Category:Lists of inventions or discoveries
- List of inventions named after people
- List of inventors
- List of prolific inventors
- Multiple discovery
- National Inventors Hall of Fame
- Patent model
- Proof of concept
- Proposed directive on the patentability of computer-implemented inventions - it was rejected
- Scientific priority
- Technological revolution
- The Illustrated Science and Invention Encyclopedia
- Timeline of historic inventions
- Science and invention in Birmingham - from the first cotton spinning mill to plastics and steam power.
- Invention Ideas
- List of PCT (Patent Cooperation Treaty) Notable Inventions at WIPO
- Hottelet, Ulrich (October 2007). "Invented in Germany - made in Asia". The Asia Pacific Times. Archived from the original on 2012-05-01
Smart Home Technology
YouTube Video: Example of Smart Home Technology in Action...
Pictured: Illustration courtesy of http://www.futureforall.org/home/homeofthefuture.htm
Home automation or smart home is the residential extension of building automation and involves the control and automation of lighting, heating (such as smart thermostats), ventilation, air conditioning (HVAC), and security, as well as home appliances such as washer/dryers, ovens or refrigerators/freezers that use WiFi for remote monitoring.
Modern systems generally consist of switches and sensors connected to a central hub, sometimes called a "gateway", from which the system is controlled through a user interface such as a wall-mounted terminal, mobile phone app, tablet computer, or web interface, often but not always via Internet cloud services.
While there are many competing vendors, there are very few widely accepted industry standards, and the smart home space is heavily fragmented.
Popular communications protocols for products include the following:
- X10,
- Ethernet,
- RS-485,
- 6LoWPAN,
- Bluetooth LE (BLE),
- ZigBee,
- and Z-Wave,
- or other proprietary protocols, all of which are incompatible with one another.
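Because these protocols do not interoperate, the central hub or gateway described above typically hides them behind one common interface. The following is a minimal, hypothetical sketch of that idea; the adapter and device names are invented for illustration and do not correspond to any real vendor API.

```python
# Hypothetical sketch of a gateway presenting one interface over devices that
# speak different, mutually incompatible protocols. The class and device names
# are illustrative, not real vendor APIs.
from abc import ABC, abstractmethod


class ProtocolAdapter(ABC):
    """Wraps one protocol-specific radio or bridge behind a common interface."""

    @abstractmethod
    def send(self, device_id: str, command: str) -> None:
        ...


class ZWaveAdapter(ProtocolAdapter):
    def send(self, device_id: str, command: str) -> None:
        # A real system would encode a Z-Wave frame here; we just log.
        print(f"[Z-Wave] {device_id} <- {command}")


class ZigBeeAdapter(ProtocolAdapter):
    def send(self, device_id: str, command: str) -> None:
        print(f"[ZigBee] {device_id} <- {command}")


class Gateway:
    """The 'hub': maps each registered device to the adapter that can reach it."""

    def __init__(self) -> None:
        self._routes: dict[str, ProtocolAdapter] = {}

    def register(self, device_id: str, adapter: ProtocolAdapter) -> None:
        self._routes[device_id] = adapter

    def command(self, device_id: str, command: str) -> None:
        self._routes[device_id].send(device_id, command)


if __name__ == "__main__":
    hub = Gateway()
    hub.register("porch-light", ZWaveAdapter())
    hub.register("thermostat", ZigBeeAdapter())
    hub.command("porch-light", "on")       # user taps a button in the app
    hub.command("thermostat", "set 21C")
```

The benefit of this design is that the user interface (wall panel, phone app, or web page) only ever talks to the gateway; the protocol differences stay hidden inside the adapters.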
Manufacturers often prevent independent implementations by withholding documentation and by suing people.
The home automation market was worth US$5.77 billion in 2015, predicted to have a market value over US$10 billion by the year 2020.
According to Li et al. (2016), there are three generations of home automation:
- First generation: wireless technology with a proxy server, e.g. ZigBee automation;
- Second generation: artificial intelligence controlling electrical devices, e.g. Amazon Echo;
- Third generation: a robot buddy that interacts with humans, e.g. the Rovio robot or the Roomba.
Applications and technologies:
- Heating, ventilation and air conditioning (HVAC): home energy monitors and thermostats can be controlled remotely over the internet through a simple, friendly user interface (a minimal control-loop sketch follows this list).
- Lighting control system
- Appliance control and integration with the smart grid and a smart meter, taking advantage, for instance, of high solar panel output in the middle of the day to run washing machines.
- Security: a household security system integrated with a home automation system can provide additional services such as remote surveillance of security cameras over the Internet, or central locking of all perimeter doors and windows.
- Leak detection, smoke and CO detectors
- Indoor positioning systems
- Home automation for the elderly and disabled
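As a concrete illustration of the HVAC item above, here is a minimal sketch of the kind of on/off control loop a smart thermostat might run. It is the generic textbook approach (hysteresis control), not any vendor's implementation; a remote user interface would simply change the setpoint.

```python
# Minimal thermostat logic sketch (hypothetical, not a vendor implementation):
# classic on/off control with hysteresis so the furnace does not rapidly cycle.

def heating_command(current_temp_c: float,
                    setpoint_c: float,
                    heating_on: bool,
                    hysteresis_c: float = 0.5) -> bool:
    """Return True if the furnace should be on for the next interval."""
    if current_temp_c < setpoint_c - hysteresis_c:
        return True            # too cold: turn (or keep) heat on
    if current_temp_c > setpoint_c + hysteresis_c:
        return False           # warm enough: turn heat off
    return heating_on          # inside the dead band: keep current state


# Example run: a remote user has set the target temperature to 20 C.
state = False
for reading in [19.2, 19.4, 19.6, 20.1, 20.6, 20.4]:
    state = heating_command(reading, setpoint_c=20.0, heating_on=state)
    print(f"temp={reading:.1f}C -> furnace {'ON' if state else 'OFF'}")
```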
Implementations:
In a review of home automation devices, Consumer Reports found two main concerns for consumers:
- A WiFi network connected to the internet can be vulnerable to hacking.
- Technology is still in its infancy, and consumers could invest in a system that becomes abandonware. In 2014, Google bought the company selling the Revolv Hub home automation system, integrated it with Nest and in 2016 shut down the servers Revolv Hub depended on, rendering the hardware useless.
In 2011, Microsoft Research found that home automation could involve a high cost of ownership, inflexibility of interconnected devices, and poor manageability.
Historically systems have been sold as complete systems where the consumer relies on one vendor for the entire system including the hardware, the communications protocol, the central hub, and the user interface. However, there are now open source software systems which can be used with proprietary hardware.
Protocols:
There are a wide variety of technology platforms, or protocols, on which a smart home can be built. Each one is, essentially, its own language. Each language speaks to the various connected devices and instructs them to perform a function.
Automation protocols have used several transports: direct wired connections, powerline signaling (such as UPB), hybrid wired/wireless systems, and purely wireless links.
Most of these protocols are not open, though each exposes an API.
Click here for a chart listing available protocols.
Criticism and controversies:
Home automation suffers from platform fragmentation and a lack of technical standards: the variety of home automation devices, in both hardware and the software running on them, makes it hard to develop applications that work consistently across these inconsistent technology ecosystems.
Customers may be hesitant to bet their IoT future on proprietary software or hardware devices that use proprietary protocols that may fade or become difficult to customize and interconnect.
Home automation devices' amorphous computing nature is also a problem for security, since patches to bugs found in the core operating system often do not reach users of older and lower-price devices.
One set of researchers say that the failure of vendors to support older devices with patches and updates leaves more than 87% of active devices vulnerable.
See Also:
- Home automation for the elderly and disabled
- Internet of Things
- List of home automation software and hardware
- List of home automation topics
- List of network buses
- Smart device
- Web of Things
Sergey Brin (co-founder of Google)
YouTube Video: Sergey Brin talks about Google Glass at TED 2013
Sergey Mikhaylovich Brin (born August 21, 1973) is a Soviet-born American computer scientist, internet entrepreneur, and philanthropist. Together with Larry Page, he co-founded Google.
Brin is the President of Google's parent company Alphabet Inc. In October 2016 (the most recent period for which figures are available), Brin was the 12th richest person in the world, with an estimated net worth of US$39.2 billion.
Brin immigrated to the United States with his family from the Soviet Union at the age of 6. He earned his bachelor's degree at the University of Maryland, following in his father's and grandfather's footsteps by studying mathematics, as well as computer science.
After graduation, he moved to Stanford University to acquire a PhD in computer science. There he met Page, with whom he later became friends. They crammed their dormitory room with inexpensive computers and applied Brin's data mining system to build a web search engine. The program became popular at Stanford, and they suspended their PhD studies to start up Google in a rented garage.
The Economist referred to Brin as an "Enlightenment Man", and as someone who believes that "knowledge is always good, and certainly always better than ignorance", a philosophy that is summed up by Google's mission statement, "Organize the world's information and make it universally accessible and useful," and unofficial, sometimes controversial motto, "Don't be evil".
Click on any of the following blue hyperlinks to learn more about Sergey Brin:
- Early life and education
- Search engine development
- Other interests
- Censorship of Google in China
- Personal life
- Awards and accolades
- Filmography
Larry Page (co-founder of Google)
YouTube Video: Where's Google going next? | Larry Page
Lawrence "Larry" Page (born March 26, 1973) is an American computer scientist and an Internet entrepreneur who co-founded Google Inc. with Sergey Brin in 1998.
Page is the chief executive officer (CEO) of Google's parent company, Alphabet Inc. After stepping aside as Google CEO in August 2001 in favor of Eric Schmidt, he re-assumed the role in April 2011. He announced his intention to step aside a second time in July 2015 to become CEO of Alphabet, under which Google's assets would be reorganized.
Under Page, Alphabet is seeking to deliver major advancements in a variety of industries.
As of November 2016, he was the 12th richest person in the world, with an estimated net worth of US$36.9 billion.
Page is the inventor of PageRank, Google's best-known search ranking algorithm. Page received the Marconi Prize in 2004.
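PageRank scores a page by the importance of the pages linking to it. Below is a minimal sketch of the published, textbook formulation computed by power iteration on a toy link graph; it illustrates the algorithm's core idea, not Google's production system, and the toy graph is invented for the example.

```python
# Textbook PageRank by power iteration on a toy link graph.
# This illustrates the published algorithm, not Google's production system.

def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                      # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank


if __name__ == "__main__":
    toy_web = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],   # D links out but nothing links to D
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

In this toy graph, page C ends up with the highest score because every other page links to it, which is exactly the intuition behind ranking by incoming links.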
For more about Larry Page, click on any of the following blue hyperlinks:
- Early life and education
- PhD studies and research
- Alphabet
- Other interests
- Personal life
- Awards and accolades
Steve Jobs
YouTube Video: Steve Jobs on Courage*
* -- This is a clip from the D8 Conference, recorded in 2010. Steve Jobs talks about the courage it takes to remove certain pieces of technology from Apple products. This came up after the iPad was introduced without support for Flash, just as the iPhone had been in 2007. The clip adds some perspective to the debate over Apple's new AirPods and the decision to remove the traditional analog audio connector from the iPhone 7. This kind of decision is not new to Apple.
Steven Paul "Steve" Jobs (February 24, 1955 – October 5, 2011) was an American businessman, inventor, and industrial designer. He was the co-founder, chairman, and chief executive officer (CEO) of Apple Inc.; CEO and majority shareholder of Pixar; a member of The Walt Disney Company's board of directors following its acquisition of Pixar; and founder, chairman, and CEO of NeXT. Jobs is widely recognized as a pioneer of the microcomputer revolution of the 1970s and 1980s, along with Apple co-founder Steve Wozniak.
Jobs was adopted at birth in San Francisco, and raised in the San Francisco Bay Area during the 1960s. Jobs briefly attended Reed College in 1972 before dropping out. He then decided to travel through India in 1974 seeking enlightenment and studying Zen Buddhism.
Jobs's declassified FBI report says an acquaintance knew that Jobs used illegal drugs in college including marijuana and LSD. Jobs told a reporter once that taking LSD was "one of the two or three most important things" he did in his life.
Jobs co-founded Apple in 1976 to sell Wozniak's Apple I personal computer. The duo gained fame and wealth a year later for the Apple II, one of the first highly successful mass-produced personal computers. In 1979, after a tour of PARC, Jobs saw the commercial potential of the Xerox Alto, which was mouse-driven and had a graphical user interface (GUI).
This led to development of the unsuccessful Apple Lisa in 1983, followed by the breakthrough Macintosh in 1984.
In addition to being the first mass-produced computer with a GUI, the Macintosh instigated the sudden rise of the desktop publishing industry in 1985 with the addition of the Apple LaserWriter, the first laser printer to feature vector graphics. Following a long power struggle, Jobs was forced out of Apple in 1985.
After leaving Apple, Jobs took a few of its employees with him to found NeXT, a computer platform development company specializing in state-of-the-art computers for higher-education and business markets.
In addition, Jobs helped to initiate the development of the visual effects industry when he funded the spinout of the computer graphics division of George Lucas's Lucasfilm in 1986. The new company, Pixar, would eventually produce the first fully computer-animated film, Toy Story—an event made possible in part because of Jobs's financial support.
In 1997, Apple acquired and merged NeXT, allowing Jobs to become CEO once again, reviving the company at the verge of bankruptcy. Beginning in 1997 with the "Think different" advertising campaign, Jobs worked closely with designer Jonathan Ive to develop a line of products that would have larger cultural ramifications:
- the iMac,
- iTunes and iTunes Store,
- Apple Store,
- iPod,
- iPhone,
- App Store,
- and the iPad (see picture above)
Mac OS was also revamped into OS X (renamed “macOS” in 2016), based on NeXT's NeXTSTEP platform.
Jobs was diagnosed with a pancreatic neuroendocrine tumor in 2003 and died of respiratory arrest related to the tumor on October 5, 2011.
Click on any of the following blue hyperlinks for more about Steve Jobs:
- Background
- Childhood
- Homestead High
- Reed College
- 1972–1985
- 1985–1997
- 1997–2011
- Portrayals and coverage in books, film, and theater
- Innovations and designs
- Honors and awards
- See also:
Laser Technology and its Applications
YouTube Video: Laser Assisted Cataract Surgery
YouTube Video of an Amazing Laser Show
Pictured: The lasers commonly employed in laser scanning confocal microscopy are high-intensity monochromatic light sources, which are useful as tools for a variety of techniques including optical trapping, lifetime imaging studies, photobleaching recovery, and total internal reflection fluorescence. In addition, lasers are also the most common light source for scanning confocal fluorescence microscopy, and have been utilized, although less frequently, in conventional widefield fluorescence investigations
A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The term "laser" originated as an acronym for "light amplification by stimulated emission of radiation".
The first laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories, based on theoretical work by Charles Hard Townes and Arthur Leonard Schawlow.
A laser differs from other sources of light in that it emits light coherently. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as laser cutting and lithography.
Spatial coherence also allows a laser beam to stay narrow over great distances (collimation), enabling applications such as laser pointers. Lasers can also have high temporal coherence, which allows them to emit light with a very narrow spectrum, i.e., they can emit a single color of light. Temporal coherence can be used to produce pulses of light as short as a femtosecond.
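A quick back-of-the-envelope calculation shows what these coherence properties mean in practice, using standard textbook relations for Gaussian beams and pulses (divergence angle of roughly the wavelength divided by pi times the beam waist, and a time-bandwidth product of about 0.441 for a transform-limited Gaussian pulse). The specific wavelengths and beam sizes below are illustrative choices, not measured data.

```python
# Back-of-the-envelope numbers behind the collimation and short-pulse claims,
# using standard textbook relations for Gaussian beams/pulses. The specific
# wavelengths and sizes are illustrative assumptions, not measured data.
import math

c = 3.0e8  # speed of light, m/s

# Collimation: far-field half-angle divergence of a Gaussian beam is
# theta ~ lambda / (pi * w0), where w0 is the beam waist radius.
wavelength = 532e-9            # green laser pointer, m
waist = 0.5e-3                 # 0.5 mm waist radius, m
theta = wavelength / (math.pi * waist)
radius_growth_cm = theta * 100 * 100   # growth over 100 m, in cm
print(f"divergence ~ {theta*1e3:.2f} mrad; "
      f"beam radius grows only ~{radius_growth_cm:.1f} cm over 100 m")

# Temporal coherence: a transform-limited Gaussian pulse obeys
# delta_nu * delta_t ~ 0.441, so a 10 fs pulse needs tens of THz of bandwidth.
pulse = 10e-15                 # 10 femtoseconds
bandwidth_hz = 0.441 / pulse
center = 800e-9                # Ti:sapphire-like wavelength, m
bandwidth_nm = center**2 * bandwidth_hz / c * 1e9
print(f"10 fs pulse -> bandwidth ~ {bandwidth_hz/1e12:.0f} THz "
      f"(~{bandwidth_nm:.0f} nm around 800 nm)")
```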
Among their many applications, lasers are used in:
- optical disk drives,
- laser printers,
- barcode scanners;
- DNA sequencing instruments,
- fiber-optic and free-space optical communication;
- laser surgery and skin treatments;
- cutting and welding materials;
- military and law enforcement devices for marking targets and measuring range and speed;
- and laser lighting displays in entertainment.
Click here for more about Laser Technology
Click on any of the following blue hyperlinks for additional Laser Applications:
- Scientific
- Military
- Medical
- Industrial and commercial
- Entertainment and recreation
- Surveying and ranging
- Bird deterrent
- Images
- See also:
Technologies for Clothing and Textiles, including their Timelines
YouTube Video: 100 Years of Fashion: Women
YouTube Video: 100 Years of Fashion: Men
Pictured Below:
TOP: Increases in capital investment from 2009-2014: 2016 State Of The U.S. Textile Industry
BOTTOM: Images of how female fashion has changed from (LEFT) 1950; to (RIGHT) Today: (L-R) Vanessa Hudgens, Miranda Kerr and Ashley Tisdale.
Click here for the Timeline of clothing and textiles technology
Clothing technology involves the manufacturing, materials, and design innovations that have been developed and used.
The timeline of clothing and textiles technology includes major changes in the manufacture and distribution of clothing.
From clothing in the ancient world to modernity, the use of technology has dramatically influenced clothing and fashion. Industrialization brought changes in the manufacture of goods: in many nations, homemade goods crafted by hand have largely been replaced by factory-produced goods made on assembly lines and purchased in a consumer culture.
Innovations include man-made materials such as polyester, nylon, and vinyl as well as features like zippers and velcro. The advent of advanced electronics has resulted in wearable technology being developed and popularized since the 1980s.
Design is an important part of the industry beyond utilitarian concerns and the fashion and glamour industries have developed in relation to clothing marketing and retail.
Environmental and human rights issues have also become considerations for clothing and spurred the promotion and use of some natural materials such as bamboo that are considered environmentally friendly.
Click on any of the following blue hyperlinks for more information about clothing technology:
- Production
- Sports
- Education
- See also
Textile manufacturing is a major industry. It is based on the conversion of fibre into yarn and yarn into fabric, which is then dyed or printed and fabricated into clothes.
Different types of fibre are used to produce yarn. Cotton remains the most important natural fibre, so it is treated in depth here. Many variable processes are available at the spinning and fabric-forming stages, and these, coupled with the complexities of the finishing and coloration processes, support the production of a wide range of products. A large industry also remains that uses hand techniques to achieve the same results.
Click here for more about Textile Manufacturing.
Advancements in Technologies for Agriculture and Food, Including their Timeline
YouTube Video: Latest Technology Machines, New Modern Agriculture Machines compilation 2016
Pictured: LEFT: Six Ways Drones are Revolutionizing Agriculture; RIGHT: A corn farmer sprays weed killer across his corn field in Auburn, Ill.
Click here for a Timeline of Agriculture Technology Advancements.
By the United States Department of Agriculture (USDA), National Institute of Food and Agriculture:
Modern farms and agricultural operations work far differently than those a few decades ago, primarily because of advancements in technology, including sensors, devices, machines, and information technology.
Today’s agriculture routinely uses sophisticated technologies such as robots, temperature and moisture sensors, aerial images, and GPS technology. These advanced devices and precision agriculture and robotic systems allow businesses to be more profitable, efficient, safer, and more environmentally friendly.
IMPORTANCE OF AGRICULTURAL TECHNOLOGY:
Farmers no longer have to apply water, fertilizers, and pesticides uniformly across entire fields. Instead, they can use the minimum quantities required and target very specific areas, or even treat individual plants differently; a toy illustration of this idea appears after the list below. Benefits include:
- Higher crop productivity
- Decreased use of water, fertilizer, and pesticides, which in turn keeps food prices down
- Reduced impact on natural ecosystems
- Less runoff of chemicals into rivers and groundwater
- Increased worker safety
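Here is the toy illustration promised above: per-zone irrigation amounts computed from soil-moisture readings, so water is applied only where the sensors say it is needed. The field layout, target moisture, and liters-per-deficit conversion are invented purely for the example.

```python
# Toy illustration of variable-rate application: irrigate each zone only as
# much as its soil-moisture reading requires. The field layout, target
# moisture, and conversion factor are invented for the example.

TARGET_MOISTURE = 0.30        # desired volumetric water content (fraction)
LITERS_PER_POINT = 500.0      # liters per zone per 0.01 moisture deficit (assumed)

zone_moisture = {             # hypothetical sensor readings per field zone
    "NW": 0.31, "NE": 0.24,
    "SW": 0.27, "SE": 0.30,
}

for zone, moisture in zone_moisture.items():
    deficit = max(0.0, TARGET_MOISTURE - moisture)
    water = deficit * 100 * LITERS_PER_POINT
    print(f"zone {zone}: moisture {moisture:.2f} -> apply {water:,.0f} L")
```

The same pattern of sensing, comparing to a target, and applying only the shortfall carries over to fertilizer and pesticide application.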
In addition, robotic technologies enable more reliable monitoring and management of natural resources, such as air and water quality. It also gives producers greater control over plant and animal production, processing, distribution, and storage, which results in:
- Greater efficiencies and lower prices
- Safer growing conditions and safer foods
- Reduced environmental and ecological impact
NIFA’S IMPACT:
NIFA advances agricultural technology and ensures that the nation’s agricultural industries are able to utilize it by supporting:
- Basic research and development in physical sciences, engineering, and computer sciences
- Development of agricultural devices, sensors, and systems
- Applied research that assesses how to employ technologies economically and with minimal disruption to existing practices
- Assistance and instruction to farmers on how to use new technologies
Forbes Magazine, July 5, 2016 Issue:
Agriculture technology is no longer a niche that no one has heard of. Agriculture has confirmed its place as an industry of interest for the venture capital community after investment in agtech broke records for the past three years in a row, reaching $4.6 billion in 2015.
For a long time, it wasn’t a target sector for venture capitalists or entrepreneurs. Only a handful of funds served the market, largely focused on biotech opportunities. And until recently, entrepreneurs were also too focused on what Steve Case calls the “Second Wave” of innovation, -- web services, social media, and mobile technology -- to look at agriculture, the least digitized industry in the world, according to McKinsey & Co.
Michael Macrie, chief information officer at agriculture cooperative Land O' Lakes, recently told Forbes that he counted only 20 agtech companies as recently as 2010.
But now, the opportunity to bring agriculture, a $7.8 trillion industry representing 10% of global GDP, into the modern age has caught the attention of a growing number of investors globally. In our 2015 annual report, we recorded 503 individual companies raising funding.
This increasing interest in the sector coincides with a more general “Third Wave” in technological innovation, where all companies are internet-powered tech companies, and startups are challenging the biggest incumbent industries like hospitality, transport, and now agriculture.
There is huge potential, and need, to help the ag industry find efficiencies, conserve valuable resources, meet global demands for protein, and ensure consumers have access to clean, safe, healthy food. In all this, technological innovation is inevitable.
It’s a complex and diverse industry, however, with many subsectors for farmers, investors, and industry stakeholders to navigate. Entrepreneurs are innovating across agricultural disciplines, aiming to disrupt the beef, dairy, row crop, permanent crop, aquaculture, forestry, and fisheries sectors. Each discipline has a specific set of needs that will differ from the others.
___________________________________________________________________________
Advancements in Food Technology:
Food technology is a branch of food science that deals with the production processes that make foods.
Early scientific research into food technology concentrated on food preservation. Nicolas Appert’s development in 1810 of the canning process was a decisive event. The process wasn’t called canning then and Appert did not really know the principle on which his process worked, but canning has had a major impact on food preservation techniques.
Louis Pasteur's research on the spoilage of wine and his description of how to avoid spoilage in 1864 was an early attempt to apply scientific knowledge to food handling.
Besides research into wine spoilage, Pasteur researched the production of alcohol, vinegar, wines and beer, and the souring of milk. He developed pasteurization—the process of heating milk and milk products to destroy food spoilage and disease-producing organisms.
Through his research into food technology, Pasteur became a pioneer of bacteriology and of modern preventive medicine.
Developments in food technology have contributed greatly to the food supply and have changed our world. Some of these developments are:
- Instantized Milk Powder - D.D. Peebles (U.S. patent 2,835,586) developed the first instant milk powder, which has become the basis for a variety of new products that are rehydratable. This process increases the surface area of the powdered product by partially rehydrating spray-dried milk powder.
- Freeze-drying - The first application of freeze drying was most likely in the pharmaceutical industry; however, a successful large-scale industrial application of the process was the development of continuous freeze drying of coffee.
- High-Temperature Short Time Processing - These processes for the most part are characterized by rapid heating and cooling, holding for a short time at a relatively high temperature and filling aseptically into sterile containers.
- Decaffeination of Coffee and Tea - Decaffeinated coffee and tea was first developed on a commercial basis in Europe around 1900. The process is described in U.S. patent 897,763. Green coffee beans are treated with water, heat and solvents to remove the caffeine from the beans.
- Process optimization - Food technology now allows food production to be more efficient; oil-saving technologies, for example, are available in various forms, and production methods have become increasingly sophisticated.
Consumer Acceptance:
In the past, consumer attitude towards food technologies was not common talk and was not important in food development. Nowadays the food chain is long and complicated, foods and food technologies are diverse; consequently the consumers are uncertain about the food quality and safety and find it difficult to orient themselves to the subject.
That is why consumer acceptance of food technologies is an important question. However, in these days acceptance of food products very often depends on potential benefits and risks associated with the food. This also includes the technology the food is processed with.
Attributes like “uncertain”, “unknown” or “unfamiliar” are associated with consumers’ risk perception and consumer very likely will reject products linked to these attributes. Especially innovative food processing technologies are connected to these characteristics and are perceived as risky by consumers.
Acceptance of the different food technologies is very different. Whereas pasteurization is well recognized, high pressure treatment or even microwaves are perceived as risky very often. In studies done within Hightech Europe project, it was found that traditional technologies were well accepted in contrast to innovative technologies.
Consumers form their attitudes towards innovative food technologies through three main mechanisms: first, knowledge or beliefs about the risks and benefits associated with the technology; second, their own experience; and third, higher-order values and beliefs.
Acceptance of innovative technologies can be improved by providing non-emotional, concise information about these new processing methods. According to a study by the HighTech Europe project, written information also seems to have a greater impact than audio-visual information on consumers’ sensory acceptance of products processed with innovative food technologies.
See Also:
"Next-gen car technology just got another big upgrade" reported by the Washington Post (7/13/17)
Video: Chris Urmson: How a driverless car sees the road by TED Talk
Pictured: Driverless robo-car guided by radar and lasers.
By Brian Fung July 13
Federal regulators have approved a big swath of new airwaves for vehicle radar devices, opening the door to cheaper, more precise sensors that may accelerate the arrival of high-tech, next-generation cars.
Many consumer vehicles already use radar for collision avoidance, automatic lane-keeping and other purposes. But right now, vehicle radar is divided into a couple of different chunks of the radio spectrum. On Thursday, the Federal Communications Commission voted to consolidate these chunks — and added a little more, essentially giving extra bandwidth to vehicle radar.
“While we enthusiastically harness new technology that will ultimately propel us to a driverless future, we must maintain our focus on safety — and radar applications play an important role,” said Mignon Clyburn, a Democratic FCC commissioner.
Radar is a key component not only in today’s computer-assisted cars, but also in the fully self-driving cars of the future. There, the technology is even more important because it helps the computer make sound driving decisions.
Thursday’s decision by the FCC lets vehicle radar take advantage of all airwaves ranging from frequencies of 76 GHz to 81 GHz — reflecting an addition of four extra gigahertz — and ends support for the technology in the 24 GHz range.
Expanding the amount of airwaves devoted to vehicle radar could also make air travel safer, said FCC Chairman Ajit Pai, by allowing for the installation of radar devices on the wingtips of airplanes.
“Wingtip collisions account for 25 percent of all aircraft ground incidents,” said Pai. “Wingtip radars on aircraft may help with collision avoidance on the tarmac, among other areas.”
Although many analysts say fully self-driving cars are still years away from going mainstream, steps like these could help bring that future just a bit more within reach.
Radio Frequency Identification (RFID) including a case of RFID implant in Humans (ABC July 24, 2017)
Click Here, then click on the arrow for the embedded video "WATCH: Company offers to implant microchips in employees"
Tech company workers agree to have microchips implanted in their hands
By ENJOLI FRANCIS and REBECCA JARVIS, Jul 24, 2017, 6:48 PM ET, ABC News
"Some workers at a company in Wisconsin will soon be getting microchips in order to enter the office, log into computers and even buy a snack or two with just a swipe of a hand.
Todd Westby, the CEO of tech company Three Square Market, told ABC News today that of the 80 employees at the company's River Falls headquarters, more than 50 agreed to get implants. He said that participation was not required.
The microchip uses radio frequency identification (RFID) technology and was approved by the Food and Drug Administration in 2004. The chip is the size of a grain of rice and will be placed between a thumb and forefinger.
Swedish company implants microchips in employees
Westby said that when his team was approached with the idea, there was some reluctance mixed with excitement.
But after further conversations and the sharing of more details, the majority of managers were on board, and the company opted to partner with BioHax International to get the microchips.
Westby said the chip is not GPS enabled, does not allow for tracking workers and does not require passwords.
"There's really nothing to hack in it, because it is encrypted just like credit cards are ... The chances of hacking into it are almost nonexistent because it's not connected to the internet," he said. "The only way for somebody to get connectivity to it is to basically chop off your hand."
Three Square Market is footing the bill for the microchips, which cost $300 each, and licensed piercers will be handling the implantations on Aug. 1. Westby said that if workers change their minds, the microchips can be removed, as if taking out a splinter.
He said his wife, young adult children and others will also be getting the microchips next week.
Critics warned that there could be dangers in how the company planned to store, use and protect workers' information.
Adam Levin, the chairman and founder of CyberScout, which provides identity protection and data risk services, said he would not put a microchip in his body.
"Many things start off with the best of intentions, but sometimes intentions turn," he said. "We've survived thousands of years as a species without being microchipped. Is there any particular need to do it now? ... Everyone has a decision to make. That is, how much privacy and security are they willing to trade for convenience?"
Jowan Osterlund of BioHax said implanting people was the next step for electronics.
"I'm certain that this will be the natural way to add another dimension to our everyday life," he told The Associated Press...."
Click here for rest of Article.
___________________________________________________________________________
Radio-frequency identification (RFID): uses electromagnetic fields to automatically identify and track tags attached to objects. The tags contain electronically stored information. Passive tags collect energy from a nearby RFID reader's interrogating radio waves.
Active tags have a local power source such as a battery and may operate at hundreds of meters from the RFID reader. Unlike a barcode, the tag need not be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method for Automatic Identification and Data Capture (AIDC).
RFID tags are used in many industries, for example, an RFID tag attached to an automobile during production can be used to track its progress through the assembly line; RFID-tagged pharmaceuticals can be tracked through warehouses; and implanting RFID microchips in livestock and pets allows for positive identification of animals.
Since RFID tags can be attached to cash, clothing, and possessions, or implanted in animals and people, the possibility of reading personally-linked information without consent has raised serious privacy concerns. These concerns resulted in standard specifications development addressing privacy and security issues.
ISO/IEC 18000 and ISO/IEC 29167 use on-chip cryptography methods for untraceability, tag and reader authentication, and over-the-air privacy. ISO/IEC 20248 specifies a digital signature data structure for RFID and barcodes providing data, source and read method authenticity.
This work is done within ISO/IEC JTC 1/SC 31 Automatic identification and data capture techniques.
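As a rough sketch of the tag-and-reader authentication idea behind these standards (this is a generic challenge-response illustration, not the actual ISO/IEC 29167 crypto suites, and the key, tag ID and message layout are invented for the example), an exchange could look like this:

# Conceptual challenge-response between an RFID reader and a tag (illustration
# only; real ISO/IEC 29167 suites differ). The shared key and tag ID are made up.
import hashlib
import hmac
import os

SHARED_KEY = b"example-key-not-real"  # assumed to be provisioned into tag and reader

def tag_respond(tag_id: bytes, challenge: bytes) -> bytes:
    """The tag proves knowledge of the key without transmitting it."""
    return hmac.new(SHARED_KEY, tag_id + challenge, hashlib.sha256).digest()

def reader_verify(tag_id: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, tag_id + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                      # fresh nonce from the reader
response = tag_respond(b"TAG-0001", challenge)  # computed on the (hypothetical) tag
print(reader_verify(b"TAG-0001", challenge, response))  # True for a genuine tag

Because the response depends on a fresh random challenge, a recorded reply cannot simply be replayed, which is the basic property such authentication schemes aim for.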
In 2014, the world RFID market was worth US$8.89 billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors. The market value is expected to rise to US$18.68 billion by 2026.
Click on any of the following blue hyperlinks for more about Radio-Frequency Identification (RFID):
- History
- Design
  - Tags
  - Readers
  - Frequencies
  - Signaling
  - Miniaturization
- Uses
  - Commerce
    - Access control
    - Advertising
    - Promotion tracking
  - Transportation and logistics
    - Intelligent transportation systems
    - Hose stations and conveyance of fluids
    - Track & Trace test vehicles and prototype parts
  - Infrastructure management and protection
  - Passports
  - Transportation payments
  - Animal identification
  - Human implantation
  - Institutions
    - Hospitals and healthcare
    - Libraries
    - Museums
    - Schools and universities
  - Sports
  - Complement to barcode
  - Waste Management
  - Telemetry
- Optical RFID
- Regulation and standardization
- Problems and concerns
  - Data flooding
  - Global standardization
  - Security concerns
  - Health
  - Exploitation
  - Passports
  - Shielding
- Controversies
  - Privacy
  - Government control
  - Deliberate destruction in clothing and other items
- See also:
- AS5678
- Balise
- Bin bug
- Chipless RFID
- Internet of Things
- Mass surveillance
- Near Field Communication
- PositiveID
- Privacy by design
- Proximity card
- Resonant inductive coupling
- RFID on metal
- RSA blocker tag
- Smart label
- Speedpass
- TecTile
- Tracking system
- RFID in schools
- UHF regulations overview by GS1
- How RFID Works at HowStuffWorks
- Privacy concerns and proposed privacy legislation
- RFID at DMOZ
- What is RFID? - Animated Explanation
- IEEE Council on RFID
- RFID tracking system
The History of Technology
YouTube Video: Ellen Discusses Technology's Detailed History
(The Ellen Show)
The history of technology is the history of the invention of tools and techniques and is similar to other sides of the history of humanity. Technology can refer to methods ranging from as simple as language and stone tools to the complex genetic engineering and information technology that has emerged since the 1980s.
New knowledge has enabled people to create new things, and conversely, many scientific endeavors are made possible by technologies which assist humans in travelling to places they could not previously reach, and by scientific instruments by which we study nature in more detail than our natural senses allow.
Since much of technology is applied science, technical history is connected to the history of science. Since technology uses resources, technical history is tightly connected to economic history. From those resources, technology produces other resources, including technological artifacts used in everyday life.
Technological change affects, and is affected by, a society's cultural traditions. It is a force for economic growth and a means to develop and project economic, political and military power.
Measuring technological progress:
Many sociologists and anthropologists have created social theories dealing with social and cultural evolution. Some, like Lewis H. Morgan, Leslie White, and Gerhard Lenski have declared technological progress to be the primary factor driving the development of human civilization.
Morgan's concept of three major stages of social evolution (savagery, barbarism, and civilization) can be divided by technological milestones, such as fire. White argued the measure by which to judge the evolution of culture was energy.
For White, "the primary function of culture" is to "harness and control energy." White differentiates between five stages of human development:
- In the first, people use energy of their own muscles.
- In the second, they use energy of domesticated animals.
- In the third, they use the energy of plants (agricultural revolution).
- In the fourth, they learn to use the energy of natural resources: coal, oil, gas.
- In the fifth, they harness nuclear energy.
White introduced a formula P=E*T, where E is a measure of energy consumed, and T is the measure of efficiency of technical factors utilizing the energy. In his own words, "culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased". Nikolai Kardashev extrapolated his theory, creating the Kardashev scale, which categorizes the energy use of advanced civilizations.
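Read as a simple multiplicative index, White's P = E*T means that raising either the energy harnessed per capita (E) or the efficiency of the technology putting that energy to work (T) raises the index. The numbers below are arbitrary, invented purely to illustrate the arithmetic:

# Toy illustration of White's P = E * T (arbitrary index values, not measurements).
def cultural_development_index(energy_per_capita, efficiency):
    return energy_per_capita * efficiency

baseline    = cultural_development_index(energy_per_capita=10, efficiency=0.2)  # 2.0
more_energy = cultural_development_index(energy_per_capita=20, efficiency=0.2)  # 4.0
better_tech = cultural_development_index(energy_per_capita=10, efficiency=0.4)  # 4.0
print(baseline, more_energy, better_tech)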
Lenski's approach focuses on information. The more information and knowledge (especially allowing the shaping of natural environment) a given society has, the more advanced it is. He identifies four stages of human development, based on advances in the history of communication.
- In the first stage, information is passed by genes.
- In the second, when humans gain sentience, they can learn and pass information through by experience.
- In the third, the humans start using signs and develop logic.
- In the fourth, they can create symbols, develop language and writing. Advancements in communications technology translates into advancements in the economic system and political system, distribution of wealth, social inequality and other spheres of social life.
Lenski also differentiates societies based on their level of technology, communication and economy:
- hunter-gatherer,
- simple agricultural,
- advanced agricultural,
- industrial,
- specialized (such as fishing societies).
In economics productivity is a measure of technological progress. Productivity increases when fewer inputs (labor, energy, materials or land) are used in the production of a unit of output.
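In its simplest form, the productivity described here is just output per unit of input, so producing the same output with fewer inputs registers as productivity growth. The figures below are hypothetical:

# Productivity as output per unit of input (hypothetical figures).
def productivity(units_of_output, units_of_input):
    return units_of_output / units_of_input

before = productivity(1000, 500)  # 2.0 units per labor hour
after  = productivity(1000, 400)  # 2.5 units per labor hour after a process change
growth = (after - before) / before
print(f"Productivity growth: {growth:.0%}")  # 25%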
Another indicator of technological progress is the development of new products and services, which is necessary to offset unemployment that would otherwise result as labor inputs are reduced.
In developed countries productivity growth has been slowing since the late 1970s; however, productivity growth was higher in some economic sectors, such as manufacturing.
For example, employment in manufacturing in the United States declined from over 30% in the 1940s to just over 10% 70 years later. Similar changes occurred in other developed countries. This stage is referred to as post-industrial.
In the late 1970s, sociologists and anthropologists like Alvin Toffler (author of Future Shock), Daniel Bell and John Naisbitt advanced theories of the post-industrial society, arguing that the current era of industrial society is coming to an end and that services and information are becoming more important than industry and goods. Some extreme visions of the post-industrial society, especially in fiction, are strikingly similar to visions of near- and post-Singularity societies.
Click on any of the following blue hyperlinks for more about The History of Technology:
- By period and geography
- By type
- See also:
- Related history
- Related disciplines
- Related subjects
- Related concepts
- Future (speculative)
- People
- Historiography
- Historians
- Book series
- Journals and periodicals
- Notebooks
- Research institutes
- Electropaedia on the History of Technology
- MIT 6.933J – The Structure of Engineering Revolutions. From MIT OpenCourseWare, course materials (graduate level) for a course on the history of technology through a Thomas Kuhn-ian lens.
- Concept of Civilization Events. From Jaroslaw Kessler, a chronology of "civilizing events".
- Ancient and Medieval City Technology
- Society for the History of Technology
Technology, including a List of Technologies
YouTube Video of the Top 10 Inventions of All Time by WatchMojo
Pictured: How to Check Your Wi-Fi Network for Suspicious Devices
Technology is the collection of techniques, skills, methods, and processes used in the production of goods or services or in the accomplishment of objectives, such as scientific investigation. Technology can be the knowledge of techniques, processes, and the like, or it can be embedded in machines to allow for operation without detailed knowledge of their workings.
The simplest form of technology is the development and use of basic tools.
The prehistoric discovery of how to control fire and the later Neolithic Revolution increased the available sources of food, and the invention of the wheel helped humans to travel in and control their environment.
Developments in historic times, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale. The steady progress of military technology has brought weapons of ever-increasing destructive power, from clubs to nuclear weapons.
Technology has many effects. It has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class.
Many technological processes produce unwanted by-products known as pollution and deplete natural resources to the detriment of Earth's environment.
Innovations have always influenced the values of a society and raised new questions of the ethics of technology. Examples include the rise of the notion of efficiency in terms of human productivity, and the challenges of bioethics.
Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar reactionary movements criticize the pervasiveness of technology, arguing that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition.
Click on any of the following blue hyperlinks for more about Technology:
- Definition and usage
- Science, engineering and technology
- History
  - Paleolithic (2.5 Ma – 10 ka)
    - Stone tools
    - Fire
    - Clothing and shelter
  - Neolithic through classical antiquity (10 ka – 300 CE)
    - Metal tools
    - Energy and transport
  - Medieval and modern history (300 CE – present)
- Philosophy
- Competitiveness
- Other animal species
- Future technology
- See also:
- Outline of technology
- Architectural technology
- Critique of technology
- Greatest Engineering Achievements of the 20th Century
- History of science and technology
- Knowledge economy
- Law of the instrument – Golden hammer
- Lewis Mumford
- List of years in science
- Niche construction
- Technological convergence
- Technology and society
- Technology assessment
- Technology tree
- -logy
- Superpower § Possible factors
- Theories and concepts in technology:
- Appropriate technology
- Diffusion of innovations
- Human enhancement
- Instrumental conception of technology
- Jacques Ellul
- Paradigm
- Philosophy of technology
- Posthumanism
- Precautionary principle
- Singularitarianism
- Strategy of Technology
- Techno-progressivism
- Technocentrism
- Technocracy
- Technocriticism
- Technological determinism
- Technological evolution
- Technological nationalism
- Technological revival
- Technological singularity
- Technology management
- Technology readiness level
- Technorealism
- Transhumanism
- Economics of technology:
- Technology journalism:
- Other:
Click on the following blue hyperlinks for a List of Technologies by Category of Use:
- Practical Technologies
- Military Technologies
- Astronomical Technologies
- Practical Technologies #2
- Military Technologies #2
- Astronomical Technologies #2
- Medieval Era
- Renaissance Era
Emerging Technologies, including a List
YouTube Video: the Top Ten Emerging Technologies by The World Economic Forum
YouTube Video of the Top 10 Emerging Technologies That Will Change Your Life
(courtesy of WatchMojo)
Pictured: Example of Emerging Technologies
Click Here for a List of Emerging Technologies.
Emerging technologies are technologies that are perceived as capable of changing the status quo. These technologies are generally new but include older technologies that are still controversial and relatively undeveloped in potential, such as preimplantation genetic diagnosis and gene therapy which date to 1989 and 1990 respectively.
Emerging technologies are characterized by radical novelty, relatively fast growth, coherence, prominent impact, and uncertainty and ambiguity. In other words, an emerging technology can be defined as "a radically novel and relatively fast growing technology characterized by a certain degree of coherence persisting over time and with the potential to exert a considerable impact on the socio-economic domain(s) which is observed in terms of the composition of actors, institutions and patterns of interactions among those, along with the associated knowledge production processes. Its most prominent impact, however, lies in the future and so in the emergence phase is still somewhat uncertain and ambiguous.".
Emerging technologies include a variety of technologies such as the following:
- educational technology,
- information technology,
- nanotechnology,
- biotechnology,
- cognitive science,
- robotics,
- and artificial intelligence.
New technological fields may result from the technological convergence of different systems evolving towards similar goals. Convergence brings previously separate technologies such as voice (and telephony features), data (and productivity applications) and video together so that they share resources and interact with each other, creating new efficiencies.
Emerging technologies are those technical innovations which represent progressive developments within a field for competitive advantage; converging technologies represent previously distinct fields which are in some way moving towards stronger inter-connection and similar goals. However, opinions on the degree of impact, the status, and the economic viability of several emerging and converging technologies vary.
Click on any of the following blue hyperlinks for more about Emerging Technologies:
- History of emerging technologies
- Emerging technology debates
- Examples
- Development of emerging technologies
- Role of science fiction
- See also:
- Foresight
- Futures studies
- Institute for Ethics and Emerging Technologies
- Institute on Biotechnology and the Human Future
- Technological change
- Transhumanism
- Upcoming software
- Websites
- Top 10 emerging technologies of 2015, Scientific American
- Collaborating on Converging Technologies: Education and Practice
- Converging Technologies NSF-sponsored reports
- EU-funded project CONTECS
- EU-funded project KNOWLEDGE NBIC
- EU-funded summer schools on ethics of emerging technologies
- EU High-Level Expert Group on Converging Technologies
- European Parliament Technology Assessment on Converging Technologies report
- ETC Group
- Institute for Ethics and Emerging Technologies
- Institute on Biotechnology and the Human Future
- Converging Technologies Conference 2010 Website
- Videos
- Web 3.0 on Vimeo
Human-like Robots including Sophia (Robot), "Citizen" of Saudi Arabia
YouTube Video: Interview With The Lifelike Hot Robot Named Sophia by CNBC
Pictured: Four facial expressions that Sophia can exhibit
A humanoid robot is a robot with its body shape built to resemble the human body. The design may be for functional purposes, such as interacting with human tools and environments, for experimental purposes, such as the study of bipedal locomotion, or for other purposes.
In general, humanoid robots have a torso, a head, two arms, and two legs, though some forms of humanoid robots may model only part of the body, for example, from the waist up. Some humanoid robots also have heads designed to replicate human facial features such as eyes and mouths. Androids are humanoid robots built to aesthetically resemble humans.
Humanoid robots are now used as a research tool in several scientific areas.
Researchers need to understand the human body structure and behavior (biomechanics) to build and study humanoid robots.
Conversely, the attempt to simulate the human body leads to a better understanding of it.
Human cognition is a field of study which is focused on how humans learn from sensory information in order to acquire perceptual and motor skills. This knowledge is used to develop computational models of human behavior and it has been improving over time.
It has been suggested that very advanced robotics will facilitate the enhancement of ordinary humans. See transhumanism.
Although the initial aim of humanoid research was to build better orthoses and prostheses for human beings, knowledge has been transferred between the two disciplines. A few examples are powered leg prostheses for the neuromuscularly impaired, ankle-foot orthoses, biologically realistic leg prostheses, and forearm prostheses.
Besides the research, humanoid robots are being developed to perform human tasks like personal assistance, where they should be able to assist the sick and elderly, and dirty or dangerous jobs.
Regular jobs like being a receptionist or a worker on an automotive manufacturing line are also suitable for humanoids. In essence, since they can use tools and operate equipment and vehicles designed for the human form, humanoids could theoretically perform any task a human being can, so long as they have the proper software. However, the complexity of doing so is immense.
They are also becoming increasingly popular as entertainers. For example, Ursula, a female robot, sings, plays music, dances, and speaks to her audiences at Universal Studios.
Several Disney theme-park attractions employ animatronics, robots that look, move, and speak much like human beings, in some of their shows. These animatronics look so realistic that from a distance it can be hard to tell whether they are actually human, yet they have no cognition or physical autonomy.
Various humanoid robots and their possible applications in daily life are featured in an independent documentary film called Plug & Pray, which was released in 2010.
Humanoid robots, especially those with artificial intelligence algorithms, could be useful for future dangerous and/or distant space exploration missions, without the need to return to Earth once the mission is completed.
Sensors:
A sensor is a device that measures some attribute of the world. Being one of the three primitives of robotics (besides planning and control), sensing plays an important role in robotic paradigms.
Sensors can be classified according to the physical process with which they work or according to the type of measurement information that they give as output. Here, the second approach is used.
Proprioceptive sensors:
Proprioceptive sensors sense the position, the orientation and the speed of the humanoid's body and joints.
In human beings the otoliths and semi-circular canals (in the inner ear) are used to maintain balance and orientation. In addition humans use their own proprioceptive sensors (e.g. touch, muscle extension, limb position) to help with their orientation.
Humanoid robots use accelerometers to measure the acceleration, from which velocity can be calculated by integration; tilt sensors to measure inclination; force sensors placed in robot's hands and feet to measure contact force with environment; position sensors, that indicate the actual position of the robot (from which the velocity can be calculated by derivation) or even speed sensors.
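The integration and differentiation mentioned above can be sketched numerically. In the sketch below the sample rate and sensor readings are made up, and a real robot would add filtering and bias correction:

# Velocity from accelerometer samples by integration, and velocity from position
# samples by differentiation (made-up readings; real systems filter noise and
# correct sensor bias).
DT = 0.01  # assumed sample period in seconds (100 Hz)

def velocity_from_acceleration(accel_samples, v0=0.0, dt=DT):
    v = v0
    velocities = []
    for a in accel_samples:  # simple rectangular integration
        v += a * dt
        velocities.append(v)
    return velocities

def velocity_from_positions(position_samples, dt=DT):
    return [(p1 - p0) / dt   # finite-difference derivative
            for p0, p1 in zip(position_samples, position_samples[1:])]

print(velocity_from_acceleration([0.0, 0.5, 0.5, 0.5]))
print(velocity_from_positions([0.00, 0.01, 0.03, 0.06]))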
Exteroceptive sensors:
Arrays of tactels can be used to provide data on what has been touched. The Shadow Hand uses an array of 34 tactels arranged beneath its polyurethane skin on each finger tip. Tactile sensors also provide information about forces and torques transferred between the robot and other objects.
Vision refers to processing data from any modality which uses the electromagnetic spectrum to produce an image. In humanoid robots it is used to recognize objects and determine their properties. Vision sensors work most similarly to the eyes of human beings. Most humanoid robots use CCD cameras as vision sensors.
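As one illustrative approach to camera-based perception (not the vision stack of any particular humanoid robot; the camera index and detection parameters below are assumptions), a single frame can be checked for human faces with OpenCV's bundled Haar cascade:

# Illustrative camera-frame detection with OpenCV; camera index 0 and the
# detection parameters are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default camera (assumption)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Detected {len(faces)} face(s) in the frame")
cap.release()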
Sound sensors allow humanoid robots to hear speech and environmental sounds, and perform as the ears of the human being. Microphones are usually used for this task.
Actuators:
Actuators are the motors responsible for motion in the robot.
Humanoid robots are constructed in such a way that they mimic the human body, so they use actuators that perform like muscles and joints, though with a different structure. To achieve the same effect as human motion, humanoid robots use mainly rotary actuators. They can be either electric, pneumatic, hydraulic, piezoelectric or ultrasonic.
Hydraulic and electric actuators have a very rigid behavior and can only be made to act in a compliant manner through the use of relatively complex feedback control strategies. While electric coreless motor actuators are better suited for high speed and low load applications, hydraulic ones operate well at low speed and high load applications.
Piezoelectric actuators generate a small movement with a high force capability when voltage is applied. They can be used for ultra-precise positioning and for generating and handling high forces or pressures in static or dynamic situations.
Ultrasonic actuators are designed to produce movements in a micrometer order at ultrasonic frequencies (over 20 kHz). They are useful for controlling vibration, positioning applications and quick switching.
Pneumatic actuators operate on the basis of gas compressibility. As they are inflated, they expand along the axis, and as they deflate, they contract. If one end is fixed, the other will move in a linear trajectory. These actuators are intended for low-speed and low/medium-load applications. Pneumatic actuators include cylinders, bellows, pneumatic engines, pneumatic stepper motors and pneumatic artificial muscles.
Planning and Control:
In planning and control, the essential difference between humanoids and other kinds of robots (like industrial ones) is that the movement of the robot has to be human-like, using legged locomotion, especially biped gait.
The ideal planning for humanoid movements during normal walking should result in minimum energy consumption, as it does in the human body. For this reason, studies on dynamics and control of these kinds of structures become more and more important.
Stabilizing a walking biped robot on the ground is of great importance. Maintaining the robot’s center of gravity over the center of the bearing area to provide a stable position can be chosen as a control goal.
To maintain dynamic balance during the walk, a robot needs information about contact force and its current and desired motion. The solution to this problem relies on a major concept, the Zero Moment Point (ZMP).
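Under the common cart-table (linear inverted pendulum) simplification, the ZMP along each horizontal axis can be estimated from the center-of-mass state as x_zmp = x_com - (z_com / g) * x_com_accel. The sketch below uses that textbook approximation with invented numbers, not the full force-sensor computation used on real robots:

# ZMP under the cart-table / linear inverted pendulum simplification
# (constant CoM height; sample values are invented).
G = 9.81  # gravitational acceleration, m/s^2

def zmp_1d(com_pos, com_accel, com_height):
    """Estimated ZMP coordinate along one horizontal axis, in meters."""
    return com_pos - (com_height / G) * com_accel

x_zmp = zmp_1d(com_pos=0.02, com_accel=0.5, com_height=0.80)
foot_half_length = 0.12  # half the support-foot length (assumed)
print(x_zmp, abs(x_zmp) <= foot_half_length)  # balanced if the ZMP stays inside the support area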
Another characteristic of humanoid robots is that they move, gather information (using sensors) on the "real world" and interact with it. They don’t stay still like factory manipulators and other robots that work in highly structured environments. To allow humanoids to move in complex environments, planning and control must focus on self-collision detection, path planning and obstacle avoidance.
Humanoid robots do not yet have some features of the human body. These include structures with variable flexibility, which provide safety (to the robot itself and to people), and redundancy of movement, i.e. more degrees of freedom and therefore wide task availability.
Although these characteristics are desirable to humanoid robots, they will bring more complexity and new problems to planning and control. The field of whole-body control deals with these issues and addresses the proper coordination of numerous degrees of freedom, e.g. to realize several control tasks simultaneously while following a given order of priority.
Click on the following for more about Humanoid Robots:
___________________________________________________________________________
Sophia the Robot
Sophia is a humanoid robot developed by Hong Kong-based company Hanson Robotics. She has been designed to learn and adapt to human behavior and work with humans, and has been interviewed around the world.
In October 2017, she became a Saudi Arabian citizen, the first robot to receive citizenship of a country.
By her own account, Sophia was activated on April 19, 2015. She is modeled after actress Audrey Hepburn and is known for her human-like appearance and behavior compared to previous robotic variants.
According to the manufacturer, David Hanson, Sophia has artificial intelligence, visual data processing and facial recognition. Sophia also imitates human gestures and facial expressions and is able to answer certain questions and to make simple conversations on predefined topics (e.g. on the weather).
The robot uses voice recognition technology from Alphabet Inc. (parent company of Google) and is designed to get smarter over time. Sophia's intelligence software is designed by SingularityNET.
The AI program analyses conversations and extracts data that allows her to improve responses in the future. It is conceptually similar to the computer program ELIZA, which was one of the first attempts at simulating a human conversation.
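To give a flavor of the ELIZA comparison, a rule-based responder on predefined topics can be as simple as the sketch below. This is a generic pattern-matching toy, not Sophia's actual software; the topics and replies are invented:

# A tiny ELIZA-style rule-based responder (concept illustration only; not
# Sophia's software). Topics and replies are invented.
import re

RULES = [
    (re.compile(r"\bweather\b", re.I), "I hear the weather is a popular topic with humans."),
    (re.compile(r"\byou (are|seem) (.+)", re.I), "What makes you say I {0} {1}"),
    (re.compile(r"\brobots?\b", re.I), "Robots like me are still learning to converse."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("What do you think about the weather today?"))
print(respond("You seem very clever."))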
Hanson designed Sophia to be a suitable companion for the elderly at nursing homes, or to help crowds at large events or parks. He hopes that she can ultimately interact with other humans sufficiently to gain social skills.
Events:
Sophia has been interviewed in the same manner as a human, striking up conversations with hosts. Some replies have been nonsensical, while others have been impressive, such as lengthy discussions with Charlie Rose on 60 Minutes.
In a piece for CNBC, when the interviewer expressed concerns about robot behavior, Sophia joked that he had "been reading too much Elon Musk. And watching too many Hollywood movies". Musk tweeted that Sophia could watch The Godfather and suggested "what's the worst that could happen?"
On October 11, 2017, Sophia was introduced to the United Nations with a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed.
On October 25, at the Future Investment Summit in Riyadh, she was granted Saudi Arabian citizenship, becoming the first robot ever to have a nationality. This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder.
Social media users used Sophia's citizenship to criticize Saudi Arabia's human rights record.
See also:
In general, humanoid robots have a torso, a head, two arms, and two legs, though some forms of humanoid robots may model only part of the body, for example, from the waist up. Some humanoid robot also have heads designed to replicate human facial features such as eyes and mouths. Androids are humanoid robots built to aesthetically resemble humans.
Humanoid robots are now used as a research tool in several scientific areas.
Researchers need to understand the human body structure and behavior (biomechanics) to build and study humanoid robots.
On the other side, the attempt to the simulation of the human body leads to a better understanding of it.
Human cognition is a field of study which is focused on how humans learn from sensory information in order to acquire perceptual and motor skills. This knowledge is used to develop computational models of human behavior and it has been improving over time.
It has been suggested that very advanced robotics will facilitate the enhancement of ordinary humans. See transhumanism.
Although the initial aim of humanoid research was to build better orthosis and prosthesis for human beings, knowledge has been transferred between both disciplines. A few examples are: powered leg prosthesis for neuromuscularly impaired, ankle-foot orthosis, biological realistic leg prosthesis and forearm prosthesis.
Besides the research, humanoid robots are being developed to perform human tasks like personal assistance, where they should be able to assist the sick and elderly, and dirty or dangerous jobs.
Regular jobs like being a receptionist or a worker of an automotive manufacturing line are also suitable for humanoids. In essence, since they can use tools and operate equipment and vehicles designed for the human form, humanoids could theoretically perform any task a human being can, so long as they have the proper software. However, the complexity of doing so is immense.
They are becoming increasingly popular for providing entertainment too. For example, Ursula, a female robot, sings, play music, dances, and speaks to her audiences at Universal Studios.
Several Disney attractions employ the use of animatrons, robots that look, move, and speak much like human beings, in some of their theme park shows. These animatrons look so realistic that it can be hard to decipher from a distance whether or not they are actually human. Although they have a realistic look, they have no cognition or physical autonomy.
Various humanoid robots and their possible applications in daily life are featured in an independent documentary film called Plug & Pray, which was released in 2010.
Humanoid robots, especially with artificial intelligence algorithms, could be useful for future dangerous and/or distant space exploration missions, without having the need to turn back around again and return to Earth once the mission is completed.
Sensors:
A sensor is a device that measures some attribute of the world. Being one of the three primitives of robotics (besides planning and control), sensing plays an important role in robotic paradigms.
Sensors can be classified according to the physical process with which they work or according to the type of measurement information that they give as output. In this case, the second approach was used.
Proprioceptive sensors:
Proprioceptive sensors sense the position, the orientation and the speed of the humanoid's body and joints.
In human beings the otoliths and semi-circular canals (in the inner ear) are used to maintain balance and orientation. In addition humans use their own proprioceptive sensors (e.g. touch, muscle extension, limb position) to help with their orientation.
Humanoid robots use accelerometers to measure the acceleration, from which velocity can be calculated by integration; tilt sensors to measure inclination; force sensors placed in robot's hands and feet to measure contact force with environment; position sensors, that indicate the actual position of the robot (from which the velocity can be calculated by derivation) or even speed sensors.
Exteroceptive sensors:
Arrays of tactels can be used to provide data on what has been touched. The Shadow Hand uses an array of 34 tactels arranged beneath its polyurethane skin on each finger tip. Tactile sensors also provide information about forces and torques transferred between the robot and other objects.
Vision refers to processing data from any modality which uses the electromagnetic spectrum to produce an image. In humanoid robots it is used to recognize objects and determine their properties. Vision sensors work most similarly to the eyes of human beings. Most humanoid robots use CCD cameras as vision sensors.
Sound sensors allow humanoid robots to hear speech and environmental sounds, and perform as the ears of the human being. Microphones are usually used for this task.
Actuators:
Actuators are the motors responsible for motion in the robot.
Humanoid robots are constructed in such a way that they mimic the human body, so they use actuators that perform like muscles and joints, though with a different structure. To achieve the same effect as human motion, humanoid robots use mainly rotary actuators. They can be either electric, pneumatic, hydraulic, piezoelectric or ultrasonic.
Hydraulic and electric actuators have a very rigid behavior and can only be made to act in a compliant manner through the use of relatively complex feedback control strategies. While electric coreless motor actuators are better suited for high speed and low load applications, hydraulic ones operate well at low speed and high load applications.
Piezoelectric actuators generate a small movement with a high force capability when voltage is applied. They can be used for ultra-precise positioning and for generating and handling high forces or pressures in static or dynamic situations.
Ultrasonic actuators are designed to produce movements in a micrometer order at ultrasonic frequencies (over 20 kHz). They are useful for controlling vibration, positioning applications and quick switching.
Pneumatic actuators operate on the basis of gas compressibility. As they are inflated, they expand along the axis, and as they deflate, they contract. If one end is fixed, the other will move in a linear trajectory. These actuators are intended for low speed and low/medium load applications. Between pneumatic actuators there are: cylinders, bellows, pneumatic engines, pneumatic stepper motors and pneumatic artificial muscles.
Planning and Control:
In planning and control, the essential difference between humanoids and other kinds of robots (like industrial ones) is that the movement of the robot has to be human-like, using legged locomotion, especially biped gait.
The ideal planning for humanoid movements during normal walking should result in minimum energy consumption, as it does in the human body. For this reason, studies on dynamics and control of these kinds of structures become more and more important.
The question of walking biped robots stabilization on the surface is of great importance. Maintenance of the robot’s gravity center over the center of bearing area for providing a stable position can be chosen as a goal of control.
To maintain dynamic balance during the walk, a robot needs information about contact force and its current and desired motion. The solution to this problem relies on a major concept, the Zero Moment Point (ZMP).
Another characteristic of humanoid robots is that they move, gather information (using sensors) on the "real world" and interact with it. They don’t stay still like factory manipulators and other robots that work in highly structured environments. To allow humanoids to move in complex environments, planning and control must focus on self-collision detection, path planning and obstacle avoidance.
Humanoid robots do not yet have some features of the human body. They include structures with variable flexibility, which provide safety (to the robot itself and to the people), and redundancy of movements, i.e. more degrees of freedom and therefore wide task availability.
Although these characteristics are desirable to humanoid robots, they will bring more complexity and new problems to planning and control. The field of whole-body control deals with these issues and addresses the proper coordination of numerous degrees of freedom, e.g. to realize several control tasks simultaneously while following a given order of priority.
Click on the following for more about Humanoid Robots:___________________________________________________________________________
Sophia the Robot
Sophia is a humanoid robot developed by Hong Kong-based company Hanson Robotics. She has been designed to learn and adapt to human behavior and work with humans, and has been interviewed around the world.
In October 2017, she became a Saudi Arabian citizen, the first robot to receive citizenship of a country.
By her own account, Sophia was activated on April 19, 2015. She is modeled after actress Audrey Hepburn, and is known for her human-like appearance and behavior compared with previous robotic variants.
According to the manufacturer, David Hanson, Sophia uses artificial intelligence, visual data processing and facial recognition. Sophia also imitates human gestures and facial expressions and is able to answer certain questions and to hold simple conversations on predefined topics (e.g. the weather).
The robot uses voice recognition technology from Alphabet Inc. (parent company of Google) and is designed to get smarter over time. Sophia's intelligence software is designed by SingularityNET.
The AI program analyses conversations and extracts data that allows her to improve responses in the future. It is conceptually similar to the computer program ELIZA, which was one of the first attempts at simulating a human conversation.
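For readers unfamiliar with ELIZA, the sketch below shows the basic pattern-and-template style of response it popularized. This is only a conceptual illustration of that style; it is not Sophia's software, which the article attributes to SingularityNET and Alphabet's voice recognition, and the rules here are invented for the example.

    import re

    # A few illustrative pattern -> response rules in the spirit of ELIZA.
    # A toy sketch of keyword-and-template chat, not Sophia's actual software stack.
    RULES = [
        (re.compile(r"\bweather\b", re.I), "I hear the weather is a popular topic. Is it nice where you are?"),
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\brobots?\b", re.I), "Robots are interesting. What would you like to know about them?"),
    ]

    def reply(message: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(message)
            if match:
                return template.format(*match.groups())
        return "Tell me more."   # fallback when no rule matches

    print(reply("I feel happy today"))   # -> "Why do you feel happy today?"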
Hanson designed Sophia to be a suitable companion for the elderly at nursing homes, or to help crowds at large events or parks. He hopes that she can ultimately interact with other humans sufficiently to gain social skills.
Events:
Sophia has been interviewed in the same manner as a human, striking up conversations with hosts. Some replies have been nonsensical, while others have been impressive, such as lengthy discussions with Charlie Rose on 60 Minutes.
In a piece for CNBC, when the interviewer expressed concerns about robot behavior, Sophia joked that he had "been reading too much Elon Musk. And watching too many Hollywood movies". Musk tweeted that Sophia could watch The Godfather and suggested "what's the worst that could happen?"
On October 11, 2017, Sophia was introduced to the United Nations with a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed.
On October 25, at the Future Investment Summit in Riyadh, she was granted Saudi Arabian citizenship, becoming the first robot ever to have a nationality. This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder.
Social media users used Sophia's citizenship to criticize Saudi Arabia's human rights record.
See also:
- ELIZA effect
- Official website
- Sophia at Hanson Robotics website
Outline of Technology
YouTube Video of the 5 Most Secret Military Aircraft
The following outline is provided as an overview of and topical guide to technology:
Technology – collection of tools, including machinery, modifications, arrangements and procedures used by humans. Engineering is the discipline that seeks to study and design new technologies.
Technologies significantly affect human as well as other animal species' ability to control and adapt to their natural environments.
Click on any of the following blue hyperlinks for further amplification on the Outline of Technology:
- Components of technology
- Branches of technology
- Technology by region
- History of technology
- Hypothetical technology
- Philosophy of technology
- Management of technology including Advancement of technology
- Politics of technology
- Economics of technology
- Technology education
- Technology organizations
- Technology media
- Persons influential in technology
Content Management System (CMS) including a List of content management systems
YouTube Video: Understanding content management systems (CMS)
Click here for a List of Content Management Systems (CMS)
A content management system (CMS) is a computer application that supports the creation and modification of digital content. It typically supports multiple users in a collaborative environment.
CMS features vary widely. Most CMSs include Web-based publishing, format management, history editing and version control, indexing, search, and retrieval. By their nature, content management systems support the separation of content and presentation.
A web content management system (WCM or WCMS) is a CMS designed to support the management of the content of Web pages. Most popular CMSs are also WCMSs. Web content includes text and embedded graphics, photos, video, audio, maps, and program code (e.g., for applications) that displays content or interacts with the user.
Such a content management system (CMS) typically has two major components:
- A content management application (CMA) is the front-end user interface that allows a user, even with limited expertise, to add, modify, and remove content from a website without the intervention of a webmaster.
- A content delivery application (CDA) compiles that information and updates the website.
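The separation of content and presentation, and the CMA/CDA split above, can be illustrated with a small sketch: content is stored as structured data that an editor could change, while a separate template turns it into a delivered page. This is a generic illustration with invented names, not the internals of WordPress, Joomla, Drupal, or any other real CMS.

    # Minimal illustration of the content/presentation split described above:
    # content lives as structured data (what a CMA would let an editor change),
    # and a separate template renders it for delivery (the CDA's job).

    article = {                       # "content": editable without touching layout
        "title": "Welcome",
        "body": "Our first post, stored independently of any page design.",
    }

    PAGE_TEMPLATE = """<article>
      <h1>{title}</h1>
      <p>{body}</p>
    </article>"""                     # "presentation": swappable without editing content

    def render(content: dict, template: str = PAGE_TEMPLATE) -> str:
        return template.format(**content)

    print(render(article))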
Digital asset management systems are another type of CMS. They manage things such as documents, movies, pictures, phone numbers, and scientific data. Companies also use CMSs to store, control, revise, and publish documentation.
Based on market share statistics, the most popular content management system is WordPress, used by over 28% of all websites on the internet, and by 59% of all websites using a known content management system. Other popular content management systems include Joomla and Drupal.
Common features:
Content management systems typically provide the following features:
- SEO-friendly URLs
- Integrated and online help
- Modularity and extensibility
- User and group functionality
- Templating support for changing designs
- Install and upgrade wizards
- Integrated audit logs
- Compliance with various accessibility frameworks and standards, such as WAI-ARIA
Advantages:
- Reduced need to code from scratch
- Easy to create a unified look and feel
- Version control
- Edit permission management
Disadvantages:
- Limited or no ability to create functionality not envisioned in the CMS (e.g., layouts, web apps, etc.)
- Increased need for special expertise and training for content authors
See also:
- Content management
- Document management system
- Dynamic web page
- Enterprise content management
- Information management
- Knowledge management
- LAMP (software bundle)
- List of content management frameworks
- Revision control
- Web application framework
- Wiki
Bill Gates
YouTube Video: Bill Gates interview: How the world will change by 2030
William Henry "Bill" Gates III (born October 28, 1955) is an American business magnate, philanthropist, investor, and computer programmer. In 1975, Gates and Paul Allen co-founded Microsoft, which became the world's largest PC software company. During his career at Microsoft, Gates held the positions of chairman, CEO and chief software architect, and was the largest individual shareholder until May 2014. Gates has authored and co-authored several books.
Starting in 1987, Gates was included in the Forbes list of the world's wealthiest people, and he was ranked the wealthiest from 1995 to 2007, again in 2009, and from 2014 until late 2017, when Jeff Bezos overtook him (see the Jeff Bezos entry below). Between 2009 and 2014, his wealth doubled from US $40 billion to more than US $82 billion; between 2013 and 2014 alone, it increased by US $15 billion.
Gates is one of the best-known entrepreneurs of the personal computer revolution. Gates has been criticized for his business tactics, which have been considered anti-competitive, an opinion which has in some cases been upheld by numerous court rulings. Later in his career Gates pursued a number of philanthropic endeavors, donating large amounts of money to various charitable organizations and scientific research programs through the Bill & Melinda Gates Foundation, established in 2000.
Gates stepped down as Chief Executive Officer of Microsoft in January 2000. He remained as Chairman and created the position of Chief Software Architect for himself. In June 2006, Gates announced that he would be transitioning from full-time work at Microsoft to part-time work, and full-time work at the Bill & Melinda Gates Foundation.
He gradually transferred his duties to Ray Ozzie, chief software architect and Craig Mundie, chief research and strategy officer. Ozzie later left the company. Gates's last full-time day at Microsoft was June 27, 2008. He stepped down as Chairman of Microsoft in February 2014, taking on a new post as technology advisor to support newly appointed CEO Satya Nadella.
Jeff Bezos, Amazon Founder
Click on Video for Jeff Bezos on "What matters more than your talents" TED
Jeffrey Preston Bezos (né Jorgensen; born January 12, 1964) is an American technology and retail entrepreneur, investor, electrical engineer, computer scientist, and philanthropist, best known as the founder, chairman, and chief executive officer of Amazon.com, the world's largest online shopping retailer.
The company began as an Internet merchant of books and expanded to a wide variety of products and services, most recently video and audio streaming. Amazon.com is currently the world's largest Internet sales company on the World Wide Web, as well as the world's largest provider of cloud infrastructure services, which is available through its Amazon Web Services arm.
Bezos' other diversified business interests include aerospace and newspapers. He founded the aerospace manufacturer Blue Origin in 2000; the company began test flights to space in 2015 and plans to begin commercial suborbital human spaceflight in 2018.
In 2013, Bezos purchased The Washington Post newspaper. A number of other business investments are managed through Bezos Expeditions.
When the financial markets opened on July 27, 2017, Bezos briefly surpassed Bill Gates on the Forbes list of billionaires to become the world's richest person, with an estimated net worth of just over $90 billion. He lost the title later in the day when Amazon's stock dropped, returning him to second place with a net worth just below $90 billion.
On October 27, 2017, Bezos again surpassed Gates on the Forbes list as the richest person in the world. Bezos's net worth surpassed $100 billion for the first time on November 24, 2017 after Amazon's share price increased by more than 2.5%.
Click on any of the following blue hyperlinks for more about Jeff Bezos:
- Early life and education
- Business career
- Philanthropy
- Recognition
- Criticism
- Personal life
- Politics
- See also:
Drones
YouTube Video: The World's Deadliest Drone: MQ-9 Reaper
Pictured: LEFT: An MQ-9 Reaper, a hunter-killer surveillance UAV; RIGHT: A DJI Phantom UAV for commercial and recreational aerial photography (Courtesy of Capricorn4049 - Own work, CC BY-SA 4.0)
The following are highlights of the Consumer Reports article "10 Ways Drones are Changing Your World":
An unmanned aerial vehicle (UAV), commonly known as a drone, as an unmanned aircraft system (UAS), or by several other names, is an aircraft without a human pilot aboard.
The flight of UAVs may operate with various degrees of autonomy: either under remote control by a human operator, or fully or intermittently autonomously, by onboard computers.
Compared to manned aircraft, UAVs are often preferred for missions that are too "dull, dirty or dangerous" for humans. They originated mostly in military applications, although their use is expanding in commercial, scientific, recreational and other applications, such as policing and surveillance, aerial photography, agriculture and drone racing.
Civilian drones now vastly outnumber military drones, with estimates of over a million sold by 2015.
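As a small illustration of what "degrees of autonomy" can mean in practice, the sketch below replaces a pilot's manual yaw input with a simple onboard proportional loop that holds a heading. The gains and numbers are invented for the example; real autopilots layer attitude, position, and mission control on top of far more careful dynamics, so treat this as a conceptual toy only.

    # Toy illustration of one rung on the autonomy ladder: an onboard loop
    # holds a heading instead of a human nudging the sticks. Invented gains.

    def heading_hold_step(current_deg: float, target_deg: float, kp: float = 0.8) -> float:
        """Return a yaw-rate command (deg/s) that turns toward target_deg
        along the shortest direction."""
        error = (target_deg - current_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        return kp * error

    heading = 350.0
    for _ in range(5):                       # simulate a few 0.1 s control steps
        heading = (heading + 0.1 * heading_hold_step(heading, target_deg=20.0)) % 360.0
    print(round(heading, 1))                 # ~0.2: five steps in, turning from 350 deg toward 20 deg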
For more, click on any of the following:
- Package Delivery (Amazon: WIP)
- Agriculture (enabling farmers to view the health of their crops)
- Photos and Videos of places otherwise inaccessible
- Humanitarian Aid
- First Responders (enabling law enforcement to have an aerial view of a potential crime scene)
- Safety Inspections
- Viewing damage for insurance claims
- Enhancing Internet access through low-flying drones in an otherwise inaccessible area.
- Hurricane and Tornado Forecasts
- Wildlife Conservation
For more, click on any of the following:
- Terminology
- History
- Classification
- UAV components
- Autonomy
- Functions
- Market trends
- Development considerations
- Applications
- Existing UAVs
- Events
- Ethical concerns
- Safety
- Regulation
- Popular culture
- UAE Drones for Good Award
- See also:
- Drones in Agriculture
- International Aerial Robotics Competition
- Micro air vehicle
- Miniature UAV
- Quadcopter
- ParcAberporth
- Radio-controlled aircraft
- Satellite Sentinel Project
- Unmanned underwater vehicle
- Tactical Control System
- List of films featuring drones
- Micromechanical Flying Insect
- Research and groups:
- Drones and Drone Data Technical Interest Group (TIG): technology and techniques (equipment, software, workflows, survey designs) that allow individuals to enhance their capabilities with data obtained from drones and drone surveys. Chaired by Karl Osvald and James McDonald.
Inventions, Their Inventors and Their Timeline
YouTube Video: Timeline of the inventions that changed the world
Pictured: L-R: The Light Bulb as pioneered by Thomas E. Edison, The Model T Ford by Henry Ford (displayed at the Science Museum: Getty Images), First flight of the Wright Flyer I (By the Wright Brothers, 12/17/1903 with Orville piloting, Wilbur running at wingtip)
Click here for a List of Inventors.
Click here for a Timeline of Historic Inventions.
An invention is a unique or novel device, method, composition or process. The invention process is a process within an overall engineering and product development process. It may be an improvement upon a machine or product, or a new process for creating an object or a result.
An invention that achieves a completely unique function or result may be a radical breakthrough. Such works are novel and not obvious to others skilled in the same field. An inventor may be taking a big step toward success or failure.
Some inventions can be patented. A patent legally protects the intellectual property rights of the inventor and legally recognizes that a claimed invention is actually an invention. The rules and requirements for patenting an invention vary from country to country, and the process of obtaining a patent is often expensive.
Another meaning of invention is cultural invention, which is an innovative set of useful social behaviors adopted by people and passed on to others.
The Institute for Social Inventions collected many such ideas in magazines and books. Invention is also an important component of artistic and design creativity. Inventions often extend the boundaries of human knowledge, experience or capability.
An example of an invention we all take for granted is the remote control.
An example of an invention that revolutionized the kitchen (leading to less food spoilage) is the refrigerator.
Click on any of the following blue hyperlinks for more about Inventions:
- Three areas of invention
- Process of invention
- Invention vs. innovation
- Purposes of invention
- Invention as defined by patent law
- Invention in the arts
- See also:
- Bayh-Dole Act
- Chindōgu
- Creativity techniques
- Directive on the legal protection of biotechnological inventions
- Discovery (observation)
- Edisonian approach
- Heroic theory of invention and scientific development
- Independent inventor
- Ingenuity
- INPEX (invention show)
- International Innovation Index
- Invention promotion firm
- Inventors' Day
- Kranzberg's laws of technology
- Lemelson-MIT Prize
- Category:Lists of inventions or discoveries
- List of inventions named after people
- List of prolific inventors
- Multiple discovery
- National Inventors Hall of Fame
- Patent model
- Proof of concept
- Proposed directive on the patentability of computer-implemented inventions - it was rejected
- Scientific priority
- Technological revolution
- The Illustrated Science and Invention Encyclopedia
- Science and invention in Birmingham - The first cotton spinning mill to plastics and steam power.
- Invention Ideas
- List of PCT (Patent Cooperation Treaty) Notable Inventions at WIPO
(Internet-enabled) Smart TV, including a List of Internet Television Providers in the United States
YouTube Video about the Samsung SMART TV Tutorial – Smart Hub 2015 [How-To-Video]
Pictured below: If you're new to streaming TV, here's your guide by StlToday.com
Click here for a List of Internet Television Providers in the United States.
A smart TV, sometimes referred to as connected TV or hybrid TV, is a television set with integrated Internet and interactive "Web 2.0" features.
Smart TV is a technological convergence between computers and flatscreen television sets and set-top boxes. Besides the traditional functions of television sets and set-top boxes provided through traditional broadcasting media, these devices can also provide
- Internet TV,
- online interactive media,
- over-the-top content (OTT),
- as well as on-demand streaming media, and home networking access.
Smart TV should not be confused with Internet TV, IPTV or Web television. Internet TV refers to receiving television content over the Internet instead of traditional systems (terrestrial, cable and satellite) (although Internet itself is received by these methods).
IPTV is one of the Internet television technology standards for use by television broadcasters. Web television is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet TV.
In smart TVs, the operating system is pre-loaded or is available through the set-top box. The software applications or "apps" can be pre-loaded into the device, or updated or installed on demand via an app store or marketplace, in a similar manner to how the apps are integrated in modern smartphones.
The technology that enables smart TVs is also incorporated in external devices such as set-top boxes and some Blu-ray players, game consoles, digital media players, hotel television systems, smartphones, and other network-connected interactive devices that utilize television-type display outputs.
These devices allow viewers to find and play videos, movies, TV shows, photos and other content from the Web, cable or satellite TV channel, or from a local storage device.
A smart TV device is either a television set with integrated Internet capabilities or a set-top box for television that offers more advanced computing ability and connectivity than a contemporary basic television set.
Smart TVs may be thought of as an information appliance, or as the computer system of a handheld computer integrated within a television set unit; as such, a smart TV often allows the user to install and run more advanced applications or plugins/addons based on a specific platform. Smart TVs run a complete operating system or mobile operating system, providing a platform for application developers.
Smart TV platforms or middleware have a public Software development kit (SDK) and/or Native development kit (NDK) for apps so that third-party developers can develop applications for it, and an app store so that the end-users can install and uninstall apps themselves.
The public SDK enables third-party companies and other interactive application developers to “write” applications once and see them run successfully on any device that supports the smart TV platform or middleware architecture which it was written for, no matter who the hardware manufacturer is.
Smart TVs deliver content (such as photos, movies and music) from other computers or network attached storage devices on a network using either a Digital Living Network Alliance / Universal Plug and Play media server or similar service program like Windows Media Player or Network-attached storage (NAS), or via iTunes.
It also provides access to Internet-based services including:
- traditional broadcast TV channels,
- catch-up services,
- video-on-demand (VOD),
- electronic program guide,
- interactive advertising, personalization,
- voting,
- games,
- social networking,
- and other multimedia applications.
Smart TV enables access to movies, shows, video games, apps and more. Some of those apps include Netflix, Spotify, YouTube, and Amazon.
Functions:
Smart TV devices also provide access to user-generated content (either stored on an external hard drive or in cloud storage) and to interactive services and Internet applications, such as YouTube, many using HTTP Live Streaming (also known as HLS) adaptive streaming.
Smart TV devices facilitate the curation of traditional content by combining information from the Internet with content from TV providers. Services offer users a means to track and receive reminders about shows or sporting events, as well as the ability to change channels for immediate viewing.
Some devices feature additional interactive organic user interface / natural user interface technologies for navigation controls and other human interaction with a smart TV, such as second-screen companion devices, spatial gesture input (as with Xbox Kinect), and speech recognition for a natural language user interface.
Features:
Smart TV makers continue to develop new features to satisfy consumers and companies, such as new payment processes. LG and PaymentWall have collaborated to allow consumers to pay for apps, movies, games, and more using a remote control, laptop, tablet, or smartphone, intended to make checkout easier and more convenient.
Background:
In the early 1980s, "intelligent" television receivers were introduced in Japan. The addition of an LSI chip with memory and a character generator to a television receiver enabled Japanese viewers to receive a mix of programming and information transmitted over spare lines of the broadcast television signal.
A patent was published in 1994 (and extended the following year) for an "intelligent" television system, linked with data processing systems by means of a digital or analog network.
Apart from being linked to data networks, one key feature was its ability to automatically download necessary software routines, according to a user's demand, and process them.
The mass acceptance of digital television in the late 2000s and early 2010s greatly improved smart TVs. In 2015, major TV manufacturers announced that they would produce only smart TVs for their mid-range to high-end models.
Smart TVs are expected to become the dominant form of television by the late 2010s. At the beginning of 2016, Nielsen reported that 29 percent of those with incomes over $75,000 a year had a smart TV.
Click on any of the following blue hyperlinks for more about Smart TV:
- Technology
- Security and privacy
- Reliability
- Restriction of access
- Market share
- See also:
- Automatic content recognition
- 10-foot user interface
- Digital Living Network Alliance - DLNA
- Digital media player
- Enhanced TV
- Home theater PC
- Hotel television systems
- Hybrid Broadcast Broadband TV
- Interactive television
- List of smart TV platforms and middleware software
- Over-the-top content
- PC-on-a-stick
- Second screen
- Space shifting
- Telescreen
- Tivoization
- TV Genius
- Video on demand
Robots and Robotics as well as the resulting impact of Automation on Jobs.
YouTube Video of a Robot Disarming Bombs
YouTube Video: CNET News - Meet the robots making Amazon even faster
(A look inside Amazon's warehouse where the Kiva robots are busy moving your orders around. Between December 2014 and January 2015, Amazon deployed 15,000 of these robots in its warehouses.)
Pictured below:
Top-Left: KUKA industrial robots being used at a bakery for food production;
Top-Right: Automated side loader operation
Bottom: Demand for industrial robots to treble in automotive industry
As a lead-in to the following topics covering robots, robotics, and automation, below is the April 18, 2018 Brookings Institution article entitled "Will robots and AI take your job? The economic and political consequences of automation":
By Darrell M. West Wednesday, April 18, 2018
Editor's Note: Darrell M. West is author of the Brookings book “The Future of Work: Robots, AI, and Automation.”
In Edward Bellamy’s classic Looking Backward, the protagonist Julian West wakes up from a 113-year slumber and finds the United States in 2000 has changed dramatically from 1887. People stop working at age forty-five and devote their lives to mentoring other people and engaging in volunteer work that benefits the overall community. There are short work weeks for employees, and everyone receives full benefits, food, and housing.
The reason is that new technologies of the period have enabled people to be very productive while working part-time. Businesses do not need large numbers of employees, so individuals can devote most of their waking hours to hobbies, volunteering, and community service. In conjunction with periodic work stints, they have time to pursue new skills and personal identities that are independent of their jobs.
In the current era, developed countries may be on the verge of a similar transition. Robotics and machine learning have improved productivity and enhanced the economies of many nations.
Artificial intelligence (AI) has advanced into finance, transportation, defense, and energy management. The internet of things (IoT) is facilitated by high-speed networks and remote sensors to connect people and businesses. In all of this, there is a possibility of a new era that could improve the lives of many people.
Yet amid these possible benefits, there is widespread fear that robots and AI will take jobs and throw millions of people into poverty. A Pew Research Center study asked 1,896 experts about the impact of emerging technologies and found “half of these experts (48 percent) envision a future in which robots and digital agents [will] have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.”
These fears have been echoed by detailed analyses showing anywhere from a 14 to 54 percent automation impact on jobs. For example, a Bruegel analysis found that “54% of EU jobs [are] at risk of computerization.” Using European data, they argue that job losses are likely to be significant and people should prepare for large-scale disruption.
Meanwhile, Oxford University researchers Carl Frey and Michael Osborne claim that technology will transform many sectors of life. They studied 702 occupational groupings and found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”
A McKinsey Global Institute analysis of 750 jobs concluded that “45% of paid activities could be automated using ‘currently demonstrated technologies’ and . . . 60% of occupations could have 30% or more of their processes automated.”
A more recent McKinsey report, “Jobs Lost, Jobs Gained,” found that 30 percent of “work activities” could be automated by 2030 and up to 375 million workers worldwide could be affected by emerging technologies.
Researchers at the Organization for Economic Cooperation and Development (OECD) focused on “tasks” as opposed to “jobs” and found fewer job losses. Using task-related data from 32 OECD countries, they estimated that 14 percent of jobs are highly automatable and another 32 percent have a significant risk of automation. Although their job loss estimates are below those of other experts, they concluded that “low qualified workers are likely to bear the brunt of the adjustment costs as the automatibility of their jobs is higher compared to highly qualified workers.”
While some dispute the dire predictions on grounds new positions will be created to offset the job losses, the fact that all these major studies report significant workforce disruptions should be taken seriously.
If the employment impact falls at the 38 percent mean of these forecasts, Western democracies likely could resort to authoritarianism as happened in some countries during the Great Depression of the 1930s in order to keep their restive populations in check. If that happened, wealthy elites would require armed guards, security details, and gated communities to protect themselves, as is the case in poor countries today with high income inequality. The United States would look like Syria or Iraq, with armed bands of young men with few employment prospects other than war, violence, or theft.
Yet even if the job ramifications lie more at the low end of disruption, the political consequences will still be severe. Relatively small increases in unemployment or underemployment have an outsized political impact. We saw that a decade ago, when 10 percent unemployment during the Great Recession spawned the Tea Party and eventually helped to make Donald Trump president.
With some workforce disruption virtually guaranteed by trends already underway, it is safe to predict American politics will be chaotic and turbulent during the coming decades. As innovation accelerates and public anxiety intensifies, right-wing and left-wing populists will jockey for voter support.
Government control could gyrate between very conservative and very liberal leaders as each side blames a different set of scapegoats for economic outcomes voters don’t like. The calm and predictable politics of the post-World War II era likely will become a distant memory as the American system moves toward Trumpism on steroids.
[End of Article]
___________________________________________________________________________
A robot is a machine, especially one programmable by a computer, capable of carrying out a complex series of actions automatically.
Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.
Robots can be autonomous or semi-autonomous and range from humanoids such as Honda's Advanced Step in Innovative Mobility (ASIMO) and TOSY's TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots.
By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own. Autonomous Things are expected to proliferate in the coming decade, with home robotics and the autonomous car as some of the main drivers.
The branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing is robotics. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, or cognition. Many of today's robots are inspired by nature contributing to the field of bio-inspired robotics. These robots have also created a newer branch of robotics: soft robotics.
From the time of ancient civilization there have been many accounts of user-configurable automated devices and even automata resembling animals and humans, designed primarily as entertainment. As mechanical techniques developed through the Industrial age, there appeared more practical applications such as automated machines, remote-control and wireless remote-control.
The term comes from a Czech word, robota, meaning "forced labor"; the word "robot" was first used to denote a fictional humanoid in the 1920 play R.U.R. by the Czech writer Karel Čapek, but it was Karel's brother Josef Čapek who actually coined the word.
Electronics evolved into the driving force of development with the advent of the first electronic autonomous robots created by William Grey Walter in Bristol, England in 1948, as well as Computer Numerical Control (CNC) machine tools in the late 1940s by John T. Parsons and Frank L. Stulen.
The first commercial, digital and programmable robot was built by George Devol in 1954 and was named the Unimate. It was sold to General Motors in 1961 where it was used to lift pieces of hot metal from die casting machines at the Inland Fisher Guide Plant in the West Trenton section of Ewing Township, New Jersey.
Robots have replaced humans in performing repetitive and dangerous tasks which humans prefer not to do, or are unable to do because of size limitations, or which take place in extreme environments such as outer space or the bottom of the sea. There are concerns about the increasing use of robots and their role in society.
Robots are blamed for rising technological unemployment as they replace workers in increasing numbers of functions. The use of robots in military combat raises ethical concerns. The possibilities of robot autonomy and potential repercussions have been addressed in fiction and may be a realistic concern in the future.
Click on any of the following blue hyperlinks for more about Robots:
- Summary
- History
- Future development and trends
- New functionalities and prototypes
- Etymology
- Modern robots
- Robots in society
- Contemporary uses
- Robots in popular culture
- See also:
- Specific robotics concepts
- Robotics methods and categories
- Specific robots and devices
- Index of robotics articles
- Outline of robotics
- William Grey Walter
Robotics is an interdisciplinary branch of engineering and science that includes mechanical engineering, electronics engineering, computer science, and others.
Robotics deals with the design, construction, operation, and use of robots (above), as well as computer systems for their control, sensory feedback, and information processing.
These technologies are used to develop machines that can substitute for humans and replicate human actions. Robots can be used in any situation and for any purpose, but today many are used in dangerous environments (including bomb detection and deactivation), manufacturing processes, or where humans cannot survive.
Robots can take on any form but some are made to resemble humans in appearance. This is said to help in the acceptance of a robot in certain replicative behaviors usually performed by people. Such robots attempt to replicate walking, lifting, speech, cognition, and basically anything a human can do. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.
The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century. Throughout history, it has been frequently assumed that robots will one day be able to mimic human behavior and manage tasks in a human-like fashion.
Today, robotics is a rapidly growing field, as technological advances continue; researching, designing, and building new robots serve various practical purposes, whether domestically, commercially, or militarily. Many robots are built to do jobs that are hazardous to people such as defusing bombs, finding survivors in unstable ruins, and exploring mines and shipwrecks.
Robotics is also used in STEM (science, technology, engineering, and mathematics) as a teaching aid.
Robotics is a branch of engineering that involves the conception, design, manufacture, and operation of robots. This field overlaps with electronics, computer science, artificial intelligence, mechatronics, nanotechnology and bioengineering.
Science-fiction author Isaac Asimov is often given credit for being the first person to use the term robotics, in a short story composed in the 1940s. In the story, Asimov suggested three principles to guide the behavior of robots and smart machines. Asimov's Three Laws of Robotics, as they are called, have survived to the present:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Click on any of the following blue hyperlinks for more about Robotics:
Automation is the technology by which a process or procedure is performed without human assistance.
Automation, or automatic control, is the use of various control systems for operating equipment such as machinery, processes in factories, boilers and heat-treating ovens, switching on telephone networks, and steering and stabilization of ships, aircraft and other applications, with minimal or reduced human intervention. Some processes have been completely automated.
The biggest benefit of automation is that it saves labor; however, it is also used to save energy and materials and to improve quality, accuracy and precision.
The term automation was not widely used before 1947, when General Motors established an automation department. It was during this time that industry was rapidly adopting feedback controllers, which were introduced in the 1930s.
Automation has been achieved by various means, including mechanical, hydraulic, pneumatic, electrical, electronic devices and computers, usually in combination. Complicated systems, such as modern factories, airplanes and ships, typically use all of these techniques in combination.
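At its core, much of this automation is feedback control: measure a quantity, compare it with a setpoint, and act on the difference. The sketch below shows the simplest version, an on/off (bang-bang) loop with hysteresis, in the spirit of the boilers and heat-treating ovens mentioned above. All numbers are invented for the example; a real industrial controller would be properly tuned (e.g. PID) rather than this toy.

    # Minimal sketch of a feedback-control loop: measure, compare, act.
    # Toy thermostat with invented numbers, not any real boiler or oven controller.

    def thermostat_step(temperature: float, setpoint: float, heater_on: bool,
                        hysteresis: float = 2.0) -> bool:
        """On/off (bang-bang) control with a small hysteresis band."""
        if temperature < setpoint - hysteresis:
            return True                       # too cold: switch the heater on
        if temperature > setpoint + hysteresis:
            return False                      # too hot: switch it off
        return heater_on                      # inside the band: keep the current state

    temp, heater = 20.0, False
    for _ in range(30):                       # crude simulation of the heated process
        heater = thermostat_step(temp, setpoint=180.0, heater_on=heater)
        temp += 8.0 if heater else -1.5       # heat up, or cool toward ambient
    print(round(temp, 1), heater)             # settles near the 180-degree setpoint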
Click on any of the following hyperlinks for amplification:
By Darrell M. West Wednesday, April 18, 2018
Editor's Note: Darrell M. West is author of the Brookings book “The Future of Work: Robots, AI, and Automation.”
In Edward Bellamy’s classic Looking Backward, the protagonist Julian West wakes up from a 113-year slumber and finds the United States in 2000 has changed dramatically from 1887. People stop working at age forty-five and devote their lives to mentoring other people and engaging in volunteer work that benefits the overall community. There are short work weeks for employees, and everyone receives full benefits, food, and housing.
The reason is that new technologies of the period have enabled people to be very productive while working part-time. Businesses do not need large numbers of employees, so individuals can devote most of their waking hours to hobbies, volunteering, and community service. In conjunction with periodic work stints, they have time to pursue new skills and personal identities that are independent of their jobs.
In the current era, developed countries may be on the verge of a similar transition. Robotics and machine learning have improved productivity and enhanced the economies of many nations.
Artificial intelligence (AI) has advanced into finance, transportation, defense, and energy management. The internet of things (IoT) is facilitated by high-speed networks and remote sensors to connect people and businesses. In all of this, there is a possibility of a new era that could improve the lives of many people.
Yet amid these possible benefits, there is widespread fear that robots and AI will take jobs and throw millions of people into poverty. A Pew Research Center study asked 1,896 experts about the impact of emerging technologies and found “half of these experts (48 percent) envision a future in which robots and digital agents [will] have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.”
These fears have been echoed by detailed analyses showing anywhere from a 14 to 54 percent automation impact on jobs. For example, a Bruegel analysis found that “54% of EU jobs [are] at risk of computerization.” Using European data, they argue that job losses are likely to be significant and people should prepare for large-scale disruption.
Meanwhile, Oxford University researchers Carl Frey and Michael Osborne claim that technology will transform many sectors of life. They studied 702 occupational groupings and found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”
A McKinsey Global Institute analysis of 750 jobs concluded that “45% of paid activities could be automated using ‘currently demonstrated technologies’ and . . . 60% of occupations could have 30% or more of their processes automated.”
A more recent McKinsey report, “Jobs Lost, Jobs Gained,” found that 30 percent of “work activities” could be automated by 2030 and up to 375 million workers worldwide could be affected by emerging technologies.
Researchers at the Organization for Economic Cooperation and Development (OECD) focused on “tasks” as opposed to “jobs” and found fewer job losses. Using task-related data from 32 OECD countries, they estimated that 14 percent of jobs are highly automatable and another 32 have a significant risk of automation. Although their job loss estimates are below those of other experts, they concluded that “low qualified workers are likely to bear the brunt of the adjustment costs as the automatibility of their jobs is higher compared to highly qualified workers.”
While some dispute the dire predictions on grounds new positions will be created to offset the job losses, the fact that all these major studies report significant workforce disruptions should be taken seriously.
If the employment impact falls at the 38 percent mean of these forecasts, Western democracies likely could resort to authoritarianism as happened in some countries during the Great Depression of the 1930s in order to keep their restive populations in check. If that happened, wealthy elites would require armed guards, security details, and gated communities to protect themselves, as is the case in poor countries today with high income inequality. The United States would look like Syria or Iraq, with armed bands of young men with few employment prospects other than war, violence, or theft.
Yet even if the job ramifications lie more at the low end of disruption, the political consequences still will be severe. Relatively small increases in unemployment or underemployment have an outsized political impact. We saw that a decade ago when 10 percent unemployment during the Great Recession spawned the Tea party and eventually helped to make Donald Trump president.
With some workforce disruption virtually guaranteed by trends already underway, it is safe to predict American politics will be chaotic and turbulent during the coming decades. As innovation accelerates and public anxiety intensifies, right-wing and left-wing populists will jockey for voter support.
Government control could gyrate between very conservative and very liberal leaders as each side blames a different set of scapegoats for economic outcomes voters don’t like. The calm and predictable politics of the post-World War II era likely will become a distant memory as the American system moves toward Trumpism on steroids.
[End of Article]
___________________________________________________________________________
A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically.
Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.
Robots can be autonomous or semi-autonomous and range from humanoids such as Honda's Advanced Step in Innovative Mobility (ASIMO) and TOSY's TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots.
By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own. Autonomous Things are expected to proliferate in the coming decade, with home robotics and the autonomous car as some of the main drivers.
The branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing is robotics. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, or cognition. Many of today's robots are inspired by nature contributing to the field of bio-inspired robotics. These robots have also created a newer branch of robotics: soft robotics.
From the time of ancient civilization there have been many accounts of user-configurable automated devices and even automata resembling animals and humans, designed primarily as entertainment. As mechanical techniques developed through the Industrial age, there appeared more practical applications such as automated machines, remote-control and wireless remote-control.
The term comes from the Czech word robota, meaning "forced labor"; the word "robot" was first used to denote a fictional humanoid in the 1920 play R.U.R. by the Czech writer Karel Čapek, but it was Karel's brother Josef Čapek who was the word's true inventor.
Electronics evolved into the driving force of development with the advent of the first electronic autonomous robots, created by William Grey Walter in Bristol, England, in 1948, as well as Computer Numerical Control (CNC) machine tools, developed in the late 1940s by John T. Parsons and Frank L. Stulen.
The first commercial, digital and programmable robot was built by George Devol in 1954 and was named the Unimate. It was sold to General Motors in 1961 where it was used to lift pieces of hot metal from die casting machines at the Inland Fisher Guide Plant in the West Trenton section of Ewing Township, New Jersey.
Robots have replaced humans in performing repetitive and dangerous tasks which humans prefer not to do, or are unable to do because of size limitations, or which take place in extreme environments such as outer space or the bottom of the sea. There are concerns about the increasing use of robots and their role in society.
Robots are blamed for rising technological unemployment as they replace workers in increasing numbers of functions. The use of robots in military combat raises ethical concerns. The possibilities of robot autonomy and potential repercussions have been addressed in fiction and may be a realistic concern in the future.
Click on any of the following blue hyperlinks for more about Robots:
- Summary
- History
- Future development and trends
- New functionalities and prototypes
- Etymology
- Modern robots
- Robots in society
- Contemporary uses
- Robots in popular culture
- See also:
- Specific robotics concepts
- Robotics methods and categories
- Specific robots and devices
- Index of robotics articles
- Outline of robotics
- William Grey Walter
Robotics is an interdisciplinary branch of engineering and science that includes mechanical engineering, electronics engineering, computer science, and others.
Robotics deals with the design, construction, operation, and use of robots (above), as well as computer systems for their control, sensory feedback, and information processing.
These technologies are used to develop machines that can substitute for humans and replicate human actions. Robots can be used in any situation and for any purpose, but today many are used in dangerous environments (including bomb detection and deactivation), manufacturing processes, or where humans cannot survive.
Robots can take on any form but some are made to resemble humans in appearance. This is said to help in the acceptance of a robot in certain replicative behaviors usually performed by people. Such robots attempt to replicate walking, lifting, speech, cognition, and basically anything a human can do. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.
The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century. Throughout history, it has been frequently assumed that robots will one day be able to mimic human behavior and manage tasks in a human-like fashion.
Today, robotics is a rapidly growing field, as technological advances continue; researching, designing, and building new robots serve various practical purposes, whether domestically, commercially, or militarily. Many robots are built to do jobs that are hazardous to people such as defusing bombs, finding survivors in unstable ruins, and exploring mines and shipwrecks.
Robotics is also used in STEM (science, technology, engineering, and mathematics) as a teaching aid.
Robotics is a branch of engineering that involves the conception, design, manufacture, and operation of robots. This field overlaps with electronics, computer science, artificial intelligence, mechatronics, nanotechnology and bioengineering.
Science-fiction author Isaac Asimov is often given credit for being the first person to use the term robotics in a short story composed in the 1940s. In the story, Asimov suggested three principles to guide the behavior of robots and smart machines. Asimov's Three Laws of Robotics, as they are called, have survived to the present:
- Robots must never harm human beings.
- Robots must follow instructions from humans without violating rule 1.
- Robots must protect themselves without violating the other rules.
Click on any of the following blue hyperlinks for more about Robotics:
- Etymology
- History
- Robotic aspects
- Applications
- Components
- Control
- Research
- Education and training
- Summer robotics camp
- Robotics competitions
- Employment
- Occupational safety and health implications
- See also:
- Robotics portal
- Anderson Powerpole connector
- Artificial intelligence
- Autonomous robot
- Cloud robotics
- Cognitive robotics
- Evolutionary robotics
- Glossary of robotics
- Index of robotics articles
- Mechatronics
- Multi-agent system
- Outline of robotics
- Roboethics
- Robot rights
- Robotic governance
- Soft robotics
- IEEE Robotics and Automation Society
- Investigation of social robots – Robots that mimic human behaviors and gestures.
- Wired's guide to the '50 best robots ever', a mix of robots in fiction (Hal, R2D2, K9) to real robots (Roomba, Mobot, Aibo).
- Notable Chinese Firms Emerging in Medical Robots Sector (GCiS)
Automation is the technology by which a process or procedure is performed without human assistance.
Automation, or automatic control, is the use of various control systems for operating equipment such as machinery, processes in factories, boilers and heat-treating ovens, switching on telephone networks, and the steering and stabilization of ships, aircraft and other applications, with minimal or reduced human intervention. Some processes have been completely automated.
The biggest benefit of automation is that it saves labor; however, it is also used to save energy and materials and to improve quality, accuracy and precision.
The term automation was not widely used before 1947, when General Motors established an automation department. It was during this time that industry was rapidly adopting feedback controllers, which were introduced in the 1930s.
Automation has been achieved by various means including mechanical, hydraulic, pneumatic, electrical, electronic devices and computers, usually in combination. Complicated systems, such as modern factories, airplanes and ships typically use all these combined techniques.
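The feedback controllers mentioned above are the heart of automatic control: a sensor measures the process, the controller compares the measurement with a setpoint, and an actuator corrects the difference without human intervention. The following is a minimal, purely illustrative Python sketch of that idea; the oven model, the function name simulate_oven, the gain and all numbers are invented for the example and are not taken from this article.

# A minimal sketch of proportional feedback control: keep a simulated oven
# near a setpoint by measuring the error and adjusting heater power,
# with no human intervention. All values are illustrative.

def simulate_oven(setpoint_c=200.0, ambient_c=20.0, steps=60):
    temperature = ambient_c                  # current oven temperature (deg C)
    gain = 0.05                              # proportional gain (invented value)
    for step in range(steps):
        error = setpoint_c - temperature                     # feedback: measure the error
        heater_power = max(0.0, min(1.0, gain * error))      # clamp to 0..100%
        # very rough plant model: heating raises temperature, losses pull it toward ambient
        temperature += 8.0 * heater_power - 0.02 * (temperature - ambient_c)
        if step % 10 == 0:
            print(f"t={step:2d}  temp={temperature:6.1f} C  power={heater_power:4.0%}")

if __name__ == "__main__":
    simulate_oven()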
Click on any of the following hyperlinks for amplification:
- Types of automation
- History
- Advantages and disadvantages
- Lights out manufacturing
- Health and environment
- Convertibility and turnaround time
- Automation tools
- Recent and emerging applications
- Relationship to unemployment
- See also:
- Accelerating change
- Artificial intelligence – automated thought
- Automated reasoning
- Automatic Tool Changer
- Automation protocols
- Automation Technician
- BELBIC
- Controller
- Conveyor
- Conveyor belt
- Cybernetics
- EnOcean
- Feedforward Control
- Hardware architect
- Hardware architecture
- Industrial engineering
- International Society of Automation
- Machine to Machine
- Mobile manipulator
- Multi-agent system
- Odo J. Struger
- OLE for process control
- OPC Foundation
- Pharmacy automation
- Pneumatics automation
- Process control
- Retraining
- Robotics
- Autonomous robot
- Run Book Automation (RBA)
- Sensor-based sorting
- Stepper motor
- Support automation
- System integration
- Systems architect
- Systems architecture
- Luddite
- Mechatronics
- Mechanization
- Deskilling
Alphabet, Inc. (Google's Parent Company)
YouTube Video: How did Google Get So Big? (60 Minutes 5/21/18)
YouTube Video: How Does Google Maps Work?
Pictured below: Google announced its plan to form a conglomerate of companies called Alphabet in August 2015. That announcement has now become reality: a new parent company named Alphabet has been formed, and Google splits into various companies on the basis of the functions they provide, with Google itself retaining its identity.
The below links provide critical viewpoints about Google:
- Click here: How Did Google Get So Big? (60 Minutes 5/21/18: see above YouTube).
- Click here: Google Tries Being Slightly Less Evil. (Vanity Fair 6/8/18)
Alphabet Inc. is an American multinational conglomerate headquartered in Mountain View, California. It was created through a corporate restructuring of Google on October 2, 2015 and became the parent company of Google and several former Google subsidiaries.
The two founders of Google assumed executive roles in the new company, with Larry Page serving as CEO and Sergey Brin as President. It has 80,110 employees (as of December 2017).
Alphabet's portfolio encompasses several industries, including technology, life sciences, investment capital, and research. Some of its subsidiaries include:
Some of the subsidiaries of Alphabet have altered their names since leaving Google and becoming part of the new parent company:
Following the restructuring, Page became CEO of Alphabet and Sundar Pichai took his position as CEO of Google. Shares of Google's stock have been converted into Alphabet stock, which trade under Google's former ticker symbols of "GOOG" and "GOOGL".
The establishment of Alphabet was prompted by a desire to make the core Google Internet services business "cleaner and more accountable" while allowing greater autonomy to group companies that operate in businesses other than Internet services.
Click on any of the following blue hyperlinks to learn more about Alphabet, Inc.:
- History
- Website
- Structure
- Proposed growth
- Restructuring process
- Lawsuit
- Investments and acquisitions
- See also:
- Official website
- Business data for Alphabet Inc: Google Finance
- Yahoo! Finance
- Reuters
- SEC filings
HDMI (High-Definition Multimedia Interface)
YouTube Video: Connect Computer to TV With HDMI With AUDIO/Sound
Pictured: The HDMI Advantage
HDMI (High-Definition Multimedia Interface) is a proprietary audio/video interface for transmitting uncompressed video data and compressed or uncompressed digital audio data from an HDMI-compliant source device, such as a display controller, to a compatible computer monitor, video projector, digital television, or digital audio device. HDMI is a digital replacement for analog video standards.
HDMI implements the EIA/CEA-861 standards, which define video formats and waveforms, transport of compressed, uncompressed, and LPCM audio, auxiliary data, and implementations of the VESA EDID. CEA-861 signals carried by HDMI are electrically compatible with the CEA-861 signals used by the Digital Visual Interface (DVI).
No signal conversion is necessary, nor is there a loss of video quality when a DVI-to-HDMI adapter is used. The CEC (Consumer Electronics Control) capability allows HDMI devices to control each other when necessary and allows the user to operate multiple devices with one handheld remote control.
Several versions of HDMI have been developed and deployed since initial release of the technology but all use the same cable and connector. Other than improved audio and video capacity, performance, resolution and color spaces, newer versions have optional advanced features such as 3D, Ethernet data connection, and CEC (Consumer Electronics Control) extensions.
Production of consumer HDMI products started in late 2003. In Europe either DVI-HDCP or HDMI is included in the HD ready in-store labeling specification for TV sets for HDTV, formulated by EICTA with SES Astra in 2005. HDMI began to appear on consumer HDTV, camcorders and digital still cameras in 2006. As of January 6, 2015 (twelve years after the release of the first HDMI specification), over 4 billion HDMI devices have been sold.
Click on any of the following blue hyperlinks for more information about HDMI:
- History
- Specifications
- Versions
- Version comparison
- Applications
- HDMI Alternate Mode for USB Type-C
- Relationship with DisplayPort
- Relationship with MHL
- See also:
Animation
YouTube Video: The 5 Types of Animation
Animation is a dynamic medium in which images or objects are manipulated to appear as moving images. In traditional animation the images were drawn (or painted) by hand on cels to be photographed and exhibited on film. Nowadays most animations are made with computer-generated imagery (CGI).
Computer animation can be very detailed 3D animation, while 2D computer animation can be used for stylistic reasons, low bandwidth, or faster real-time rendering. Other common animation methods apply a stop motion technique to two- and three-dimensional objects such as paper cutouts, puppets or clay figures. The stop motion technique in which live actors are used as a frame-by-frame subject is known as pixilation.
Commonly the effect of animation is achieved by a rapid succession of sequential images that minimally differ from each other. The illusion—as in motion pictures in general—is thought to rely on the phi phenomenon and beta movement, but the exact causes are still uncertain.
Analog mechanical animation media that rely on the rapid display of sequential images include the phénakisticope, zoetrope, flip book, praxinoscope and film.
Television and video are popular electronic animation media that originally were analog and now operate digitally. For display on the computer, techniques like animated GIF and Flash animation were developed.
Apart from short films, feature films, animated GIFs and other media dedicated to the display of moving images, animation is also heavily used for video games, motion graphics and special effects.
The physical movement of image parts through simple mechanics, as in the moving images of magic lantern shows, can also be considered animation. Mechanical animation of actual robotic devices is known as animatronics.
Animators are artists who specialize in creating animation.
Click on any of the following blue hyperlinks for more about Animation Technology:
- History
- Techniques
- Traditional animation
  - Full animation
  - Limited animation
  - Rotoscoping
  - Live-action/animation
- Stop motion animation
- Computer animation
  - 2D animation
  - 3D animation
  - 3D terms
- Mechanical animation
- Other animation styles, techniques, and approaches
- Production
- Criticism
- Animation and Human Rights
- Awards
- See also:
- 12 basic principles of animation
- Animated war film
- Animation department
- Animation software
- Architectural animation
- Avar (animation variable)
- Independent animation
- International Animated Film Association
- International Tournée of Animation
- List of motion picture topics
- Model sheet
- Motion graphic design
- Society for Animation Studies
- Tradigital art
- Wire-frame model
- The making of an 8-minute cartoon short
- Importance of animation and its utilization in varied industries
- "Animando", a 12-minute film demonstrating 10 different animation techniques (and teaching how to use them).
- 19 types of animation techniques and styles
Image File Formats
YouTube Video: how to pick the correct file format for images
Image file formats are standardized means of organizing and storing digital images. Image files are composed of digital data in one of these formats that can be rasterized for use on a computer display or printer.
An image file format may store data in uncompressed, compressed, or vector formats. Once rasterized, an image becomes a grid of pixels, each of which has a number of bits to designate its color equal to the color depth of the device displaying it.
Image File Sizes:
The size of raster image files is positively correlated with the resolution and image size (number of pixels) and the color depth (bits per pixel). Images can be compressed in various ways, however.
A compression algorithm stores either an exact representation or an approximation of the original image in a smaller number of bytes that can be expanded back to its uncompressed form with a corresponding decompression algorithm. Images with the same number of pixels and color depth can have very different compressed file sizes.
Considering exactly the same compression, number of pixels, and color depth for two images, different graphical complexity of the original images may also result in very different file sizes after compression due to the nature of compression algorithms. With some compression formats, images that are less complex may result in smaller compressed file sizes.
This characteristic sometimes results in a smaller file size for some lossless formats than lossy formats. For example, graphically simple images (i.e. images with large continuous regions like line art or animation sequences) may be losslessly compressed into a GIF or PNG format and result in a smaller file size than a lossy JPEG format.
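As a rough illustration of why graphical complexity matters, the Python sketch below losslessly compresses two byte strings of equal length with the standard-library zlib module: a "simple image" made of large flat regions (like line art) and a "complex image" of pseudo-random values. This is a toy stand-in, not tied to any particular image format, and all values are invented for the example.

import random
import zlib

# Two fake 'images' of the same uncompressed size (64 kB of 8-bit pixels).
simple = bytes([0] * 32768 + [255] * 32768)                      # large flat regions
random.seed(0)
complex_ = bytes(random.randrange(256) for _ in range(65536))    # noisy, detailed content

for name, data in [("simple", simple), ("complex", complex_)]:
    packed = zlib.compress(data, 9)          # lossless, like the DEFLATE stage used by PNG
    print(f"{name:7s}: {len(data)} bytes -> {len(packed)} bytes")

# Typical result: the flat image shrinks to a tiny fraction of its original size,
# while the noisy image barely compresses at all.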
Vector images, unlike raster images, can be any dimension independent of file size. File size increases only with the addition of more vectors.
For example, a 640 * 480 pixel image with 24-bit color would occupy almost a megabyte of space:
640 * 480 * 24 = 7,372,800 bits = 921,600 bytes = 900 kB
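The same arithmetic can be written as a small helper function; this is a minimal sketch (the helper name uncompressed_size is invented here), using the 1024-bytes-per-kB convention that the "900 kB" figure above assumes.

def uncompressed_size(width, height, bits_per_pixel):
    """Return the raw (uncompressed) raster size in bits, bytes and kB."""
    bits = width * height * bits_per_pixel
    nbytes = bits // 8
    kb = nbytes / 1024                       # the '900 kB' above uses 1024 bytes per kB
    return bits, nbytes, kb

bits, nbytes, kb = uncompressed_size(640, 480, 24)
print(f"{bits:,} bits = {nbytes:,} bytes = {kb:.0f} kB")   # 7,372,800 bits = 921,600 bytes = 900 kB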
Image File Compression:
There are two types of image file compression algorithms: lossless and lossy.
Lossless compression algorithms reduce file size while preserving a perfect copy of the original uncompressed image. Lossless compression generally, but not always, results in larger files than lossy compression. Lossless compression should be used to avoid accumulating stages of re-compression when editing images.
Lossy compression algorithms preserve a representation of the original uncompressed image that may appear to be a perfect copy, but it is not a perfect copy. Often lossy compression is able to achieve smaller file sizes than lossless compression. Most lossy compression algorithms allow for variable compression that trades image quality for file size.
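A minimal sketch of the difference, using stand-ins rather than real codecs (the names and numbers are invented for illustration): zlib round-trips the data exactly, which is the lossless case, while coarse quantization of pixel values, used here as a stand-in for lossy coding, shrinks the data but cannot restore the original exactly.

import zlib

pixels = bytes(range(256)) * 16              # toy 8-bit grayscale data (4,096 bytes)

# Lossless: compress and decompress; the result is bit-for-bit identical.
packed = zlib.compress(pixels)
assert zlib.decompress(packed) == pixels

# 'Lossy' stand-in: quantize each pixel to 16 levels before compressing.
quantized = bytes((p // 16) * 16 for p in pixels)
packed_lossy = zlib.compress(quantized)

print("lossless:", len(packed), "bytes, exact copy:", zlib.decompress(packed) == pixels)
print("lossy:   ", len(packed_lossy), "bytes, exact copy:", quantized == pixels)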
Major graphic file formats:
See also: Comparison of graphics file formats § Technical details
Click on any of the following blue hyperlinks for more about Image File Formats:
Video including Film and Video Technology
YouTube Video: How to become a Videographer - Videographer tips
Click here for a List of Film and Video Technologies
Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media.
Video was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) systems which were later replaced by flat panel displays of several types.
Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcast, magnetic tape, optical discs, computer files, and network streaming.
History.
See also: History of television
Video technology was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) television systems, but several new technologies for video display devices have since been invented. Video was originally exclusively a live technology. Charles Ginsburg led an Ampex research team that developed one of the first practical video tape recorders (VTR). In 1951, the first video tape recorder captured live images from television cameras by converting the camera's electrical impulses and saving the information onto magnetic video tape.
Video recorders were sold for US $50,000 in 1956, and videotapes cost US $300 per one-hour reel. However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes into the consumer market.
The use of digital techniques in video created digital video, which allows higher quality and, eventually, much lower cost than earlier analog technology.
After the invention of the DVD in 1997 and Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted.
Advances in computer technology allow even inexpensive personal computers and smartphones to capture, store, edit and transmit digital video, further reducing the cost of video production and allowing program-makers and broadcasters to move to tapeless production.
The advent of digital broadcasting and the subsequent digital television transition is in the process of relegating analog video to the status of a legacy technology in most parts of the world.
As of 2015, with the increasing use of high-resolution video cameras with improved dynamic range and color gamuts, and high-dynamic-range digital intermediate data formats with improved color depth, modern digital video technology is converging with digital film technology.
Characteristics of video streams:
Number of frames per second
Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa etc.) specify 25 frame/s, while NTSC standards (USA, Canada, Japan, etc.) specify 29.97 frame/s.
Film is shot at the slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second.
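One common way to bridge 24 frame/s film and roughly 30 frame/s interlaced video is 2:3 pulldown, in which successive film frames are held for two video fields and then three. The article does not describe the procedure, so the sketch below is illustrative only (the function name two_three_pulldown is invented); it shows how four film frames become ten fields, that is, five interlaced video frames.

def two_three_pulldown(film_frames):
    """Expand 24 fps film frames into interlaced video fields using a 2:3 cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeat = 2 if i % 2 == 0 else 3      # alternate: 2 fields, then 3 fields
        fields.extend([frame] * repeat)
    return fields

film = ["A", "B", "C", "D"]                  # 4 film frames
fields = two_three_pulldown(film)
print(fields)                                # ['A','A','B','B','B','C','C','D','D','D']
print(len(film), "film frames ->", len(fields), "fields =", len(fields) // 2, "video frames")
# 24 film frames/s * 2.5 fields per frame = 60 fields/s = 30 video frames/s
# (29.97 frame/s after the slight NTSC slowdown).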
Interlaced vs progressive:
Video can be interlaced or progressive. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image.
Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second. Interlacing retains detail while requiring lower bandwidth compared to progressive scanning.
In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively, and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines.
Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display.
NTSC, PAL and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing. For example, PAL video format is often described as 576i50, where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.
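A minimal sketch, not taken from the article, of how a frame's scan lines are split into the two fields described above and then woven back together; the line numbering follows the odd/even convention used in the text, and the frame here is just a list of labeled lines.

# A frame as a list of scan lines, numbered consecutively starting at 1.
frame = [f"line {n}" for n in range(1, 11)]

odd_field = frame[0::2]      # lines 1, 3, 5, ...  (upper field)
even_field = frame[1::2]     # lines 2, 4, 6, ...  (lower field)

# Weave the two fields back into a full frame.
rebuilt = [None] * len(frame)
rebuilt[0::2] = odd_field
rebuilt[1::2] = even_field
assert rebuilt == frame

print("odd field: ", odd_field)
print("even field:", even_field)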
When displaying a natively interlaced signal on a progressive scan device, overall spatial resolution is degraded by simple line doubling—artifacts such as flickering or "comb" effects in moving parts of the image which appear unless special signal processing eliminates them.
A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD or satellite source on a progressive scan device such as an LCD television, digital video projector or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.
Aspect ratio:
Aspect ratio describes the dimensions of video screens and video picture elements. All popular video formats are rectilinear, and so can be described by a ratio between width and height. The screen aspect ratio of a traditional television screen is 4:3, or about 1.33:1. High definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1.
Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard, and the corresponding anamorphic widescreen formats.
Therefore, a 720 by 480 pixel NTSC DV image displays with the 4:3 aspect ratio (the traditional television standard) if the pixels are thin, and displays at the 16:9 aspect ratio (the anamorphic widescreen format) if the pixels are fat.
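The relationship can be expressed as display aspect ratio = (width / height) x pixel aspect ratio. Below is a minimal sketch using the commonly quoted NTSC DV pixel aspect ratios of roughly 10/11 ("thin") and 40/33 ("fat"); the exact values vary slightly by standard, and the helper name display_aspect_ratio is invented for the example.

def display_aspect_ratio(width_px, height_px, pixel_aspect_ratio):
    """Display aspect ratio = storage aspect ratio * pixel aspect ratio."""
    return (width_px / height_px) * pixel_aspect_ratio

# 720x480 NTSC DV with 'thin' (narrow) and 'fat' (wide) pixels:
print(round(display_aspect_ratio(720, 480, 10 / 11), 3))   # ~1.364, close to 4:3  (1.333)
print(round(display_aspect_ratio(720, 480, 40 / 33), 3))   # ~1.818, close to 16:9 (1.778)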
The popularity of viewing video on mobile phones has led to the growth of vertical video. Mary Meeker, a partner at Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, highlighted the growth of vertical video viewing in her 2015 Internet Trends Report – growing from 5% of video viewing in 2010 to 29% in 2015.
Vertical video ads like Snapchat's are watched in their entirety nine times more often than landscape video ads. The format was rapidly taken up by leading social platforms and media publishers such as Mashable. In October 2015, the video platform Grabyo launched technology to help video publishers adapt horizontal 16:9 video into mobile formats such as vertical and square.
Color space and bits per pixel
The color model describes the video color representation. YIQ was used in NTSC television. It corresponds closely to the YUV scheme used in NTSC and PAL television and the YDbDr scheme used by SECAM television.
The number of distinct colors a pixel can represent depends on the number of bits per pixel (bpp). A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, 4:2:0/4:1:1).
Because the human eye is less sensitive to details in color than in brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block and that same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed; it reduces the number of distinct points at which the color changes.
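The savings quoted above can be checked with a little arithmetic. The sketch below is illustrative only (the helper name frame_bytes is invented); it assumes 8 bits per sample and counts one luma sample per pixel plus the chroma samples each scheme keeps.

def frame_bytes(width, height, scheme):
    """Approximate bytes per frame at 8 bits/sample for common subsampling schemes."""
    luma = width * height                       # one Y sample per pixel, always kept
    chroma_per_pixel = {"4:4:4": 2.0,           # full Cb and Cr planes
                        "4:2:2": 1.0,           # chroma halved horizontally  (50% chroma saving)
                        "4:2:0": 0.5}[scheme]   # chroma halved both ways     (75% chroma saving)
    return int(luma * (1 + chroma_per_pixel))

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, frame_bytes(1920, 1080, scheme), "bytes per frame")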
Video quality
Video quality can be measured with formal metrics like PSNR or with subjective video quality using expert observation.
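As an example of such a formal metric, PSNR compares a processed frame with its reference: PSNR = 10 * log10(MAX^2 / MSE), where MAX is the peak sample value and MSE the mean squared error. The sketch below is a minimal illustration for 8-bit samples, not a broadcast-grade implementation; the function name psnr and the sample values are invented.

import math

def psnr(reference, processed, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length lists of 8-bit samples."""
    mse = sum((r - p) ** 2 for r, p in zip(reference, processed)) / len(reference)
    if mse == 0:
        return float("inf")                  # identical signals
    return 10 * math.log10(max_value ** 2 / mse)

ref = [10, 20, 30, 40, 50]
noisy = [12, 19, 33, 38, 51]
print(round(psnr(ref, noisy), 2), "dB")      # higher values mean less distortion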
The subjective video quality of a video processing system is evaluated as follows:
- Choose the video sequences (the SRC) to use for testing.
- Choose the settings of the system to evaluate (the HRC).
- Choose a test method for how to present video sequences to experts and to collect their ratings.
- Invite a sufficient number of experts, preferably not fewer than 15.
- Carry out testing.
- Calculate the average marks for each HRC based on the experts' ratings.
Many subjective video quality methods are described in the ITU-T recommendation BT.500.
One standardized method is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying".
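A minimal sketch of the final step of such a test, averaging the experts' marks for each system configuration (HRC), assuming the usual five-grade impairment scale (5 = imperceptible, 1 = very annoying); the HRC names and ratings below are invented for illustration.

# Invented ratings from several experts for two hypothetical HRCs (system settings).
ratings = {
    "HRC-A (low bitrate)":  [2, 3, 2, 3, 2, 3, 2],
    "HRC-B (high bitrate)": [4, 5, 4, 4, 5, 5, 4],
}

for hrc, marks in ratings.items():
    mean_score = sum(marks) / len(marks)     # the 'average mark' for this HRC
    print(f"{hrc}: mean opinion score = {mean_score:.2f}")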
Video compression method (digital only)
Main article: Video compression
Uncompressed video delivers maximum quality, but with a very high data rate. A variety of methods are used to compress video streams, with the most effective ones using a Group Of Pictures (GOP) to reduce spatial and temporal redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression.
Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression, including motion compensation and other techniques. The most common modern standards are MPEG-2, used for DVD, Blu-ray and satellite television, and MPEG-4, used for AVCHD, Mobile phones (3GP) and Internet.
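A minimal sketch of the temporal-redundancy idea behind interframe compression: a toy delta coder, not any real MPEG algorithm, that stores the first frame in full and then only the pixels that changed in each following frame. The function names and the tiny 4-pixel frames are invented for the example.

def delta_encode(frames):
    """Encode equal-sized frames as a key frame plus per-frame lists of changed pixels."""
    key = frames[0]
    deltas = []
    for prev, cur in zip(frames, frames[1:]):
        changed = [(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v]
        deltas.append(changed)               # only positions whose value changed
    return key, deltas

def delta_decode(key, deltas):
    frames = [list(key)]
    for changed in deltas:
        frame = list(frames[-1])
        for i, v in changed:
            frame[i] = v
        frames.append(frame)
    return frames

video = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 9, 9, 0]]     # three tiny 4-pixel 'frames'
key, deltas = delta_encode(video)
assert delta_decode(key, deltas) == video
print("changes per frame:", [len(d) for d in deltas])  # [1, 1] instead of 4 pixels each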
Stereoscopic
Stereoscopic video can be created using several different methods:
- Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
- One channel with two overlaid color-coded layers. This left and right layer technique is occasionally used for network broadcast, or recent "anaglyph" releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content (a small sketch of this color-coded combination follows this list).
- One channel with alternating left and right frames for the corresponding eye, using LCD shutter glasses that read the frame sync from the VGA Display Data Channel to alternately block the image to each eye, so the appropriate eye sees the correct frame. This method is most common in computer virtual reality applications such as in a Cave Automatic Virtual Environment, but reduces effective video framerate to one-half of normal (for example, from 120 Hz to 60 Hz).
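The sketch below illustrates the color-coded (anaglyph) layering mentioned in the second bullet above: the red channel is taken from the left-eye image and the green and blue channels from the right-eye image, so red/cyan glasses separate the two views. Pixels are plain (R, G, B) tuples, the function name anaglyph and the pixel values are invented, and no imaging library is assumed.

def anaglyph(left_pixels, right_pixels):
    """Combine left/right eye images (lists of (R, G, B) tuples) into one red/cyan image."""
    return [(l[0], r[1], r[2]) for l, r in zip(left_pixels, right_pixels)]

# Two tiny 2-pixel 'images' (invented values).
left = [(200, 50, 50), (180, 60, 60)]
right = [(190, 55, 52), (170, 65, 58)]
print(anaglyph(left, right))   # red from the left view, green/blue from the right view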
Blu-ray Discs greatly improve the sharpness and detail of the two-color 3D effect in color-coded stereo programs. See articles Stereoscopy and 3-D film.
Formats:
Different layers of video transmission and storage each provide their own set of formats to choose from.
For transmission, there is a physical connector and signal protocol ("video connection standard" below). A given physical link can carry certain "display standards" that specify a particular refresh rate, display resolution, and color space.
Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video compression format, of which a number are available.
Analog video
Analog video is a video signal transferred by an analog signal. An analog color video signal contains the luminance (brightness, Y) and chrominance (color, C) of an analog television image. When combined into one channel, it is called composite video, as is the case with NTSC, PAL and SECAM, among others.
Analog video may be carried in separate channels, as in two channel S-Video (YC) and multi-channel component video formats. Analog video is used in both consumer and professional television production applications.
Digital video
Digital video signal formats with higher quality have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort, though analog video interfaces are still used and widely available. Various adapters and variants exist.
Click on any of the following hyperlinks for more about Video:
- Transport medium
- Video connectors, cables, and signal standards
- Video display standards
- Digital television
- Analog television
- Computer displays
- Recording formats before video tape
- Analog tape formats
- Digital tape formats
- Optical disc storage formats
- Digital encoding formats
- Standards
- See also:
- General
- Video format
- Video usage
- Video screen recording
- Programmer's Guide to Video Systems: in-depth technical info on 480i, 576i, 1080i, 720p, etc.
- Format Descriptions for Moving Images - www.digitalpreservation.gov
Photography including a Timeline of Photography Technology
YouTube Video: How to Take The Perfect Picture of Yourself
Pictured: Clockwise from Upper Left: Kodak Instamatic 500, Polaroid Land Camera 360 Electronic Flash, Google Pixel XL Smartphone, Sony DSC-W830 20.1-Megapixel Digital Camera
Click here for a Timeline of Photography Technology.
Photography is the science, art, application and practice of creating durable images by recording light or other electromagnetic radiation, either electronically by means of an image sensor, or chemically by means of a light-sensitive material such as photographic film.
Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. With an electronic image sensor, this produces an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.
The result with photographic emulsion is an invisible latent image, which is later chemically "developed" into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.
Photography is employed in many fields of science, manufacturing (e.g., photolithography), and business, as well as its more direct uses for art, film and video production, recreational purposes, hobby, and mass communication.
Click on any of the following hyperlinks for more about the Technology of Photography:
- History
- Photographic techniques
- Modes of production
- Social and cultural implications
- Law
- See also:
- Outline of photography
- Science of photography
- List of photographers
- Image Editing
- Photolab and minilab
- World History of Photography From The History of Art.
- Daguerreotype to Digital: A Brief History of the Photographic Process From the State Library & Archives of Florida.
3D Printing Technology:
YouTube Video: Can a 3D printer make guns?
Pictured below: 3D Printing: Innovation's New Lifeblood
- Five Myths about 3-D Printing, by the Washington Post, August 10, 2018
- 3D Printing, including 3D Printed Firearms
- Applications of 3D Printing
[I added this article above the 3D printing text that follows because it reflects a recent vision of the ultimate application of 3D printing, potentially on a much larger scale than originally anticipated.]
Washington Post, August 10, 2018, opinion piece by Richard A. D'Aveni, the Bakala Professor of Strategy at Dartmouth's Tuck School of Business and author of the forthcoming book "The Pan-Industrial Revolution: How New Manufacturing Titans Will Transform the World."
Five Myths about 3-D Printing by the Washington Post.
"Like any fast-developing technology, 3-D printing, described more technically as “additive manufacturing,” is susceptible to a variety of misconceptions. While recent debates have revolved around 3-D-printed firearms, most of the practical issues in the field come down to the emergence of new manufacturing techniques. The resulting culture of innovation has led to some persistent myths. Here are five of the most common.
MYTH NO. 1: 3-D printing is slow and expensive:
Early 3-D printing was indeed agonizingly slow, requiring pricey equipment, select materials and tedious trial-and-error fiddling to make improvements. In 2015, Quartz magazine said that 3-D printers are “still slow, inaccurate and generally only print one material at a time. And that’s not going to change any time soon.”
When the stock price of the leading printer manufacturers was free-falling in 2016, Inc. magazine announced that 3-D printing was “dying,” mostly because people were realizing the high cost of printer feedstock.
But a variety of new techniques for additive manufacturing are proving those premises wrong. Desktop Metal’s Single Pass Jetting, HP’s Multi Jet Fusion and Carbon’s Digital Light Synthesis can all make products in minutes, not hours.
Lab tests show that these printers are cost-competitive with conventional manufacturing in the tens or even hundreds of thousands of units. Many of the newest printers also use lower-price commodity materials rather than specially formulated proprietary feedstocks, so the cost is falling rapidly.
MYTH NO. 2: 3-D printers are limited to small products.
By design, 3-D printers are not large. They need an airtight build chamber to function, so most are no larger than a copy machine. Nick Allen, an early 3-D printing evangelist, once said, “Producing anything in bulk that is bigger than your fist seems to be a waste of time.”
TechRepublic warned in 2014 that 3-D printing from “plastic filament can’t make anything too sturdy,” which would further limit the size of printed objects.
But some techniques, such as Big Area Additive Manufacturing , work in the open air and can generate highly resilient pieces. They’ve been used to build products from automobiles to jet fighters. A new method for building involves a roving “printer bot” that gradually adds fast-hardening materials to carry out the construction. This spring, a Dutch company completed a pedestrian bridge using 3-D printing methods.
MYTH NO. 3: 3-D printers produce only low quality products:
As anyone who’s handled a crude 3-D-printed keychain can probably guess, the hardest part of 3-D printing is ensuring that a product looks good. When you print layer upon layer, you don’t get the smooth finish of conventional manufacturing.
“There’s no device that you’re using today that can be 3-D printed to the standard you’re going to accept as the consumer,” said Liam Casey, the chief executive of PCH International, in 2015.
The Additive Manufacturing and 3D Printing Research Group at the University of Nottingham in Britain likewise predicted that high post-printing costs, among other challenges, would help keep 3-D printing from expanding much beyond customized or highly complex parts.
But some new techniques, such as Digital Light Synthesis, can generate a high-quality finish from the start. That’s because they aren’t based on layering. The products are monolithic — they emerge smoothly from a vat of liquid , similar to the reassembled robot in the Terminator movies.
Other printer manufacturers are building automated hybrid systems that combine 3-D-printed products with conventional finishing.
If we think of quality more broadly, additive is likely to improve on conventional products.
That’s because 3-D printing can handle sophisticated internal structures and radical geometries that would be impossible via conventional manufacturing. Boeing is now installing additive support struts in its jets. They’re a good deal lighter than conventional equivalents, but they’re stronger because they have honeycomb structures that couldn’t be made before. Adidas is making running shoes with complex lattices that are firmer and better at shock absorption than conventional shoes.
MYTH NO. 4: 3-D printing will give us artificial organs:
One of the most exciting areas of additive manufacturing is bioprinting. Thousands of people die every year waiting for replacement hearts, kidneys and other organs; if we could generate artificial organs, we could eliminate a leading cause of death in the United States.
We’ve already made major advances with customized 3-D-printed prosthetics and orthodontics, and most hearing aids now come from additive manufacturing. Why not organs? A 2014 CNN article predicted that 3-D-printed organs might soon be a reality, since the machines’ “precise process can reproduce vascular systems required to make organs viable.” Smithsonian magazine likewise announced in 2015 that “Soon, Your Doctor Could Print a Human Organ on Demand.”
But scientists have yet to crack the fundamental problem of creating life. We can build a matrix that will support living tissue, and we can add a kind of “cell ink” from the recipient’s stem cells to create the tissue. But we haven’t been able to generate a microscopic capillary network to feed oxygen to this tissue.
The most promising current work focuses on artificial skin, which is of special interest to cosmetic companies looking for an unlimited supply of skin for testing new products. Skin is the easiest organ to manufacture because it’s relatively stable, but success is several years away at best. Other organs are decades away from reality: Even if we could solve the capillary problem, the cost of each organ might be prohibitive.
MYTH NO. 5: Small-scale users will dominate 3-D printing:
In his best-selling 2012 book, “Makers: The New Industrial Revolution,” Chris Anderson argued that 3-D printing would usher in a decentralized economy of people generating small quantities of products for local use from printers at home or in community workshops.
The 2017 volume “Designing Reality: How to Survive and Thrive in the Third Digital Revolution ” similarly portrays a future of self-sufficient cities supplied by community “fab labs.”
In reality, relatively few individuals have bought 3-D printers. Corporations and educational institutions have purchased the majority of them, and that trend is unlikely to change.
Over time, 3-D printing may bring about the opposite of Anderson’s vision: a world where corporate manufacturers, not ordinary civilians, are empowered by the technology. With 3-D printing, companies can make a specialized product one month, then switch to a different kind of product the next if demand falls.
General Electric’s factory in Pune, India, for example, can adjust its output of parts for medical equipment or turbines depending on demand.
As a result, companies will be able to profit from operating in multiple industries. If demand in one industry slows, the firm can shift the unused factory capacity over to making products for higher-demand industries. Eventually, we’re likely to see a new wave of diversification, leading to pan-industrial behemoths that could cover much of the manufacturing economy.
[End of Article]
___________________________________________________________________________
3D Printing
3D printing is any of various processes in which material is joined or solidified under computer control to create a three-dimensional object, with material being added together (such as liquid molecules or powder grains being fused together). 3D printing is used in both rapid prototyping and additive manufacturing (AM).
Objects can be of almost any shape or geometry and typically are produced using digital model data from a 3D model or another electronic data source such as an Additive Manufacturing File (AMF) file (usually in sequential layers).
There are many different technologies, like stereolithography (SLA) or fused deposit modeling (FDM). Thus, unlike material removed from a stock in the conventional machining process, 3D printing or AM builds a three-dimensional object from computer-aided design (CAD) model or AMF file, usually by successively adding material layer by layer.
The term "3D printing" originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads layer by layer. More recently, the term is being used in popular vernacular to encompass a wider variety of additive manufacturing techniques. United States and global technical standards use the official term additive manufacturing for this broader sense.
The umbrella term additive manufacturing (AM) gained wide currency in the 2000s, inspired by the theme of material being added together (in any of various ways). In contrast, the term subtractive manufacturing appeared as a retronym for the large family of machining processes with material removal as their common theme.
The term 3D printing still referred only to the polymer technologies in most minds, and the term AM was likelier to be used in metalworking and end use part production contexts than among polymer, inkjet, or stereolithography enthusiasts.
By the early 2010s, the terms 3D printing and additive manufacturing evolved senses in which they were alternate umbrella terms for AM technologies, one being used in popular vernacular by consumer-maker communities and the media, and the other used more formally by industrial AM end-use part producers, AM machine manufacturers, and global technical standards organizations.
Until recently, the term 3D printing has been associated with machines low-end in price or in capability. Both terms reflect that the technologies share the theme of material addition or joining throughout a 3D work envelope under automated control.
Peter Zelinski, the editor-in-chief of Additive Manufacturing magazine, pointed out in 2017 that the terms are still often synonymous in casual usage but that some manufacturing industry experts are increasingly making a sense distinction whereby AM comprises 3D printing plus other technologies or other aspects of a manufacturing process.
Other terms that have been used as AM synonyms or hypernyms have included desktop manufacturing, rapid manufacturing (as the logical production-level successor to rapid prototyping), and on-demand manufacturing (which echoes on-demand printing in the 2D sense of printing).
That such application of the adjectives rapid and on-demand to the noun manufacturing was novel in the 2000s reveals the prevailing mental model of the long industrial era in which almost all production manufacturing involved long lead times for laborious tooling development.
Today, the term subtractive has not replaced the term machining, instead complementing it when a term that covers any removal method is needed. Agile tooling is the use of modular means to design tooling that is produced by additive manufacturing or 3D printing methods to enable quick prototyping and responses to tooling and fixture needs.
Agile tooling uses a cost effective and high quality method to quickly respond to customer and market needs, and it can be used in hydro-forming, stamping, injection molding and other manufacturing processes.
Click on any of the following blue hyperlinks for more about 3D Printing:
3D printed firearms:
In 2012, the U.S.-based group Defense Distributed disclosed plans to design a working plastic gun that could be downloaded and reproduced by anybody with a 3D printer.
Defense Distributed has also designed a 3D printable AR-15 type rifle lower receiver (capable of lasting more than 650 rounds) and a variety of magazines, including for the AK-47.
In May 2013, Defense Distributed completed design of the first working blueprint to produce a plastic gun with a 3D printer. The United States Department of State demanded removal of the instructions from the Defense Distributed website, deeming them a violation of the Arms Export Control Act.
In 2015, Defense Distributed founder Cody Wilson sued the United States government on free speech grounds and in 2018 the Department of Justice settled, acknowledging Wilson's right to publish instructions for the production of 3D printed firearms.
In 2013 a Texas company, Solid Concepts, demonstrated a 3D printed version of an M1911 pistol made of metal, using an industrial 3D printer.
Effect on Gun Control:
After Defense Distributed released their plans, questions were raised regarding the effects that 3D printing and widespread consumer-level CNC machining may have on gun control effectiveness.
The U.S. Department of Homeland Security and the Joint Regional Intelligence Center released a memo stating "Significant advances in three-dimensional (3D) printing capabilities, availability of free digital 3D printer files for firearms components, and difficulty regulating file sharing may present public safety risks from unqualified gun seekers who obtain or manufacture 3D printed guns," and that "proposed legislation to ban 3D printing of weapons may deter, but cannot completely prevent their production.
Even if the practice is prohibited by new legislation, online distribution of these digital files will be as difficult to control as any other illegally traded music, movie or software files."
Internationally, where gun controls are generally tighter than in the United States, some commentators have said the impact may be more strongly felt, as alternative firearms are not as easily obtainable.
European officials have noted that producing a 3D printed gun would be illegal under their gun control laws, and that criminals have access to other sources of weapons, but noted that as the technology improved the risks of an effect would increase. Downloads of the plans from the UK, Germany, Spain, and Brazil were heavy.
Attempting to restrict the distribution over the Internet of gun plans has been likened to the futility of preventing the widespread distribution of DeCSS which enabled DVD ripping. After the US government had Defense Distributed take down the plans, they were still widely available via The Pirate Bay and other file sharing sites.
Some US legislators have proposed regulations on 3D printers to prevent their use for printing guns. 3D printing advocates have suggested that such regulations would be futile, could cripple the 3D printing industry, and could infringe on free speech rights.
Legal Status in the United States:
Under the Undetectable Firearms Act any firearm that cannot be detected by a metal detector is illegal to manufacture, so legal designs for firearms such as the Liberator require a metal plate to be inserted into the printed body.
The act had a sunset provision to expire December 9, 2013. Senator Charles Schumer proposed renewing the law, and expanding the type of guns that would be prohibited.
Proposed renewals and expansions of the current Undetectable Firearms Act (H.R. 1474, S. 1149) include provisions to criminalize individual production of firearm receivers and magazines that do not include arbitrary amounts of metal, measures outside the scope of the original UFA and not extended to cover commercial manufacture.
On December 3, 2013, the United States House of Representatives passed the bill To extend the Undetectable Firearms Act of 1988 for 10 years (H.R. 3626; 113th Congress). The bill extended the Act, but did not change any of the law's provisions.
See also:
Applications of 3D Printing:
3D printing has many applications. In manufacturing, medicine, architecture, and custom art and design. Some people use 3D printers to create more 3D printers. In the current scenario, 3D printing process has been used in manufacturing, medical, industry and sociocultural sectors which facilitate 3D printing to become successful commercial technology.
Click on any of the following blue hyperlinks for more about each 3D Application:
Washington Post, August 10, 2018, opinion piece by Richard A. D'Aveni, the Bakala professor of strategy at Dartmouth's Tuck School of Business. He is the author of the forthcoming book "The Pan-Industrial Revolution: How New Manufacturing Titans Will Transform the World."
Five Myths about 3-D Printing by the Washington Post.
"Like any fast-developing technology, 3-D printing, described more technically as “additive manufacturing,” is susceptible to a variety of misconceptions. While recent debates have revolved around 3-D-printed firearms, most of the practical issues in the field come down to the emergence of new manufacturing techniques. The resulting culture of innovation has led to some persistent myths. Here are five of the most common.
MYTH NO. 1: 3-D printing is slow and expensive:
Early 3-D printing was indeed agonizingly slow, requiring pricey equipment, select materials and tedious trial-and-error fiddling to make improvements. In 2015, Quartz magazine said that 3-D printers are “still slow, inaccurate and generally only print one material at a time. And that’s not going to change any time soon.”
When the stock price of the leading printer manufacturers was free-falling in 2016, Inc. magazine announced that 3-D printing was “dying,” mostly because people were realizing the high cost of printer feedstock.
But a variety of new techniques for additive manufacturing are proving those premises wrong. Desktop Metal’s Single Pass Jetting, HP’s Multi Jet Fusion and Carbon’s Digital Light Synthesis can all make products in minutes, not hours.
Lab tests show that these printers are cost-competitive with conventional manufacturing in the tens or even hundreds of thousands of units. Many of the newest printers also use lower-price commodity materials rather than specially formulated proprietary feedstocks, so the cost is falling rapidly.
MYTH NO. 2: 3-D printers are limited to small products.
By design, 3-D printers are not large. They need an airtight build chamber to function, so most are no larger than a copy machine. Nick Allen, an early 3-D printing evangelist, once said, “Producing anything in bulk that is bigger than your fist seems to be a waste of time.”
TechRepublic warned in 2014 that 3-D printing from “plastic filament can’t make anything too sturdy,” which would further limit the size of printed objects.
But some techniques, such as Big Area Additive Manufacturing, work in the open air and can generate highly resilient pieces. They’ve been used to build products from automobiles to jet fighters. A new method for building involves a roving “printer bot” that gradually adds fast-hardening materials to carry out the construction. This spring, a Dutch company completed a pedestrian bridge using 3-D printing methods.
MYTH NO. 3: 3-D printers produce only low quality products:
As anyone who’s handled a crude 3-D-printed keychain can probably guess, the hardest part of 3-D printing is ensuring that a product looks good. When you print layer upon layer, you don’t get the smooth finish of conventional manufacturing.
“There’s no device that you’re using today that can be 3-D printed to the standard you’re going to accept as the consumer,” said Liam Casey, the chief executive of PCH International, in 2015.
The Additive Manufacturing and 3D Printing Research Group at the University of Nottingham in Britain likewise predicted that high post-printing costs, among other challenges, would help keep 3-D printing from expanding much beyond customized or highly complex parts.
But some new techniques, such as Digital Light Synthesis, can generate a high-quality finish from the start. That’s because they aren’t based on layering. The products are monolithic — they emerge smoothly from a vat of liquid, similar to the reassembled robot in the Terminator movies.
Other printer manufacturers are building automated hybrid systems that combine 3-D-printed products with conventional finishing.
If we think of quality more broadly, additive is likely to improve on conventional products.
That’s because 3-D printing can handle sophisticated internal structures and radical geometries that would be impossible via conventional manufacturing. Boeing is now installing additive support struts in its jets. They’re a good deal lighter than conventional equivalents, but they’re stronger because they have honeycomb structures that couldn’t be made before. Adidas is making running shoes with complex lattices that are firmer and better at shock absorption than conventional shoes.
MYTH NO. 4: 3-D printing will give us artificial organs:
One of the most exciting areas of additive manufacturing is bioprinting. Thousands of people die every year waiting for replacement hearts, kidneys and other organs; if we could generate artificial organs, we could eliminate a leading cause of death in the United States.
We’ve already made major advances with customized 3-D-printed prosthetics and orthodontics, and most hearing aids now come from additive manufacturing. Why not organs? A 2014 CNN article predicted that 3-D-printed organs might soon be a reality, since the machines’ “precise process can reproduce vascular systems required to make organs viable.” Smithsonian magazine likewise announced in 2015 that “Soon, Your Doctor Could Print a Human Organ on Demand.”
But scientists have yet to crack the fundamental problem of creating life. We can build a matrix that will support living tissue, and we can add a kind of “cell ink” from the recipient’s stem cells to create the tissue. But we haven’t been able to generate a microscopic capillary network to feed oxygen to this tissue.
The most promising current work focuses on artificial skin, which is of special interest to cosmetic companies looking for an unlimited supply of skin for testing new products. Skin is the easiest organ to manufacture because it’s relatively stable, but success is several years away at best. Other organs are decades away from reality: Even if we could solve the capillary problem, the cost of each organ might be prohibitive.
MYTH NO. 5: Small-scale users will dominate 3-D printing:
In his best-selling 2012 book, “Makers: The New Industrial Revolution,” Chris Anderson argued that 3-D printing would usher in a decentralized economy of people generating small quantities of products for local use from printers at home or in community workshops.
The 2017 volume “Designing Reality: How to Survive and Thrive in the Third Digital Revolution” similarly portrays a future of self-sufficient cities supplied by community “fab labs.”
In reality, relatively few individuals have bought 3-D printers. Corporations and educational institutions have purchased the majority of them, and that trend is unlikely to change.
Over time, 3-D printing may bring about the opposite of Anderson’s vision: a world where corporate manufacturers, not ordinary civilians, are empowered by the technology. With 3-D printing, companies can make a specialized product one month, then switch to a different kind of product the next if demand falls.
General Electric’s factory in Pune, India, for example, can adjust its output of parts for medical equipment or turbines depending on demand.
As a result, companies will be able to profit from operating in multiple industries. If demand in one industry slows, the firm can shift the unused factory capacity over to making products for higher-demand industries. Eventually, we’re likely to see a new wave of diversification, leading to pan-industrial behemoths that could cover much of the manufacturing economy.
[End of Article]
___________________________________________________________________________
3D Printing
3D printing is any of various processes in which material is joined or solidified under computer control to create a three-dimensional object, with material being added together (such as liquid molecules or powder grains being fused together). 3D printing is used in both rapid prototyping and additive manufacturing (AM).
Objects can be of almost any shape or geometry and typically are produced using digital model data from a 3D model or another electronic data source such as an Additive Manufacturing File (AMF) file (usually in sequential layers).
There are many different technologies, such as stereolithography (SLA) and fused deposition modeling (FDM). Thus, unlike conventional machining, which removes material from a stock, 3D printing or AM builds a three-dimensional object from a computer-aided design (CAD) model or AMF file, usually by successively adding material layer by layer.
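To make the layer-by-layer principle concrete, here is a minimal Python sketch added for illustration; it is not taken from any particular slicer, and the part geometry (a simple square perimeter), the 10 mm part height, and the 0.2 mm layer height are assumed values. It divides the part's height into layers and emits simplified G-code-style moves for each layer.

def slice_square_part(part_height_mm=10.0, layer_height_mm=0.2, side_mm=20.0):
    """Return simplified G-code-style commands for a square-perimeter part."""
    commands = []
    layers = int(round(part_height_mm / layer_height_mm))
    for i in range(1, layers + 1):
        z = i * layer_height_mm
        commands.append(f"G1 Z{z:.2f} ; move up to layer {i}")
        # Trace the four sides of the square perimeter at this layer height.
        corners = [(0, 0), (side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]
        for x, y in corners:
            commands.append(f"G1 X{x:.2f} Y{y:.2f} E1 ; extrude along the perimeter")
    return commands

for line in slice_square_part()[:8]:  # show the first few commands
    print(line)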
The term "3D printing" originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads layer by layer. More recently, the term is being used in popular vernacular to encompass a wider variety of additive manufacturing techniques. United States and global technical standards use the official term additive manufacturing for this broader sense.
The umbrella term additive manufacturing (AM) gained wide currency in the 2000s, inspired by the theme of material being added together (in any of various ways). In contrast, the term subtractive manufacturing appeared as a retronym for the large family of machining processes with material removal as their common theme.
For a time, the term 3D printing still referred only to polymer technologies in most minds, and the term AM was more likely to be used in metalworking and end-use part production contexts than among polymer, inkjet, or stereolithography enthusiasts.
By the early 2010s, the terms 3D printing and additive manufacturing evolved senses in which they were alternate umbrella terms for AM technologies, one being used in popular vernacular by consumer-maker communities and the media, and the other used more formally by industrial AM end-use part producers, AM machine manufacturers, and global technical standards organizations.
Until recently, the term 3D printing was associated with machines that were low-end in price or capability. Both terms reflect that the technologies share the theme of material addition or joining throughout a 3D work envelope under automated control.
Peter Zelinski, the editor-in-chief of Additive Manufacturing magazine, pointed out in 2017 that the terms are still often synonymous in casual usage but that some manufacturing industry experts are increasingly making a sense distinction whereby AM comprises 3D printing plus other technologies or other aspects of a manufacturing process.
Other terms that have been used as AM synonyms or hypernyms have included desktop manufacturing, rapid manufacturing (as the logical production-level successor to rapid prototyping), and on-demand manufacturing (which echoes on-demand printing in the 2D sense of printing).
That such application of the adjectives rapid and on-demand to the noun manufacturing was novel in the 2000s reveals the prevailing mental model of the long industrial era in which almost all production manufacturing involved long lead times for laborious tooling development.
Today, the term subtractive has not replaced the term machining, instead complementing it when a term that covers any removal method is needed. Agile tooling is the use of modular means to design tooling that is produced by additive manufacturing or 3D printing methods to enable quick prototyping and responses to tooling and fixture needs.
Agile tooling uses a cost-effective and high-quality method to quickly respond to customer and market needs, and it can be used in hydro-forming, stamping, injection molding and other manufacturing processes.
Click on any of the following blue hyperlinks for more about 3D Printing:
- History
- General principles
- Processes and printers
- Applications
- Legal aspects
- Health and safety
- Impact
- See also:
- 3D bioprinting
- 3D Manufacturing Format
- Actuator
- Additive Manufacturing File Format
- AstroPrint
- Cloud manufacturing
- Computer numeric control
- Fusion3
- Laser cutting
- Limbitless Solutions
- List of 3D printer manufacturers
- List of common 3D test models
- List of emerging technologies
- List of notable 3D printed weapons and parts
- Magnetically assisted slip casting
- MakerBot Industries
- Milling center
- Organ-on-a-chip
- Self-replicating machine
- Ultimaker
- Volumetric printing
- 3D printing – Wikipedia book
- 3D Printing White Papers: Expert insights on additive manufacturing and 3D printing
3D printed firearms:
In 2012, the U.S.-based group Defense Distributed disclosed plans to design a working plastic gun that could be downloaded and reproduced by anybody with a 3D printer.
Defense Distributed has also designed a 3D printable AR-15 type rifle lower receiver (capable of lasting more than 650 rounds) and a variety of magazines, including for the AK-47.
In May 2013, Defense Distributed completed design of the first working blueprint to produce a plastic gun with a 3D printer. The United States Department of State demanded removal of the instructions from the Defense Distributed website, deeming them a violation of the Arms Export Control Act.
In 2015, Defense Distributed founder Cody Wilson sued the United States government on free speech grounds and in 2018 the Department of Justice settled, acknowledging Wilson's right to publish instructions for the production of 3D printed firearms.
In 2013 a Texas company, Solid Concepts, demonstrated a 3D printed version of an M1911 pistol made of metal, using an industrial 3D printer.
Effect on Gun Control:
After Defense Distributed released their plans, questions were raised regarding the effects that 3D printing and widespread consumer-level CNC machining may have on gun control effectiveness.
The U.S. Department of Homeland Security and the Joint Regional Intelligence Center released a memo stating "Significant advances in three-dimensional (3D) printing capabilities, availability of free digital 3D printer files for firearms components, and difficulty regulating file sharing may present public safety risks from unqualified gun seekers who obtain or manufacture 3D printed guns," and that "proposed legislation to ban 3D printing of weapons may deter, but cannot completely prevent their production.
Even if the practice is prohibited by new legislation, online distribution of these digital files will be as difficult to control as any other illegally traded music, movie or software files."
Internationally, where gun controls are generally tighter than in the United States, some commentators have said the impact may be more strongly felt, as alternative firearms are not as easily obtainable.
European officials have noted that producing a 3D printed gun would be illegal under their gun control laws, and that criminals have access to other sources of weapons, but noted that as the technology improved the risks of an effect would increase. Downloads of the plans from the UK, Germany, Spain, and Brazil were heavy.
Attempting to restrict the distribution over the Internet of gun plans has been likened to the futility of preventing the widespread distribution of DeCSS which enabled DVD ripping. After the US government had Defense Distributed take down the plans, they were still widely available via The Pirate Bay and other file sharing sites.
Some US legislators have proposed regulations on 3D printers to prevent their use for printing guns. 3D printing advocates have suggested that such regulations would be futile, could cripple the 3D printing industry, and could infringe on free speech rights.
Legal Status in the United States:
Under the Undetectable Firearms Act any firearm that cannot be detected by a metal detector is illegal to manufacture, so legal designs for firearms such as the Liberator require a metal plate to be inserted into the printed body.
The act had a sunset provision to expire December 9, 2013. Senator Charles Schumer proposed renewing the law, and expanding the type of guns that would be prohibited.
Proposed renewals and expansions of the current Undetectable Firearms Act (H.R. 1474, S. 1149) include provisions to criminalize individual production of firearm receivers and magazines that do not include arbitrary amounts of metal, measures outside the scope of the original UFA and not extended to cover commercial manufacture.
On December 3, 2013, the United States House of Representatives passed the bill To extend the Undetectable Firearms Act of 1988 for 10 years (H.R. 3626; 113th Congress). The bill extended the Act, but did not change any of the law's provisions.
See also:
- Defense Distributed
- Ghost gun
- Gun control
- Gun politics in the United States
- Improvised firearm
- List of 3D printed weapons and parts
Applications of 3D Printing:
3D printing has many applications in manufacturing, medicine, architecture, and custom art and design; some people even use 3D printers to create more 3D printers. The 3D printing process is now used across the manufacturing, medical, industrial, and sociocultural sectors, which has helped it become a successful commercial technology.
Click on any of the following blue hyperlinks for more about each 3D Application:
Microwave Technology including Microwave Burn
Pictured below: New Innovative Microwave Technology for Processing Timber
Microwaves are a form of electromagnetic radiation with wavelengths ranging from about one meter to one millimeter; with frequencies between 300 MHz (1 m) and 300 GHz (1 mm).
Different sources define different frequency ranges as microwaves; the above broad definition includes both UHF and EHF (millimeter wave) bands.
A more common definition in radio engineering is the range between 1 and 100 GHz (wavelengths between 0.3 m and 3 mm). In all cases, microwaves include the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum.
Frequencies in the microwave range are often referred to by their IEEE radar band designations: S, C, X, Ku, K, or Ka band, or by similar NATO or EU designations.
The prefix micro- in microwave is not meant to suggest a wavelength in the micrometer range. Rather, it indicates that microwaves are "small" (having shorter wavelengths), compared to the radio waves used prior to microwave technology. The boundaries between far infrared, terahertz radiation, microwaves, and ultra-high-frequency radio waves are fairly arbitrary and are used variously between different fields of study.
Microwaves travel by line-of-sight; unlike lower frequency radio waves they do not diffract around hills, follow the earth's surface as ground waves, or reflect from the ionosphere, so terrestrial microwave communication links are limited by the visual horizon to about 40 miles (64 km). At the high end of the band they are absorbed by gases in the atmosphere, limiting practical communication distances to around a kilometer.
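As a quick check on the figures above, the relation between frequency and wavelength is wavelength = c / frequency. The small Python sketch below, added for illustration, applies that relation to the band edges quoted in this article.

C = 299_792_458.0  # speed of light in metres per second

def wavelength_m(frequency_hz: float) -> float:
    # Free-space wavelength: lambda = c / f
    return C / frequency_hz

for label, f_hz in [("300 MHz", 300e6), ("3 GHz", 3e9), ("30 GHz", 30e9), ("300 GHz", 300e9)]:
    print(f"{label:>8}: wavelength ~ {wavelength_m(f_hz) * 100:.2f} cm")
# 300 MHz comes out near 100 cm (1 m) and 300 GHz near 0.1 cm (1 mm),
# matching the broad definition above; 3 to 30 GHz (10 cm down to 1 cm) is the SHF band.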
Microwaves are widely used in modern technology, for example in point-to-point communication links:
- wireless networks,
- microwave radio relay networks,
- radar,
- satellite and spacecraft communication,
- medical diathermy and cancer treatment,
- remote sensing,
- radio astronomy,
- particle accelerators,
- spectroscopy,
- industrial heating,
- collision avoidance systems,
- garage door openers
- and keyless entry systems,
- and for cooking food in microwave ovens.
Click on any of the following blue hyperlinks for more about Microwave Technology:
- Electromagnetic spectrum
- Propagation
- Antennas
- Design and analysis
- Microwave sources
- Microwave uses
- Microwave frequency bands
- Microwave frequency measurement
- Effects on health
- History
- See also:
- Block upconverter (BUC)
- Cosmic microwave background
- Electron cyclotron resonance
- International Microwave Power Institute
- Low-noise block converter (LNB)
- Maser
- Microwave auditory effect
- Microwave cavity
- Microwave chemistry
- Microwave radio relay
- Microwave transmission
- Rain fade
- RF switch matrix
- The Thing (listening device)
- EM Talk, Microwave Engineering Tutorials and Tools
- Millimeter Wave and Microwave Waveguide dimension chart.
Microwave burns are burn injuries caused by thermal effects of microwave radiation absorbed in a living organism.
In comparison with radiation burns caused by ionizing radiation, where the dominant mechanism of tissue damage is internal cell damage caused by free radicals, the primary damage mechanism of microwave radiation is by heat.
Microwave damage can manifest with a delay; pain or signs of skin damage can show some time after microwave exposure.
Click on any of the following blue hyperlinks for more about Microwave Burn:
- Frequency vs depth
- Tissue damage
- Injury cases
- Medical uses
- Perception thresholds
- Other concerns
- Low-level exposure
- Myths
Special Effects in TV, Movies and Other Entertainment Venues including Computer-generated Imagery (CGI)
- YouTube Video: Top Special Effects Software Available 2018
- YouTube Video: How To Get Started in Visual Effects
- YouTube Video: Top 10 Landmark CGI Movie Effects (by WatchMojo)
Special effects (often abbreviated as SFX, SPFX, or simply FX) are illusions or visual tricks used in the film, television, theater, video game and simulator industries to simulate the imagined events in a story or virtual world.
Special effects are traditionally divided into the categories of mechanical effects and optical effects. With the emergence of digital film-making, a distinction between special effects and visual effects has grown, with the latter referring to digital post-production and "special effects" referring to mechanical and optical effects.
Mechanical effects (also called practical or physical effects) are usually accomplished during the live-action shooting. This includes the use of mechanized props, scenery, scale models, animatronics, pyrotechnics and atmospheric effects: creating physical wind, rain, fog, snow, clouds, making a car appear to drive by itself and blowing up a building, etc.
Mechanical effects are also often incorporated into set design and makeup. For example, a set may be built with break-away doors or walls to enhance a fight scene, or prosthetic makeup can be used to make an actor look like a non-human creature.
Optical effects (also called photographic effects) are techniques in which images or film frames are created photographically, either "in-camera" using multiple exposure, mattes or the Schüfftan process or in post-production using an optical printer. An optical effect might be used to place actors or sets against a different background.
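The digital counterpart of placing actors against a different background is matte (or alpha) compositing. The following Python/NumPy sketch is a generic illustration added here for clarity, not a description of any specific studio pipeline; the foreground plate, background plate, and matte arrays are assumed inputs.

import numpy as np

def composite(foreground: np.ndarray, background: np.ndarray, matte: np.ndarray) -> np.ndarray:
    # Where the matte is 1 the foreground shows; where it is 0 the background
    # shows; intermediate values blend the two plates.
    return foreground * matte[..., None] + background * (1.0 - matte[..., None])

# Tiny synthetic example: a 2x2 green "actor" plate over a blue "sky" plate.
fg = np.tile(np.array([0.0, 1.0, 0.0]), (2, 2, 1))
bg = np.tile(np.array([0.0, 0.0, 1.0]), (2, 2, 1))
matte = np.array([[1.0, 0.5], [0.0, 1.0]])
print(composite(fg, bg, matte))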
Since the 1990s, computer-generated imagery (CGI: See next topic below) has come to the forefront of special effects technologies. It gives filmmakers greater control, and allows many effects to be accomplished more safely and convincingly and—as technology improves—at lower costs. As a result, many optical and mechanical effects techniques have been superseded by CGI.
Click on any of the following blue hyperlinks for more about Special Effects:
- Developmental history
- Planning and use
- Live special effects
- Mechanical effects
- Visual special effects techniques
- Notable special effects companies
- Notable special effects directors
- See also:
Computer-generated imagery (CGI) is the application of computer graphics to create or contribute to images in art, printed media, video games, films, television programs, shorts, commercials, videos, and simulators.
The visual scenes may be dynamic or static and may be two-dimensional (2D), though the term "CGI" is most commonly used to refer to 3D computer graphics used for creating scenes or special effects in films and television.
Additionally, the use of 2D CGI is often mistakenly referred to as "traditional animation", most often in the case when dedicated animation software such as Adobe Flash or Toon Boom is not used or the CGI is hand drawn using a tablet and mouse.
The term 'CGI animation' refers to dynamic CGI rendered as a movie. The term virtual world refers to agent-based, interactive environments. Computer graphics software is used to make computer-generated imagery for films, etc.
Availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional-grade films, games, and fine art from their home computers. This has brought about an Internet subculture with its own set of global celebrities, clichés, and technical vocabulary.
The evolution of CGI led to the emergence of virtual cinematography in the 1990s where runs of the simulated camera are not constrained by the laws of physics.
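As a toy illustration of how a CGI image can be generated purely from a mathematical scene description, the Python sketch below ray-traces a single sphere and prints it as ASCII shading; the sphere, camera, and light are invented for the example and are not drawn from the article.

import math

WIDTH, HEIGHT = 60, 30                       # output size in characters
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0
TO_LIGHT = (-0.577, -0.577, -0.577)          # unit vector from the surface toward the light
CHARS = " .:-=+*#%@"                         # dark to bright

def shade(x, y):
    # Cast a ray from the origin through the point (x, y, 1) on the image plane.
    length = math.sqrt(x * x + y * y + 1.0)
    dx, dy, dz = x / length, y / length, 1.0 / length
    cx, cy, cz = SPHERE_CENTER
    # Solve |t*d - c|^2 = r^2 for t (the camera sits at the origin).
    b = -2.0 * (dx * cx + dy * cy + dz * cz)
    c = cx * cx + cy * cy + cz * cz - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return " "                           # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0         # nearest intersection
    px, py, pz = dx * t, dy * t, dz * t      # hit point on the sphere
    nx = (px - cx) / SPHERE_RADIUS           # surface normal components
    ny = (py - cy) / SPHERE_RADIUS
    nz = (pz - cz) / SPHERE_RADIUS
    brightness = max(0.0, nx * TO_LIGHT[0] + ny * TO_LIGHT[1] + nz * TO_LIGHT[2])
    return CHARS[int(brightness * (len(CHARS) - 1))]

for row in range(HEIGHT):                    # terminal characters are not square,
    y = (row / (HEIGHT - 1)) * 2.0 - 1.0     # so the sphere will look slightly stretched
    print("".join(shade((col / (WIDTH - 1)) * 2.0 - 1.0, y) for col in range(WIDTH)))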
Click on any of the following blue hyperlinks for more about Computer-Generated Imagery:
- Static images and landscapes
- Architectural scenes
- Anatomical models
- Generating cloth and skin images
- Interactive simulation and visualization
- Computer animation
- Virtual worlds
- In courtrooms
- See also:
- 3D modeling
- Cinema Research Corporation
- Anime Studio
- Animation database
- List of computer-animated films
- Digital image
- Parallel rendering
- Photoshop is the industry standard commercial digital photo editing tool. Its FOSS counterpart is GIMP.
- Poser DIY CGI optimized for soft models
- Ray tracing (graphics)
- Real-time computer graphics
- Shader
- Virtual human
- Virtual Physiological Human
- A Critical History of Computer Graphics and Animation – a course page at Ohio State University that includes all the course materials and extensive supplementary materials (videos, articles, links).
- CG101: A Computer Graphics Industry Reference (ISBN 073570046X) – unique and personal histories of early computer graphics production, plus a comprehensive foundation of the industry for all reading levels.
- F/X Gods, by Anne Thompson, Wired, February 2005.
- "History Gets A Computer Graphics Make-Over" Tayfun King, Click, BBC World News (2004-11-19)
- NIH Visible Human Gallery
Michael Dell, Founder of Dell Technologies
- YouTube Video: Michael Dell's Top 10 Rules For Success
- YouTube Video: Michael Dell addresses Dell's future | Fortune
- YouTube Video 15 Things You Didn't Know About Michael S. Dell
Michael Dell on Going Private, Company Management, and the Future of the Computer (Inc. Magazine)
Business Insider sat down with the famed entrepreneur at the World Economic Forum in Davos this week:
"We caught up with Dell CEO Michael Dell on Friday morning at the World Economic Forum in Davos.
We had heard from one of Dell's investors a few days ago that the company was thriving as a private company. Mr. Dell enthusiastically confirmed that.
Dell says the Dell team is energized, reinvigorated, and aligned. He says it is a great relief to be private after 25 years as a public company. PCs aren't dead, he says, and Windows 10 looks cool.
Dell's debt is getting upgraded as analysts realize the company isn't toast. Dell's software, server, and services businesses are growing in double digits. And, no, Google's Chromebooks aren't going to take over the world.
Highlights:
* It is a wonderful relief to be private after 25 years as a public company. The administrative hassles and costs are much less, and you have far greater flexibility. You don't have to react to daily volatility and repricing, and there's less distraction, so you can focus on your business and team.
* Dell went public because the company needed capital--but these days plenty of capital is available in the private market. Dell went public in 1988, when Michael Dell was 23.
Over the next 25 years, the stock produced fantastic returns. But the one-two punch of the financial crisis and concerns about the "death of the PC" poleaxed Dell's stock and caused many people to write the company off for dead.
Dell thought it would be easier to retool the company in the relative quiet of the private market, and he found investors willing to provide all the capital he needed. He understands why red-hot emerging tech companies like Uber, Palantir, Facebook, and Twitter don't go public until they are very mature--they can raise all the capital they need in the private market. There's no reason to subject yourself to the headaches of the public market if you don't have to.
* Dell's new management team is energized and aligned around the new mission--serving mid-market growth companies with comprehensive hardware, software, and services solutions. Some of Dell's old managers did not want to sign up for another tour of duty, Dell says. After going private, Dell replaced these folks with younger, hungrier executives who were excited to take on their bosses' responsibilities.
* Dell's debt, which some doomsayers thought would swamp the company, is now getting upgraded, as analysts realize Dell's future is much brighter than they thought. S&P, for example, recently raised its rating on Dell's debt to just a notch below "investment grade."
* It turns out the PC isn't dead. There are 1.8 billion of them out there, Dell says, and a big percentage of them are more than four years old. Dell's PC business got a bump from the retirement of Windows XP last year. Dell expects there will be another bump from the launch of Microsoft's next version of Windows, Windows 10.
* Windows 10 looks good so far. Microsoft's most recent version of Windows, Windows 8, was a dud. No one wanted to buy a PC to get it. (In fact, many people and companies chose to avoid getting new PCs to avoid it.) Windows 10, in contrast, looks like a positive step. It will most likely cause many older PC owners to upgrade, driving some growth in the PC market.
* Dell isn't just PCs anymore! The company's server, software, and services businesses are doing well, Dell says. Server growth is up double-digits year over year.
* Google's ChromeBooks--cheap, stripped-down computers that don't run Windows--are popular in some segments of the market, but they're not going to take over the world. Dell sells some ChromeBooks. They're doing well in some market segments, like education. But Michael Dell thinks they may end up looking like the netbook market of a few years ago. They sound great, at first, especially for the low price of $249. But then many buyers find that they don't do what they want or expect them to do.
--This story first appeared on Business Insider.
___________________________________________________________________________
Michael Saul Dell (born February 23, 1965) is an American businessman, investor, philanthropist, and author. He is the Founder and CEO of Dell Technologies (see below), one of the world's largest technology infrastructure companies. He is ranked as the 20th richest person in the world by Forbes, with a net worth of $32.2 billion as of June 2019.
In 2011, his 243.35 million shares of Dell Inc. stock were worth $3.5 billion, giving him 12% ownership of the company. His remaining wealth of roughly $10 billion is invested in other companies and is managed by a firm whose name, MSD Capital, incorporates Dell's initials.
On January 5, 2013 it was announced that Dell had bid to take Dell Inc. private for $24.4 billion in the biggest management buyout since the Great Recession. Dell Inc. officially went private on October 29, 2013. The company once again went public in December 2018.
Click on any of the following blue hyperlinks for more about Michael Dell:
- Early life and education
- Business career
- Penalty
- Accolades
- Affiliations
- Writings
- Wealth and personal life
- See also:
- Media related to Michael Dell at Wikimedia Commons
- Appearances on C-SPAN
Dell Technologies Inc. is an American multinational technology company headquartered in Round Rock, Texas. It was formed as a result of the September 2016 merger of Dell and EMC Corporation (which later became Dell EMC).
Dell's products include personal computers, servers, smartphones, televisions, computer software, computer security and network security, as well as information security services. Dell ranked 35th on the 2018 Fortune 500 rankings of the largest United States corporations by total revenue.
Current operations:
Approximately 50% of the company's revenue is generated in the United States.
Dell operates under 3 divisions as follows:
- Dell Client Solutions Group (48% of fiscal 2019 revenues) – produces desktop PCs, notebooks, tablets, and peripherals, such as monitors, printers, and projectors under the Dell brand name
- Dell EMC Infrastructure Solutions Group (41% of fiscal 2019 revenues) – storage solutions
- VMware (10% of fiscal 2019 revenues) – a publicly traded company focused on virtualization and cloud infrastructure
Dell also owns 5 separate businesses:
Click on any of the following blue hyperlinks for more about Dell Technologies:
- History
- See also:
- Official website
- Business data for Dell Technologies: