Copyright © 2015 Bert N. Langford (Images may be subject to copyright. Please send feedback)
Welcome to Our Generation USA!
Innovations (& Their Innovators)
found in smart electronics, communications, military, transportation, science, engineering, and other fields and disciplines.
For a List of Medical Breakthroughs topics Click Here
For Computer Advancements, Click Here
For the Internet, Click Here
For Smartphones, Video and Online Games, Click Here
Innovations and Inventions
YouTube Video: Top 10 Inventions of the 20th Century by WatchMojo
Pictured below: Innovations in Technology that Humanity Will Reach by the Year 2030
Innovation can be defined simply as a "new idea, device or method". However, innovation is often also viewed as the application of better solutions that meet new requirements, unarticulated needs, or existing market needs.
Such innovation takes place through the provision of more-effective products, processes, services, technologies, or business models that are made available to markets, governments and society. The term "innovation" can be defined as something original and more effective and, as a consequence, new, that "breaks into" the market or society.
Innovation is related to, but not the same as, invention (see next topic below), as innovation is more apt to involve the practical implementation of an invention (i.e. new/improved ability) to make a meaningful impact in the market or society, and not all innovations require an invention. Innovation often manifests itself via the engineering process, when the problem being solved is of a technical or scientific nature. The opposite of innovation is exnovation.
While a novel device is often described as an innovation, in economics, management science, and other fields of practice and analysis, innovation is generally considered to be the result of a process that brings together various novel ideas in such a way that they affect society.
In industrial economics, innovations are created and found empirically from services to meet growing consumer demand.
A 2014 survey of the literature on innovation found over 40 definitions. In an industrial survey of how the software industry defined innovation, the following definition, given by Crossan and Apaydin and building on the Organisation for Economic Co-operation and Development (OECD) manual's definition, was considered the most complete:
Innovation is:
- production or adoption, assimilation, and exploitation of a value-added novelty in economic and social spheres;
- renewal and enlargement of products, services, and markets;
- development of new methods of production;
- and establishment of new management systems. It is both a process and an outcome.
According to Kanter, innovation includes original invention as well as creative use; she defines innovation as the generation, acceptance, and realization of new ideas, products, services, and processes.
Two main dimensions of innovation are its degree of novelty (i.e., whether an innovation is new to the firm, new to the market, new to the industry, or new to the world) and its type (i.e., whether it is process innovation or product-service system innovation). In recent organizational scholarship, researchers of workplaces have also distinguished innovation from creativity by providing an updated definition of these two related but distinct constructs:
Workplace creativity concerns the cognitive and behavioral processes applied when attempting to generate novel ideas. Workplace innovation concerns the processes applied when attempting to implement new ideas.
Specifically, innovation involves some combination of problem/opportunity identification, the introduction, adoption or modification of new ideas germane to organizational needs, the promotion of these ideas, and the practical implementation of these ideas.
Click on any of the following blue hyperlinks for more about Innovation:
- Inter-disciplinary views
- Diffusion
- Measures
- Government policies
- See also:
- Communities of innovation
- Creative competitive intelligence
- Creative problem solving
- Creativity
- Diffusion of innovations
- Deployment
- Disruptive innovation
- Diffusion (anthropology)
- Ecoinnovation
- Global Innovation Index (Boston Consulting Group)
- Global Innovation Index (INSEAD)
- Greatness
- Hype cycle
- Individual capital
- Induced innovation
- Information revolution
- Ingenuity
- Innovation leadership
- Innovation management
- Innovation system
- Knowledge economy
- List of countries by research and development spending
- List of emerging technologies
- List of Russian inventors
- Multiple discovery
- Obsolescence
- Open Innovation
- Open Innovations (Forum and Technology Show)
- Outcome-Driven Innovation
- Paradigm shift
- Participatory design
- Pro-innovation bias
- Public domain
- Research
- State of the art
- Sustainable Development Goals (Agenda 9)
- Technology Life Cycle
- Technological innovation system
- Theories of technology
- Timeline of historic inventions
- Toolkits for User Innovation
- UNDP Innovation Facility
- Value network
- Virtual product development
An invention is a unique or novel device, method, composition or process. The invention process is a process within an overall engineering and product development process. It may be an improvement upon a machine or product or a new process for creating an object or a result.
An invention that achieves a completely unique function or result may be a radical breakthrough. Such works are novel and not obvious to others skilled in the same field. An inventor may be taking a big step in success or failure.
Some inventions can be patented. A patent legally protects the intellectual property rights of the inventor and legally recognizes that a claimed invention is actually an invention. The rules and requirements for patenting an invention vary from country to country and the process of obtaining a patent is often expensive.
Another meaning of invention is cultural invention, which is an innovative set of useful social behaviors adopted by people and passed on to others. The Institute for Social Inventions collected many such ideas in magazines and books. Invention is also an important component of artistic and design creativity.
Inventions often extend the boundaries of human knowledge, experience or capability.
Inventions are of three kinds:
- scientific-technological (including medicine),
- sociopolitical (including economics and law),
- and humanistic, or cultural.
Scientific-technological inventions include:
- railroads,
- aviation,
- vaccination,
- hybridization,
- antibiotics,
- astronautics,
- holography,
- the atomic bomb,
- computing,
- the Internet,
- and the smartphone.
Sociopolitical inventions comprise new laws, institutions, and procedures that change modes of social behavior and establish new forms of human interaction and organization. Examples include:
- the British Parliament,
- the US Constitution,
- the Manchester (UK) General Union of Trades,
- the Boy Scouts,
- the Red Cross,
- the Olympic Games,
- the United Nations,
- the European Union,
- and the Universal Declaration of Human Rights,
- as well as movements such as:
- socialism,
- Zionism,
- suffragism,
- feminism,
- and animal-rights veganism.
Humanistic inventions encompass culture in its entirety and are as transformative and important as any in the sciences, although people tend to take them for granted. In the domain of linguistics, for example, many alphabets have been inventions, as are all neologisms (Shakespeare invented about 1,700 words).
Literary inventions include:
- the epic,
- tragedy,
- comedy,
- the novel,
- the sonnet,
- the Renaissance,
- neoclassicism,
- Romanticism,
- Symbolism,
- Aestheticism,
- Socialist Realism,
- Surrealism,
- postmodernism,
- and (according to Freud) psychoanalysis.
Among the inventions of artists and musicians are:
- oil painting,
- printmaking,
- photography,
- cinema,
- musical tonality,
- atonality,
- jazz,
- rock,
- opera,
- and the symphony orchestra.
Philosophers have invented:
- logic (several times),
- dialectics,
- idealism,
- materialism,
- utopia,
- anarchism,
- semiotics,
- phenomenology,
- behaviorism,
- positivism,
- pragmatism,
- and deconstruction.
Religious thinkers are responsible for such inventions as:
- monotheism,
- pantheism,
- Methodism,
- Mormonism,
- iconoclasm,
- puritanism,
- deism,
- secularism,
- ecumenism,
- and Baha’i.
Some of these disciplines, genres, and trends may seem to have existed eternally or to have emerged spontaneously of their own accord, but most of them have had inventors.
For more about Inventions, click on any of the following blue hyperlinks:
- Process of invention
- Invention vs. innovation
- Purposes of invention
- Invention as defined by patent law
- Invention in the arts
- See also:
- Bayh-Dole Act
- Chindōgu
- Creativity techniques
- Directive on the legal protection of biotechnological inventions
- Discovery (observation)
- Edisonian approach
- Heroic theory of invention and scientific development
- Independent inventor
- Ingenuity
- INPEX (invention show)
- International Innovation Index
- Invention promotion firm
- Inventors' Day
- Kranzberg's laws of technology
- Lemelson-MIT Prize
- Category:Lists of inventions or discoveries
- List of inventions named after people
- List of inventors
- List of prolific inventors
- Multiple discovery
- National Inventors Hall of Fame
- Patent model
- Proof of concept
- Proposed directive on the patentability of computer-implemented inventions - it was rejected
- Scientific priority
- Technological revolution
- The Illustrated Science and Invention Encyclopedia
- Timeline of historic inventions
- Science and invention in Birmingham - from the first cotton spinning mill to plastics and steam power.
- Invention Ideas
- List of PCT (Patent Cooperation Treaty) Notable Inventions at WIPO
- Hottelet, Ulrich (October 2007). "Invented in Germany - made in Asia". The Asia Pacific Times. Archived from the original on 2012-05-01
Smart Home Technology
YouTube Video: Example of Smart Home Technology in Action...
Pictured: Illustration courtesy of http://www.futureforall.org/home/homeofthefuture.htm
Home automation or smart home is the residential extension of building automation and involves the control and automation of lighting, heating (such as smart thermostats), ventilation, air conditioning (HVAC), and security, as well as home appliances such as washer/dryers, ovens or refrigerators/freezers that use WiFi for remote monitoring.
Modern systems generally consist of switches and sensors connected to a central hub, sometimes called a "gateway," from which the system is controlled through a user interface accessed via a wall-mounted terminal, mobile phone app, tablet computer, or web interface, often but not always through internet cloud services.
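To make this hub-and-gateway arrangement concrete, here is a minimal Python sketch of the kind of control logic a hub might run: it polls a hypothetical gateway's REST API for a motion-sensor reading and switches a light accordingly. The gateway address, endpoint paths, and device names are illustrative assumptions, not any particular vendor's API.

# Minimal sketch of smart-home control logic talking to a hypothetical gateway.
# The gateway address, endpoint paths, and device IDs are illustrative assumptions.
import time
import requests  # third-party HTTP library

GATEWAY = "http://192.168.1.10:8080"  # assumed local hub/gateway address

def read_sensor(sensor_id):
    """Fetch the latest reading for one sensor from the gateway."""
    resp = requests.get(f"{GATEWAY}/sensors/{sensor_id}")
    resp.raise_for_status()
    return resp.json()  # e.g. {"id": "motion-hallway", "value": True}

def set_switch(switch_id, on):
    """Ask the gateway to turn a switch (for example, a light) on or off."""
    resp = requests.put(f"{GATEWAY}/switches/{switch_id}",
                        json={"state": "on" if on else "off"})
    resp.raise_for_status()

if __name__ == "__main__":
    while True:  # simple polling loop
        motion = read_sensor("motion-hallway")
        set_switch("light-hallway", motion["value"])
        time.sleep(5)  # poll every 5 seconds

In practice the same logic could be driven from a wall-mounted terminal, a phone app, or a cloud service; the sketch only illustrates the hub's role as the single point that reads sensors and commands devices.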
While there are many competing vendors, there are very few world-wide accepted industry standards and the smart home space is heavily fragmented.
Popular communications protocols for products include the following:
- X10,
- Ethernet,
- RS-485,
- 6LoWPAN,
- Bluetooth LE (BLE),
- ZigBee,
- and Z-Wave,
- or other proprietary protocols all of which are incompatible with each other.
Manufacturers often prevent independent implementations by withholding documentation and by suing people.
The home automation market was worth US$5.77 billion in 2015, predicted to have a market value over US$10 billion by the year 2020.
According to Li et al. (2016), there are three generations of home automation:
- First generation: wireless technology with a proxy server, e.g. ZigBee automation;
- Second generation: artificial intelligence controls electrical devices, e.g. Amazon Echo;
- Third generation: a robot buddy that interacts with humans, e.g. the Rovio and Roomba robots.
Applications and technologies:
- Heating, ventilation and air conditioning (HVAC): it is possible to have remote control of all home energy monitors over the internet incorporating a simple and friendly user interface.
- Lighting control system
- Appliance control and integration with the smart grid and a smart meter, taking advantage, for instance, of high solar panel output in the middle of the day to run washing machines.
- Security: a household security system integrated with a home automation system can provide additional services such as remote surveillance of security cameras over the Internet, or central locking of all perimeter doors and windows.
- Leak detection, smoke and CO detectors
- Indoor positioning systems
- Home automation for the elderly and disabled
Implementations:
In a review of home automation devices, Consumer Reports found two main concerns for consumers:
- A WiFi network connected to the internet can be vulnerable to hacking.
- Technology is still in its infancy, and consumers could invest in a system that becomes abandonware. In 2014, Google bought the company selling the Revolv Hub home automation system, integrated it with Nest and in 2016 shut down the servers Revolv Hub depended on, rendering the hardware useless.
Microsoft Research found in 2011 that home automation could involve high cost of ownership, inflexibility of interconnected devices, and poor manageability.
Historically systems have been sold as complete systems where the consumer relies on one vendor for the entire system including the hardware, the communications protocol, the central hub, and the user interface. However, there are now open source software systems which can be used with proprietary hardware.
Protocols:
There are a wide variety of technology platforms, or protocols, on which a smart home can be built. Each one is, essentially, its own language. Each language speaks to the various connected devices and instructs them to perform a function.
Automation protocol transports have included direct wire connectivity, powerline (such as UPB), wireless, and hybrid wired/wireless.
Most of the protocols below are not open. All have an API.
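Because each protocol is effectively its own language, integration software commonly wraps each vendor API behind one shared adapter interface so the rest of the system never deals with protocol details. The Python sketch below shows that pattern; the adapter classes and print statements are hypothetical stand-ins for real vendor SDK calls, not actual libraries.

# Sketch of the adapter pattern used to hide protocol differences behind one interface.
# The concrete adapter classes are hypothetical stand-ins, not real vendor SDKs.
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    """Common interface the home-automation hub programs against."""
    @abstractmethod
    def turn_on(self, device_id: str) -> None: ...
    @abstractmethod
    def turn_off(self, device_id: str) -> None: ...

class ZWaveAdapter(DeviceAdapter):
    def turn_on(self, device_id):   # a real adapter would call a Z-Wave library here
        print(f"Z-Wave: switching {device_id} on")
    def turn_off(self, device_id):
        print(f"Z-Wave: switching {device_id} off")

class ZigBeeAdapter(DeviceAdapter):
    def turn_on(self, device_id):   # a real adapter would call a ZigBee library here
        print(f"ZigBee: switching {device_id} on")
    def turn_off(self, device_id):
        print(f"ZigBee: switching {device_id} off")

# The hub keeps one adapter per device and never speaks a protocol directly.
devices = {"porch-light": ZWaveAdapter(), "bedroom-lamp": ZigBeeAdapter()}
devices["porch-light"].turn_on("porch-light")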
Click here for a chart listing available protocols.
Criticism and controversies:
Home automation suffers from platform fragmentation and a lack of technical standards: the variety of home automation devices, in terms of both hardware and the software running on them, makes it hard to develop applications that work consistently across these inconsistent technology ecosystems.
Customers may be hesitant to bet their IoT future on proprietary software or hardware devices that use proprietary protocols that may fade or become difficult to customize and interconnect.
Home automation devices' amorphous computing nature is also a problem for security, since patches to bugs found in the core operating system often do not reach users of older and lower-price devices.
One set of researchers says that the failure of vendors to support older devices with patches and updates leaves more than 87% of active devices vulnerable.
See Also:
- Home automation for the elderly and disabled
- Internet of Things
- List of home automation software and hardware
- List of home automation topics
- List of network buses
- Smart device
- Web of Things
Sergey Brin (co-founder of Google)
YouTube Video: Sergey Brin talks about Google Glass at TED 2013
Sergey Mikhaylovich Brin (born August 21, 1973) is a Soviet-born American computer scientist, internet entrepreneur, and philanthropist. Together with Larry Page, he co-founded Google.
Brin is the President of Google's parent company Alphabet Inc. In October 2016 (the most recent period for which figures are available), Brin was the 12th richest person in the world, with an estimated net worth of US$39.2 billion.
Brin immigrated to the United States with his family from the Soviet Union at the age of 6. He earned his bachelor's degree at the University of Maryland, following in his father's and grandfather's footsteps by studying mathematics, as well as computer science.
After graduation, he moved to Stanford University to acquire a PhD in computer science. There he met Page, with whom he later became friends. They crammed their dormitory room with inexpensive computers and applied Brin's data mining system to build a web search engine. The program became popular at Stanford, and they suspended their PhD studies to start up Google in a rented garage.
The Economist referred to Brin as an "Enlightenment Man", and as someone who believes that "knowledge is always good, and certainly always better than ignorance", a philosophy that is summed up by Google's mission statement, "Organize the world's information and make it universally accessible and useful," and unofficial, sometimes controversial motto, "Don't be evil".
Click on any of the following blue hyperlinks to learn more about Sergey Brin:
- Early life and education
- Search engine development
- Other interests
- Censorship of Google in China
- Personal life
- Awards and accolades
- Filmography
Larry Page (co-founder of Google)
YouTube Video: Where's Google going next? | Larry Page
Lawrence "Larry" Page (born March 26, 1973) is an American computer scientist and an Internet entrepreneur who co-founded Google Inc. with Sergey Brin in 1998.
Page is the chief executive officer (CEO) of Google's parent company, Alphabet Inc. After stepping aside as Google CEO in August 2001 in favor of Eric Schmidt, he re-assumed the role in April 2011. He announced his intention to step aside a second time in July 2015 to become CEO of Alphabet, under which Google's assets would be reorganized.
Under Page, Alphabet is seeking to deliver major advancements in a variety of industries.
As of November 2016, he was the 12th richest person in the world, with an estimated net worth of US$36.9 billion.
Page is the inventor of PageRank, Google's best-known search ranking algorithm. Page received the Marconi Prize in 2004.
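PageRank scores a page by the ranks of the pages that link to it. The short power-iteration sketch below, in Python, runs the basic textbook algorithm on a made-up four-page link graph; it illustrates the idea only and is not Google's production implementation.

# Toy PageRank by power iteration: a page's rank depends on the ranks of pages linking to it.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform ranks
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # contribution from every page q that links to p, split among q's outlinks
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

# Hypothetical four-page web: each page maps to the set of pages it links to.
web = {"A": {"B", "C"}, "B": {"C"}, "C": {"A"}, "D": {"C"}}
print(pagerank(web))  # page C, with the most inbound links, ends up ranked highest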
For more about Larry Page, click on any of the following blue hyperlinks:
- Early life and education
- PhD studies and research
- Alphabet
- Other interests
- Personal life
- Awards and accolades
Steve Jobs
YouTube Video: Steve Jobs - Courage*
* -- This is a clip from the D8 Conference, recorded in 2010. Steve Jobs is talking about the courage it takes to remove certain pieces of technology from Apple products. This happened after the iPad was introduced without support for Flash, just as the iPhone, back in 2007. This clip adds some perspective into the debate of Apple's new AirPod and the decision to remove the traditional analog audio connector from the iPhone 7. This kind of decision is not new to Apple.
Steven Paul "Steve" Jobs (February 24, 1955 – October 5, 2011) was an American businessman, inventor, and industrial designer. He was the co-founder, chairman, and chief executive officer (CEO) of Apple Inc.; CEO and majority shareholder of Pixar; a member of The Walt Disney Company's board of directors following its acquisition of Pixar; and founder, chairman, and CEO of NeXT. Jobs is widely recognized as a pioneer of the microcomputer revolution of the 1970s and 1980s, along with Apple co-founder Steve Wozniak.
Jobs was adopted at birth in San Francisco, and raised in the San Francisco Bay Area during the 1960s. Jobs briefly attended Reed College in 1972 before dropping out. He then decided to travel through India in 1974 seeking enlightenment and studying Zen Buddhism.
Jobs's declassified FBI report says an acquaintance knew that Jobs used illegal drugs in college including marijuana and LSD. Jobs told a reporter once that taking LSD was "one of the two or three most important things" he did in his life.
Jobs co-founded Apple in 1976 to sell Wozniak's Apple I personal computer. The duo gained fame and wealth a year later for the Apple II, one of the first highly successful mass-produced personal computers. In 1979, after a tour of PARC, Jobs saw the commercial potential of the Xerox Alto, which was mouse-driven and had a graphical user interface (GUI).
This led to development of the unsuccessful Apple Lisa in 1983, followed by the breakthrough Macintosh in 1984.
In addition to being the first mass-produced computer with a GUI, the Macintosh instigated the sudden rise of the desktop publishing industry in 1985 with the addition of the Apple LaserWriter, the first laser printer to feature vector graphics. Following a long power struggle, Jobs was forced out of Apple in 1985.
After leaving Apple, Jobs took a few of its members with him to found NeXT, a computer platform development company specializing in state-of-the-art computers for higher-education and business markets.
In addition, Jobs helped to initiate the development of the visual effects industry when he funded the spinout of the computer graphics division of George Lucas's Lucasfilm in 1986. The new company, Pixar, would eventually produce the first fully computer-animated film, Toy Story—an event made possible in part because of Jobs's financial support.
In 1997, Apple acquired and merged with NeXT, allowing Jobs to become CEO once again and revive a company on the verge of bankruptcy. Beginning in 1997 with the "Think different" advertising campaign, Jobs worked closely with designer Jonathan Ive to develop a line of products that would have larger cultural ramifications:
- the iMac,
- iTunes and iTunes Store,
- Apple Store,
- iPod,
- iPhone,
- App Store,
- and the iPad (see picture above)
Mac OS was also revamped into OS X (renamed “macOS” in 2016), based on NeXT's NeXTSTEP platform.
Jobs was diagnosed with a pancreatic neuroendocrine tumor in 2003 and died of respiratory arrest related to the tumor on October 5, 2011.
Click on any of the following blue hyperlinks for more about Steve Jobs:
- Background
- Childhood
- Homestead High
- Reed College
- 1972–1985
- 1985–1997
- 1997–2011
- Portrayals and coverage in books, film, and theater
- Innovations and designs
- Honors and awards
- See also:
Laser Technology and its Applications
YouTube Video: Laser Assisted Cataract Surgery
YouTube Video of an Amazing Laser Show
Pictured: The lasers commonly employed in laser scanning confocal microscopy are high-intensity monochromatic light sources, which are useful as tools for a variety of techniques including optical trapping, lifetime imaging studies, photobleaching recovery, and total internal reflection fluorescence. In addition, lasers are also the most common light source for scanning confocal fluorescence microscopy, and have been utilized, although less frequently, in conventional widefield fluorescence investigations
A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The term "laser" originated as an acronym for "light amplification by stimulated emission of radiation".
The first laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories, based on theoretical work by Charles Hard Townes and Arthur Leonard Schawlow.
A laser differs from other sources of light in that it emits light coherently. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as laser cutting and lithography.
Spatial coherence also allows a laser beam to stay narrow over great distances (collimation), enabling applications such as laser pointers. Lasers can also have high temporal coherence, which allows them to emit light with a very narrow spectrum, i.e., they can emit a single color of light. Temporal coherence can be used to produce pulses of light as short as a femtosecond.
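Two standard relations make these coherence claims quantitative: coherence length is roughly the speed of light divided by the spectral linewidth, and a transform-limited pulse can be no shorter than roughly the inverse of its optical bandwidth. The Python sketch below plugs in illustrative, assumed example values to show the orders of magnitude involved.

# Rough coherence-length and pulse-duration estimates from the standard relations
# L_coherence ~ c / linewidth and t_pulse ~ 1 / bandwidth.
# The example linewidth and bandwidth values are illustrative assumptions.
c = 3.0e8  # speed of light, m/s

linewidth_hz = 1.0e6  # assumed 1 MHz linewidth (a narrow single-mode laser)
coherence_length_m = c / linewidth_hz
print(f"coherence length ~ {coherence_length_m:.0f} m")  # ~ 300 m

bandwidth_hz = 1.0e15  # assumed 1 PHz of optical bandwidth
min_pulse_s = 1.0 / bandwidth_hz
print(f"shortest pulse ~ {min_pulse_s * 1e15:.1f} fs")  # ~ 1 femtosecond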
Among their many applications, lasers are used in:
- optical disk drives,
- laser printers,
- barcode scanners;
- DNA sequencing instruments,
- fiber-optic and free-space optical communication;
- laser surgery and skin treatments;
- cutting and welding materials;
- military and law enforcement devices for marking targets and measuring range and speed;
- and laser lighting displays in entertainment.
Click here for more about Laser Technology
Click on any of the following blue hyperlinks for additional Laser Applications:
- Scientific
- Military
- Medical
- Industrial and commercial
- Entertainment and recreation
- Surveying and ranging
- Bird deterrent
- Images
- See also:
Technologies for Clothing and Textiles, including their Timelines
YouTube Video: 100 Years of Fashion: Women
YouTube Video: 100 Years of Fashion: Men
Pictured Below:
TOP: Increases in capital investment from 2009-2014: 2016 State Of The U.S. Textile Industry
BOTTOM: Images of how female fashion has changed from (LEFT) 1950; to (RIGHT) Today: (L-R) Vanessa Hudgens, Miranda Kerr and Ashley Tisdale.
Click here for the Timeline of clothing and textiles technology
Clothing technology involves the manufacturing, materials, and design innovations that have been developed and used.
The timeline of clothing and textiles technology includes major changes in the manufacture and distribution of clothing.
From the ancient world to the present, the use of technology has dramatically influenced clothing and fashion. Industrialization brought changes in the manufacture of goods. In many nations, homemade goods crafted by hand have largely been replaced by factory-produced goods made on assembly lines and purchased in a consumer culture.
Innovations include man-made materials such as polyester, nylon, and vinyl as well as features like zippers and velcro. The advent of advanced electronics has resulted in wearable technology being developed and popularized since the 1980s.
Design is an important part of the industry beyond utilitarian concerns and the fashion and glamour industries have developed in relation to clothing marketing and retail.
Environmental and human rights issues have also become considerations for clothing and spurred the promotion and use of some natural materials such as bamboo that are considered environmentally friendly.
Click on any of the following blue hyperlinks for more information about clothing technology:
- Production
- Sports
- Education
- See also
Textile manufacturing is a major industry. It is based on the conversion of fibre into yarn and yarn into fabric, which is then dyed or printed and fabricated into clothes.
Different types of fibre are used to produce yarn. Cotton remains the most important natural fibre, so it is treated here in depth. The many processes available at the spinning and fabric-forming stages, coupled with the complexities of the finishing and coloration processes, allow the production of a wide range of products. A large industry also remains that uses hand techniques to achieve the same results.
Click here for more about Textile Manufacturing.
Advancements in Technologies for Agriculture and Food, Including their Timeline
YouTube Video: Latest Technology Machines, New Modern Agriculture Machines
Pictured: LEFT: Six Ways Drones are Revolutionizing Agriculture; RIGHT: A corn farmer sprays weed killer across his corn field in Auburn, Ill.
Click here for a Timeline of Agriculture Technology Advancements.
By the United States Department of Agriculture (USDA), National Institute of Food and Agriculture:
Modern farms and agricultural operations work far differently than those a few decades ago, primarily because of advancements in technology, including sensors, devices, machines, and information technology.
Today’s agriculture routinely uses sophisticated technologies such as robots, temperature and moisture sensors, aerial images, and GPS technology. These advanced devices and precision agriculture and robotic systems allow businesses to be more profitable, efficient, safer, and more environmentally friendly.
IMPORTANCE OF AGRICULTURAL TECHNOLOGY:
Farmers no longer have to apply water, fertilizers, and pesticides uniformly across entire fields. Instead, they can use the minimum quantities required and target very specific areas, or even treat individual plants differently. Benefits include:
- Higher crop productivity
- Decreased use of water, fertilizer, and pesticides, which in turn keeps food prices down
- Reduced impact on natural ecosystems
- Less runoff of chemicals into rivers and groundwater
- Increased worker safety
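As a concrete illustration of the targeted application described above, the Python sketch below computes a separate fertilizer rate for each management zone from soil-sensor readings rather than applying one uniform rate. The zone readings, the target value, and the simple shortfall rule are invented for illustration and are not agronomic recommendations.

# Illustrative variable-rate application: each field zone receives only what its
# soil-sensor reading says it needs. All numbers here are made-up examples.
TARGET_NITROGEN = 50.0  # desired soil nitrogen, kg/ha (assumed target)

# Hypothetical soil-sensor readings per management zone, kg/ha of available N
zones = {"zone-1": 42.0, "zone-2": 50.0, "zone-3": 31.5}

for zone, measured_n in zones.items():
    rate = max(0.0, TARGET_NITROGEN - measured_n)  # apply only the shortfall
    print(f"{zone}: apply {rate:.1f} kg/ha of N")
# A uniform application would instead put the same (often excess) rate everywhere.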
In addition, robotic technologies enable more reliable monitoring and management of natural resources, such as air and water quality. It also gives producers greater control over plant and animal production, processing, distribution, and storage, which results in:
- Greater efficiencies and lower prices
- Safer growing conditions and safer foods
- Reduced environmental and ecological impact
NIFA’S IMPACT:
NIFA advances agricultural technology and ensures that the nation’s agricultural industries are able to utilize it by supporting:
- Basic research and development in physical sciences, engineering, and computer sciences
- Development of agricultural devices, sensors, and systems
- Applied research that assesses how to employ technologies economically and with minimal disruption to existing practices
- Assistance and instruction to farmers on how to use new technologies
Forbes Magazine, July 5, 2016 Issue:
Agriculture technology is no longer a niche that no one's heard about. Agriculture has confirmed its place as an industry of interest for the venture capital community after investment in agtech broke records for the past three years in a row, reaching $4.6 billion in 2015.
For a long time, it wasn't a target sector for venture capitalists or entrepreneurs. Only a handful of funds served the market, largely focused on biotech opportunities. And until recently, entrepreneurs were too focused on what Steve Case calls the "Second Wave" of innovation -- web services, social media, and mobile technology -- to look at agriculture, the least digitized industry in the world, according to McKinsey & Co.
Michael Macrie, chief information officer at agriculture cooperative Land O’ Lakes recently told Forbes that he counted only 20 agtech companies as recently as 2010.
But now, the opportunity to bring agriculture, a $7.8 trillion industry representing 10% of global GDP, into the modern age has caught the attention of a growing number of investors globally. In our 2015 annual report, we recorded 503 individual companies raising funding.
This increasing interest in the sector coincides with a more general “Third Wave” in technological innovation, where all companies are internet-powered tech companies, and startups are challenging the biggest incumbent industries like hospitality, transport, and now agriculture.
There is huge potential, and need, to help the ag industry find efficiencies, conserve valuable resources, meet global demands for protein, and ensure consumers have access to clean, safe, healthy food. In all this, technological innovation is inevitable.
It’s a complex and diverse industry, however, with many subsectors for farmers, investors, and industry stakeholders to navigate. Entrepreneurs are innovating across agricultural disciplines, aiming to disrupt the beef, dairy, row crop, permanent crop, aquaculture, forestry, and fisheries sectors. Each discipline has a specific set of needs that will differ from the others.
___________________________________________________________________________
Advancements in Food Technology:
Food technology is a branch of food science that deals with the production processes that make foods.
Early scientific research into food technology concentrated on food preservation. Nicolas Appert’s development in 1810 of the canning process was a decisive event. The process wasn’t called canning then and Appert did not really know the principle on which his process worked, but canning has had a major impact on food preservation techniques.
Louis Pasteur's research on the spoilage of wine and his description of how to avoid spoilage in 1864 was an early attempt to apply scientific knowledge to food handling.
Besides research into wine spoilage, Pasteur researched the production of alcohol, vinegar, wines and beer, and the souring of milk. He developed pasteurization—the process of heating milk and milk products to destroy food spoilage and disease-producing organisms.
In his research into food technology, Pasteur became the pioneer into bacteriology and of modern preventive medicine.
Developments in food technology have contributed greatly to the food supply and have changed our world. Some of these developments are:
- Instantized Milk Powder - D.D. Peebles (U.S. patent 2,835,586) developed the first instant milk powder, which has become the basis for a variety of new products that are rehydratable. This process increases the surface area of the powdered product by partially rehydrating spray-dried milk powder.
- Freeze-drying - The first application of freeze drying was most likely in the pharmaceutical industry; however, a successful large-scale industrial application of the process was the development of continuous freeze drying of coffee.
- High-Temperature Short Time Processing - These processes for the most part are characterized by rapid heating and cooling, holding for a short time at a relatively high temperature and filling aseptically into sterile containers.
- Decaffeination of Coffee and Tea - Decaffeinated coffee and tea was first developed on a commercial basis in Europe around 1900. The process is described in U.S. patent 897,763. Green coffee beans are treated with water, heat and solvents to remove the caffeine from the beans.
- Process optimization - Food technology now allows the production of foods to be more efficient; oil-saving technologies are now available in different forms, and production methods have become increasingly sophisticated.
Consumer Acceptance:
In the past, consumer attitudes towards food technologies were not commonly discussed and were not important in food development. Nowadays the food chain is long and complicated, and foods and food technologies are diverse; consequently, consumers are uncertain about food quality and safety and find it difficult to orient themselves to the subject.
That is why consumer acceptance of food technologies is an important question. Today, acceptance of food products very often depends on the potential benefits and risks associated with the food, including the technology with which it is processed.
Attributes like "uncertain", "unknown", or "unfamiliar" are associated with consumers' risk perception, and consumers will very likely reject products linked to these attributes. Innovative food processing technologies in particular carry these characteristics and are often perceived as risky by consumers.
Acceptance varies widely among food technologies. Whereas pasteurization is well recognized, high-pressure treatment and even microwaves are often perceived as risky. In studies done within the HighTech Europe project, traditional technologies were found to be well accepted, in contrast to innovative technologies.
Consumers form their attitudes towards innovative food technologies through three main mechanisms: first, knowledge or beliefs about the risks and benefits associated with the technology; second, their own experience; and third, higher-order values and beliefs.
Acceptance of innovative technologies can be improved by providing non-emotional and concise information about these new processing methods. According to a study by the HighTech Europe project, written information also seems to have a higher impact on consumers than audio-visual information with regard to the sensory acceptance of products processed with innovative food technologies.
See Also:
"Next-gen car technology just got another big upgrade" reported by the Washington Post (7/13/17)
Video: Chris Urmson: How a driverless car sees the road by TED Talk
Pictured: Driverless robo-car guided by radar and lasers.
By Brian Fung July 13
Federal regulators have approved a big swath of new airwaves for vehicle radar devices, opening the door to cheaper, more precise sensors that may accelerate the arrival of high-tech, next-generation cars.
Many consumer vehicles already use radar for collision avoidance, automatic lane-keeping and other purposes. But right now, vehicle radar is divided into a couple of different chunks of the radio spectrum. On Thursday, the Federal Communications Commission voted to consolidate these chunks — and added a little more, essentially giving extra bandwidth to vehicle radar.
“While we enthusiastically harness new technology that will ultimately propel us to a driverless future, we must maintain our focus on safety — and radar applications play an important role,” said Mignon Clyburn, a Democratic FCC commissioner.
Radar is a key component not only in today’s computer-assisted cars, but also in the fully self-driving cars of the future. There, the technology is even more important because it helps the computer make sound driving decisions.
Thursday’s decision by the FCC lets vehicle radar take advantage of all airwaves ranging from frequencies of 76 GHz to 81 GHz — reflecting an addition of four extra gigahertz — and ends support for the technology in the 24 GHz range.
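The practical payoff of a wider contiguous band is finer range resolution, which is why more spectrum translates into more precise sensors. Here is a rough sketch assuming the standard radar relation (range resolution = c / (2 x bandwidth)); the comparison bandwidths are illustrative, not figures from the article.

```python
# Sketch: why a wider radar band allows more precise sensing. The standard
# range-resolution relation delta_R = c / (2 * B) is used; the specific
# comparison bandwidths are illustrative assumptions.

C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    return C / (2 * bandwidth_hz)

for label, bw in [("1 GHz slice", 1e9), ("full 76-81 GHz band (5 GHz)", 5e9)]:
    print(f"{label}: ~{range_resolution_m(bw) * 100:.1f} cm resolution")
```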
Expanding the amount of airwaves devoted to vehicle radar could also make air travel safer, said FCC Chairman Ajit Pai, by allowing for the installation of radar devices on the wingtips of airplanes.
“Wingtip collisions account for 25 percent of all aircraft ground incidents,” said Pai. “Wingtip radars on aircraft may help with collision avoidance on the tarmac, among other areas.”
Although many analysts say fully self-driving cars are still years away from going mainstream, steps like these could help bring that future just a bit more within reach.
Radio Frequency Identification (RFID) including a case of RFID implant in Humans (ABC July 24, 2017)
Click Here then Click on arrow to the embedded Video "WATCH: Company offers to implant microchips in employees"
Tech company workers agree to have microchips implanted in their hands
By ENJOLI FRANCIS REBECCA JARVIS Jul 24, 2017, 6:48 PM ET ABC News
"Some workers at a company in Wisconsin will soon be getting microchips in order to enter the office, log into computers and even buy a snack or two with just a swipe of a hand.
Todd Westby, the CEO of tech company Three Square Market, told ABC News today that of the 80 employees at the company's River Falls headquarters, more than 50 agreed to get implants. He said that participation was not required.
The microchip uses radio frequency identification (RFID) technology and was approved by the Food and Drug Administration in 2004. The chip is the size of a grain of rice and will be placed between a thumb and forefinger.
Swedish company implants microchips in employees
Westby said that when his team was approached with the idea, there was some reluctance mixed with excitement.
But after further conversations and the sharing of more details, the majority of managers were on board, and the company opted to partner with BioHax International to get the microchips.
Westby said the chip is not GPS enabled, does not allow for tracking workers and does not require passwords.
"There's really nothing to hack in it, because it is encrypted just like credit cards are ... The chances of hacking into it are almost nonexistent because it's not connected to the internet," he said. "The only way for somebody to get connectivity to it is to basically chop off your hand."
Three Square Market is footing the bill for the microchips, which cost $300 each, and licensed piercers will be handling the implantations on Aug. 1. Westby said that if workers change their minds, the microchips can be removed, as if taking out a splinter.
He said his wife, young adult children and others will also be getting the microchips next week.
Critics warned that there could be dangers in how the company planned to store, use and protect workers' information.
Adam Levin, the chairman and founder of CyberScout, which provides identity protection and data risk services, said he would not put a microchip in his body.
"Many things start off with the best of intentions, but sometimes intentions turn," he said. "We've survived thousands of years as a species without being microchipped. Is there any particular need to do it now? ... Everyone has a decision to make. That is, how much privacy and security are they willing to trade for convenience?"
Jowan Osterlund of BioHax said implanting people was the next step for electronics.
"I'm certain that this will be the natural way to add another dimension to our everyday life," he told The Associated Press...."
Click here for rest of Article.
___________________________________________________________________________
Radio-frequency identification (RFID): uses electromagnetic fields to automatically identify and track tags attached to objects. The tags contain electronically stored information. Passive tags collect energy from a nearby RFID reader's interrogating radio waves.
Active tags have a local power source such as a battery and may operate at hundreds of meters from the RFID reader. Unlike a barcode, the tag need not be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method for Automatic Identification and Data Capture (AIDC).
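To see why passive tags work only at short range while active, battery-powered tags reach much farther, a rough free-space estimate of the distance at which a passive tag can still harvest enough power is sketched below. All parameter values (frequency, reader power, tag gain, tag sensitivity) are illustrative assumptions, not specifications of any particular system.

```python
# Sketch: forward-link read-range estimate for a passive UHF tag using the
# Friis free-space relation. All parameter values are illustrative assumptions.
import math

C = 299_792_458.0  # m/s

def passive_read_range_m(freq_hz, eirp_w, tag_gain, tag_sensitivity_w):
    wavelength = C / freq_hz
    return (wavelength / (4 * math.pi)) * math.sqrt(eirp_w * tag_gain / tag_sensitivity_w)

# Hypothetical: 915 MHz reader at 4 W EIRP, dipole-like tag (gain ~1.64),
# tag chip needing about -18 dBm (~16 microwatts) to power up.
r = passive_read_range_m(915e6, 4.0, 1.64, 10 ** (-18 / 10) / 1000)
print(f"Theoretical passive read range: ~{r:.1f} m")
```

In practice real read ranges are shorter than this free-space limit because of antenna detuning, multipath, and regulatory power limits; active tags avoid the power-harvesting constraint entirely, which is why they can operate at hundreds of meters.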
RFID tags are used in many industries, for example, an RFID tag attached to an automobile during production can be used to track its progress through the assembly line; RFID-tagged pharmaceuticals can be tracked through warehouses; and implanting RFID microchips in livestock and pets allows for positive identification of animals.
Since RFID tags can be attached to cash, clothing, and possessions, or implanted in animals and people, the possibility of reading personally-linked information without consent has raised serious privacy concerns. These concerns resulted in standard specifications development addressing privacy and security issues.
ISO/IEC 18000 and ISO/IEC 29167 use on-chip cryptography methods for untraceability, tag and reader authentication, and over-the-air privacy. ISO/IEC 20248 specifies a digital signature data structure for RFID and barcodes providing data, source and read method authenticity.
This work is done within ISO/IEC JTC 1/SC 31 Automatic identification and data capture techniques.
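As a rough illustration of the authenticity idea behind such standards (verifying that data read from a tag really came from a trusted issuer), here is a toy sketch using a keyed MAC from the Python standard library. The actual ISO/IEC 20248 structure specifies asymmetric digital signatures and a defined data format, which this sketch does not implement; the key and payload are hypothetical.

```python
# Sketch: verifying that data read from a tag is authentic. A keyed MAC is used
# purely as a stand-in for the asymmetric digital signatures that the standard
# actually specifies. Key and payload are hypothetical.
import hmac, hashlib

ISSUER_KEY = b"hypothetical-issuer-key"

def sign_payload(payload: bytes) -> bytes:
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()

def verify_payload(payload: bytes, tag_signature: bytes) -> bool:
    return hmac.compare_digest(sign_payload(payload), tag_signature)

payload = b"EPC:urn:epc:id:sgtin:0614141.107346.2017"
sig = sign_payload(payload)
print(verify_payload(payload, sig))      # True
print(verify_payload(b"tampered", sig))  # False
```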
In 2014, the world RFID market was worth US$8.89 billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors. The market value is expected to rise to US$18.68 billion by 2026.
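For context, a quick sketch of the compound annual growth rate implied by the figures quoted above (growing from US$8.89 billion in 2014 to a forecast US$18.68 billion in 2026):

```python
# Sketch: compound annual growth rate implied by the market figures quoted above.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

print(f"Implied CAGR 2014-2026: {cagr(8.89, 18.68, 12) * 100:.1f}%")  # ~6.4%
```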
Click on any of the following blue hyperlinks for more about Radio-Frequency Identification (RFID):
- History
- Design
  - Tags
  - Readers
  - Frequencies
  - Signaling
  - Miniaturization
- Uses
  - Commerce
    - Access control
    - Advertising
    - Promotion tracking
  - Transportation and logistics
    - Intelligent transportation systems
    - Hose stations and conveyance of fluids
    - Track & Trace test vehicles and prototype parts
  - Infrastructure management and protection
  - Passports
  - Transportation payments
  - Animal identification
  - Human implantation
  - Institutions
    - Hospitals and healthcare
    - Libraries
    - Museums
    - Schools and universities
  - Sports
  - Complement to barcode
  - Waste Management
  - Telemetry
- Optical RFID
- Regulation and standardization
- Problems and concerns
  - Data flooding
  - Global standardization
  - Security concerns
  - Health
  - Exploitation
  - Passports
  - Shielding
- Controversies
  - Privacy
  - Government control
  - Deliberate destruction in clothing and other items
- See also:
- AS5678
- Balise
- Bin bug
- Chipless RFID
- Internet of Things
- Mass surveillance
- Near Field Communication
- PositiveID
- Privacy by design
- Proximity card
- Resonant inductive coupling
- RFID on metal
- RSA blocker tag
- Smart label
- Speedpass
- TecTile
- Tracking system
- RFID in schools
- UHF regulations overview by GS1
- How RFID Works at HowStuffWorks
- Privacy concerns and proposed privacy legislation
- RFID at DMOZ
- What is RFID? - Animated Explanation
- IEEE Council on RFID
- RFID tracking system
The History of Technology
YouTube Video: Ellen Discusses Technology's Detailed History
(The Ellen Show)
The history of technology is the history of the invention of tools and techniques, and is one facet of the broader history of humanity. Technology can refer to methods ranging from as simple as language and stone tools to the complex genetic engineering and information technology that has emerged since the 1980s.
New knowledge has enabled people to create new things, and conversely, many scientific endeavors are made possible by technologies which assist humans in travelling to places they could not previously reach, and by scientific instruments by which we study nature in more detail than our natural senses allow.
Since much of technology is applied science, technical history is connected to the history of science. Since technology uses resources, technical history is tightly connected to economic history. From those resources, technology produces other resources, including technological artifacts used in everyday life.
Technological change affects, and is affected by, a society's cultural traditions. It is a force for economic growth and a means to develop and project economic, political and military power.
Measuring technological progress:
Many sociologists and anthropologists have created social theories dealing with social and cultural evolution. Some, like Lewis H. Morgan, Leslie White, and Gerhard Lenski have declared technological progress to be the primary factor driving the development of human civilization.
Morgan's concept of three major stages of social evolution (savagery, barbarism, and civilization) can be divided by technological milestones, such as fire. White argued the measure by which to judge the evolution of culture was energy.
For White, "the primary function of culture" is to "harness and control energy." White differentiates between five stages of human development:
- In the first, people use energy of their own muscles.
- In the second, they use energy of domesticated animals.
- In the third, they use the energy of plants (agricultural revolution).
- In the fourth, they learn to use the energy of natural resources: coal, oil, gas.
- In the fifth, they harness nuclear energy.
White introduced a formula P=E*T, where E is a measure of energy consumed, and T is the measure of efficiency of technical factors utilizing the energy. In his own words, "culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased". Nikolai Kardashev extrapolated his theory, creating the Kardashev scale, which categorizes the energy use of advanced civilizations.
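White's relation is simple enough to state directly in code; the sketch below just restates P = E*T with purely illustrative numbers, showing that raising either the energy harnessed per capita or the efficiency with which it is used raises P.

```python
# Sketch: White's relation as quoted above, P = E * T, where E is energy
# harnessed per capita per year and T is the efficiency with which that
# energy is put to work. The numbers below are purely illustrative.
def cultural_development(energy_per_capita, efficiency):
    return energy_per_capita * efficiency

# Doubling either the energy harnessed or the efficiency doubles P.
print(cultural_development(100.0, 0.2))  # 20.0
print(cultural_development(200.0, 0.2))  # 40.0
print(cultural_development(100.0, 0.4))  # 40.0
```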
Lenski's approach focuses on information. The more information and knowledge (especially allowing the shaping of natural environment) a given society has, the more advanced it is. He identifies four stages of human development, based on advances in the history of communication.
- In the first stage, information is passed by genes.
- In the second, when humans gain sentience, they can learn and pass information through by experience.
- In the third, the humans start using signs and develop logic.
- In the fourth, they can create symbols, develop language and writing. Advancements in communications technology translates into advancements in the economic system and political system, distribution of wealth, social inequality and other spheres of social life.
Lenski also differentiates societies based on their level of technology, communication and economy:
- hunter-gatherer,
- simple agricultural,
- advanced agricultural,
- industrial,
- specialized (such as fishing societies).
In economics productivity is a measure of technological progress. Productivity increases when fewer inputs (labor, energy, materials or land) are used in the production of a unit of output.
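As a toy illustration of that definition (the way inputs are aggregated and the figures themselves are illustrative assumptions, not an economist's method):

```python
# Sketch: productivity as output per unit of combined input, as described above.
# The naive input aggregation and the figures are illustrative assumptions.
def productivity(output_units, labor_hours, energy_kwh, materials_cost):
    total_input = labor_hours + energy_kwh + materials_cost  # naive aggregate
    return output_units / total_input

before = productivity(1000, 500, 300, 200)  # 1000 units from 1000 input units
after = productivity(1000, 400, 250, 150)   # same output from fewer inputs
print(f"Productivity growth: {(after / before - 1) * 100:.0f}%")  # 25%
```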
Another indicator of technological progress is the development of new products and services, which is necessary to offset unemployment that would otherwise result as labor inputs are reduced.
In developed countries productivity growth has been slowing since the late 1970s; however, productivity growth was higher in some economic sectors, such as manufacturing.
For example, employment in manufacturing in the United States declined from over 30% in the 1940s to just over 10% 70 years later. Similar changes occurred in other developed countries. This stage is referred to as post-industrial.
Since the late 1970s, sociologists and anthropologists such as Alvin Toffler (author of Future Shock), Daniel Bell and John Naisbitt have developed theories of post-industrial societies, arguing that the current era of industrial society is coming to an end, and that services and information are becoming more important than industry and goods. Some extreme visions of the post-industrial society, especially in fiction, are strikingly similar to the visions of near and post-Singularity societies.
Click on any of the following blue hyperlinks for more about The History of Technology:
- By period and geography
- By type
- See also:
- Related history
- Related disciplines
- Related subjects
- Related concepts
- Future (speculative)
- People
- Historiography
- Historians
- Book series
- Journals and periodicals
- Notebooks
- Research institutes
- Electropaedia on the History of Technology
- MIT 6.933J – The Structure of Engineering Revolutions. From MIT OpenCourseWare, course materials (graduate level) for a course on the history of technology through a Thomas Kuhn-ian lens.
- Concept of Civilization Events. From Jaroslaw Kessler, a chronology of "civilizing events".
- Ancient and Medieval City Technology
- Society for the History of Technology
Technology, including a List of Technologies
YouTube Video of the Top 10 Inventions of All Time by WatchMojo
Pictured: How to Check Your Wi-Fi Network for Suspicious Devices
Technology is the collection of techniques, skills, methods, and processes used in the production of goods or services or in the accomplishment of objectives, such as scientific investigation. Technology can be the knowledge of techniques, processes, and the like, or it can be embedded in machines to allow for operation without detailed knowledge of their workings.
The simplest form of technology is the development and use of basic tools.
The prehistoric discovery of how to control fire and the later Neolithic Revolution increased the available sources of food, and the invention of the wheel helped humans to travel in and control their environment.
Developments in historic times, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale. The steady progress of military technology has brought weapons of ever-increasing destructive power, from clubs to nuclear weapons.
Technology has many effects. It has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class.
Many technological processes produce unwanted by-products known as pollution and deplete natural resources to the detriment of Earth's environment.
Innovations have always influenced the values of a society and raised new questions of the ethics of technology. Examples include the rise of the notion of efficiency in terms of human productivity, and the challenges of bioethics.
Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar reactionary movements criticize the pervasiveness of technology, arguing that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition.
Click on any of the following blue hyperlinks for more about Technology:
- Definition and usage
- Science, engineering and technology
- History
  - Paleolithic (2.5 Ma – 10 ka)
    - Stone tools
    - Fire
    - Clothing and shelter
  - Neolithic through classical antiquity (10 ka – 300 CE)
    - Metal tools
    - Energy and transport
  - Medieval and modern history (300 CE – present)
- Philosophy
- Competitiveness
- Other animal species
- Future technology
- See also:
- Outline of technology
- Architectural technology
- Critique of technology
- Greatest Engineering Achievements of the 20th Century
- History of science and technology
- Knowledge economy
- Law of the instrument – Golden hammer
- Lewis Mumford
- List of years in science
- Niche construction
- Technological convergence
- Technology and society
- Technology assessment
- Technology tree
- -logy
- Superpower § Possible factors
- Theories and concepts in technology:
- Appropriate technology
- Diffusion of innovations
- Human enhancement
- Instrumental conception of technology
- Jacques Ellul
- Paradigm
- Philosophy of technology
- Posthumanism
- Precautionary principle
- Singularitarianism
- Strategy of Technology
- Techno-progressivism
- Technocentrism
- Technocracy
- Technocriticism
- Technological determinism
- Technological evolution
- Technological nationalism
- Technological revival
- Technological singularity
- Technology management
- Technology readiness level
- Technorealism
- Transhumanism
- Economics of technology:
- Technology journalism:
- Other:
Click on the following blue hyperlinks for a List of Technologies by Category of Use:
- Practical Technologies
- Military Technologies
- Astronomical Technologies
- Practical Technologies #2
- Military Technologies #2
- Astronomical Technologies #2
- Medieval Era
- Renaissance Era
Emerging Technologies, including a List
- YouTube Video: the Top Ten Emerging Technologies by The World Economic Forum
- YouTube Video: How Brain-Computer Interfaces Work
- YouTube Video of the Top 10 Emerging Technologies That Will Change Your Life
Pictured below: 40 Key and Emerging Technologies for the Future: Chart
Click Here for a List of Emerging Technologies.
Emerging technologies are technologies whose development, practical applications, or both are still largely unrealized. These technologies are generally new but also include older technologies finding new applications. Emerging technologies are often perceived as capable of changing the status quo.
Emerging technologies are characterized by radical novelty (in application even if not in origins), relatively fast growth, coherence, prominent impact, and uncertainty and ambiguity.
In other words, an emerging technology can be defined as "a radically novel and relatively fast growing technology characterised by a certain degree of coherence persisting over time and with the potential to exert a considerable impact on the socio-economic domain(s) which is observed in terms of the composition of actors, institutions and patterns of interactions among those, along with the associated knowledge production processes. Its most prominent impact, however, lies in the future and so in the emergence phase is still somewhat uncertain and ambiguous."
Emerging technologies include a variety of technologies such as the following:
- educational technology,
- information technology,
- nanotechnology,
- biotechnology,
- robotics,
- and artificial intelligence.
New technological fields may result from the technological convergence of different systems evolving towards similar goals. Convergence brings previously separate technologies such as voice (and telephony features), data (and productivity applications) and video together so that they share resources and interact with each other, creating new efficiencies.
Emerging technologies are those technical innovations which represent progressive developments within a field for competitive advantage; converging technologies represent previously distinct fields which are in some way moving towards stronger inter-connection and similar goals. However, the opinion on the degree of the impact, status and economic viability of several emerging and converging technologies varies.
History of emerging technologies:
Main article: History of technology
In the history of technology, emerging technologies are contemporary advances and innovation in various fields of technology.
Over centuries innovative methods and new technologies are developed and opened up. Some of these technologies are due to theoretical research, and others from commercial research and development.
Technological growth includes incremental developments and disruptive technologies. An example of the former was the gradual roll-out of DVD (digital video disc) as a development intended to follow on from the previous optical technology compact disc. By contrast, disruptive technologies are those where a new method replaces the previous technology and makes it redundant, for example, the replacement of horse-drawn carriages by automobiles and other vehicles.
Emerging technology debates:
See also: Technology and society
Many writers, including computer scientist Bill Joy, have identified clusters of technologies that they consider critical to humanity's future. Joy warns that the technology could be used by elites for good or evil. They could use it as "good shepherds" for the rest of humanity or decide everyone else is superfluous and push for mass extinction of those made unnecessary by technology.
Advocates of the benefits of technological change typically see emerging and converging technologies as offering hope for the betterment of the human condition.
Cyberphilosophers Alexander Bard and Jan Söderqvist argue in The Futurica Trilogy that while Man himself is basically constant throughout human history (genes change very slowly), all relevant change is rather a direct or indirect result of technological innovation (memes change very fast) since new ideas always emanate from technology use and not the other way around.
Man should consequently be regarded as history's main constant and technology as its main variable. However, critics of the risks of technological change, and even some advocates such as transhumanist philosopher Nick Bostrom, warn that some of these technologies could pose dangers, perhaps even contribute to the extinction of humanity itself; i.e., some of them could involve existential risks.
Much ethical debate centers on issues of distributive justice in allocating access to beneficial forms of technology. Some thinkers, including environmental ethicist Bill McKibben, oppose the continuing development of advanced technology partly out of fear that its benefits will be distributed unequally in ways that could worsen the plight of the poor. By contrast, inventor Ray Kurzweil is among techno-utopians who believe that emerging and converging technologies could and will eliminate poverty and abolish suffering.
Some analysts such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, argue that as information technology advances, robots and other forms of automation will ultimately result in significant unemployment as machines and software begin to match and exceed the capability of workers to perform most routine jobs.
As robotics and artificial intelligence develop further, even many skilled jobs may be threatened. Technologies such as machine learning may ultimately allow computers to do many knowledge-based jobs that require significant education.
This may result in substantial unemployment at all skill levels, stagnant or falling wages for most workers, and increased concentration of income and wealth as the owners of capital capture an ever-larger fraction of the economy. This in turn could lead to depressed consumer spending and economic growth as the bulk of the population lacks sufficient discretionary income to purchase the products and services produced by the economy.
See also:
- Technological innovation system,
- Technological utopianism,
- Techno-progressivism,
- Current research in evolutionary biology,
- Bioconservatism,
- Bioethics,
- and Biopolitics
Examples of emerging technologies:
Main article: List of emerging technologies
Artificial intelligence:
Main articles: Artificial intelligence and Outline of artificial intelligence
Artificial intelligence (AI) is intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with such intelligence.
Major AI researchers and textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the study of making intelligent machines".
The central functions (or goals) of AI research include:
- reasoning,
- knowledge,
- planning,
- learning,
- natural language processing (communication),
- perception,
- and the ability to move and manipulate objects.
General intelligence (or "strong AI") is still among the field's long-term goals. Currently, popular approaches include deep learning, statistical methods, computational intelligence and traditional symbolic AI. There is an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others.
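The "intelligent agent" definition quoted above can be made concrete with a small sketch: an agent that, given a percept of its environment, picks the action with the highest expected success. The environment, action set, and scoring function below are entirely hypothetical.

```python
# Sketch of the "intelligent agent" framing: perceive the environment, then
# choose the action expected to maximize success. All details are hypothetical.
def choose_action(percept, actions, expected_success):
    """Pick the action with the highest expected success given the percept."""
    return max(actions, key=lambda action: expected_success(percept, action))

actions = ["turn_left", "turn_right", "go_straight"]

def expected_success(percept, action):
    # Toy scoring: prefer going straight unless an obstacle is perceived ahead.
    if percept == "obstacle_ahead":
        return {"turn_left": 0.6, "turn_right": 0.7, "go_straight": 0.0}[action]
    return {"turn_left": 0.2, "turn_right": 0.2, "go_straight": 0.9}[action]

print(choose_action("obstacle_ahead", actions, expected_success))  # turn_right
print(choose_action("clear_road", actions, expected_success))      # go_straight
```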
3D printing:
Main article: 3D printing
3D printing, also known as additive manufacturing, has been posited by Jeremy Rifkin and others as part of the third industrial revolution.
Combined with Internet technology, 3D printing would allow for digital blueprints of virtually any material product to be sent instantly to another person to be produced on the spot, making purchasing a product online almost instantaneous. Although this technology is still too crude to produce most products, it is rapidly developing and created a controversy in 2013 around the issue of 3D printed firearms.
Gene therapy:
Main article: Gene therapy
See also: Genetic engineering timeline
Gene therapy was first successfully demonstrated in late 1990/early 1991 for adenosine deaminase deficiency, though the treatment was somatic – that is, did not affect the patient's germ line and thus was not heritable. This led the way to treatments for other genetic diseases and increased interest in germ line gene therapy – therapy affecting the gametes and descendants of patients.
Between September 1990 and January 2014, there were around 2,000 gene therapy trials conducted or approved.
Cancer vaccines:
Main article: Cancer vaccine
A cancer vaccine is a vaccine that treats existing cancer or prevents the development of cancer in certain high-risk individuals. Vaccines that treat existing cancer are known as therapeutic cancer vaccines. There are currently no vaccines able to prevent cancer in general.
On April 14, 2009, the Dendreon Corporation announced that their Phase III clinical trial of Provenge, a cancer vaccine designed to treat prostate cancer, had demonstrated an increase in survival. It received U.S. Food and Drug Administration (FDA) approval for use in the treatment of advanced prostate cancer patients on April 29, 2010. The approval of Provenge has stimulated interest in this type of therapy.
Cultured meat:
Main article: Cultured meat
Cultured meat, also called in vitro meat, clean meat, cruelty-free meat, shmeat, and test-tube meat, is an animal-flesh product that has never been part of a living animal, with the exception of the fetal calf serum taken from a slaughtered cow.
In the 21st century, several research projects have worked on in vitro meat in the laboratory. The first in vitro beefburger, created by a Dutch team, was eaten at a demonstration for the press in London in August 2013.
There remain difficulties to be overcome before in vitro meat becomes commercially available. Cultured meat is prohibitively expensive, but it is expected that the cost could be reduced to compete with that of conventionally obtained meat as technology improves.
In vitro meat is also an ethical issue. Some argue that it is less objectionable than traditionally obtained meat because it doesn't involve killing and reduces the risk of animal cruelty, while others disagree with eating meat that has not developed naturally.
Nanotechnology:
Main articles: Nanotechnology and Outline of nanotechnology
Nanotechnology (sometimes shortened to nanotech) is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology.
A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold.
Robotics:
Main articles: Robotics and Outline of robotics
Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing.
These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, and/or cognition. A good example of a robot that resembles humans is Sophia, a social humanoid robot developed by Hong Kong-based company Hanson Robotics which was activated on April 19, 2015. Many of today's robots are inspired by nature contributing to the field of bio-inspired robotics.
Stem-cell therapy:
Main article: Stem-cell therapy
Stem cell therapy is an intervention strategy that introduces new adult stem cells into damaged tissue in order to treat disease or injury. Many medical researchers believe that stem cell treatments have the potential to change the face of human disease and alleviate suffering.
The ability of stem cells to self-renew and give rise to subsequent generations with variable degrees of differentiation capacities offers significant potential for generation of tissues that can potentially replace diseased and damaged areas in the body, with minimal risk of rejection and side effects.
Chimeric antigen receptor (CAR)-modified T cells have risen among other immunotherapies for cancer treatment, being implemented against B-cell malignancies. Despite the promising outcomes of this innovative technology, CAR-T cells are not exempt from limitations that have yet to be overcome in order to provide reliable and more efficient treatments against other types of cancer.
Distributed ledger technology:
Main articles: Blockchain and Smart contracts
Distributed ledger or blockchain technology provides a transparent and immutable list of transactions. A wide range of uses has been proposed for where an open, decentralised database is required, ranging from supply chains to cryptocurrencies.
Smart contracts are self-executing transactions which occur when pre-defined conditions are met. The aim is to provide security that is superior to traditional contract law, and to reduce transaction costs and delays. The original idea was conceived by Nick Szabo in 1994, but remained unrealized until the development of blockchains.
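As a toy sketch of the self-executing idea (a plain in-memory model used only to illustrate the concept, not a real blockchain or any specific smart-contract platform):

```python
# Sketch: a toy "self-executing transaction" in the spirit of the smart-contract
# idea described above -- funds release automatically once a pre-defined
# condition is met. This is a plain in-memory model, not a real blockchain.
class EscrowContract:
    def __init__(self, buyer, seller, amount, condition):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.condition = condition  # callable returning True when terms are met
        self.settled = False

    def try_execute(self, state):
        """Release payment to the seller only if the agreed condition holds."""
        if not self.settled and self.condition(state):
            self.settled = True
            return f"{self.amount} released from {self.buyer} to {self.seller}"
        return "condition not met; funds remain in escrow"

contract = EscrowContract("buyer", "seller", 100,
                          condition=lambda s: s.get("goods_delivered", False))
print(contract.try_execute({"goods_delivered": False}))
print(contract.try_execute({"goods_delivered": True}))
```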
Augmented reality:
Main article: Augmented reality
This type of technology, in which digital graphics are overlaid on live footage, has been around since the 20th century, but thanks to more powerful computing hardware and the spread of open-source software, it can now do things that once seemed impossible. Familiar uses include apps such as Pokémon Go, Snapchat and Instagram filters, and other apps that place fictional objects in real-world scenes.
Multi-use rockets:
Main article: Reusable spacecraft
This technology can be attributed to Elon Musk and the space company SpaceX: instead of building single-use rockets that serve no purpose after launch, boosters can now land safely at a pre-specified site, be recovered, and be flown again on later launches. This technology is believed to be one of the most important factors for the future of space travel, making it more accessible and also less polluting for the environment.
Development of emerging technologies:
As innovation drives economic growth, and large economic rewards come from new inventions, a great deal of resources (funding and effort) go into the development of emerging technologies. Some of the sources of these resources are described below.
Research and development:
See also List of countries by research and development spending.
Research and development is directed towards the advancement of technology in general, and therefore includes development of emerging technologies.
Applied research is a form of systematic inquiry involving the practical application of science. It accesses and uses some part of the research communities' (the academia's) accumulated theories, knowledge, methods, and techniques, for a specific, often state-, business-, or client-driven purpose.
Science policy is the area of public policy which is concerned with the policies that affect the conduct of the science and research enterprise, including the funding of science, often in pursuance of other national policy goals such as technological innovation to promote commercial product development, weapons development, health care and environmental monitoring.
Patents:
Patents provide inventors with a limited period of time (minimum of 20 years, but duration based on jurisdiction) of exclusive right in the making, selling, use, leasing or otherwise of their novel technological inventions.
Artificial intelligence, robotic inventions, new material, or blockchain platforms may be patentable, the patent protecting the technological know-how used to create these inventions.
In 2019, WIPO reported that AI was the most prolific emerging technology in terms of the number of patent applications and granted patents, while the Internet of things was estimated to be the largest in terms of market size.
It was followed, again in market size, by big data technologies, robotics, AI, 3D printing and the fifth generation of mobile services (5G). Since AI emerged in the 1950s, 340,000 AI-related patent applications have been filed by innovators and 1.6 million scientific papers have been published by researchers, with the majority of all AI-related patent filings published since 2013.
Companies represent 26 out of the top 30 AI patent applicants, with universities or public research organizations accounting for the remaining four.
DARPA:
The Defense Advanced Research Projects Agency (DARPA) is an agency of the U.S. Department of Defense responsible for the development of emerging technologies for use by the military.
DARPA was created in 1958 as the Advanced Research Projects Agency (ARPA) by President Dwight D. Eisenhower. Its purpose was to formulate and execute research and development projects to expand the frontiers of technology and science, with the aim to reach beyond immediate military requirements.
Projects funded by DARPA have provided significant technologies that influenced many non-military fields, such as the Internet and Global Positioning System technology.
Technology competitions and awards:
There are awards that provide incentive to push the limits of technology (generally synonymous with emerging technologies).
Note that while some of these awards reward achievement after-the-fact via analysis of the merits of technological breakthroughs, others provide incentive via competitions for awards offered for goals yet to be achieved.
The Orteig Prize was a $25,000 award offered in 1919 by French hotelier Raymond Orteig for the first nonstop flight between New York City and Paris. In 1927, underdog Charles Lindbergh won the prize in a modified single-engine Ryan aircraft called the Spirit of St. Louis.
In total, nine teams spent $400,000 in pursuit of the Orteig Prize.
The XPRIZE series of awards, public competitions designed and managed by the non-profit organization called the X Prize Foundation, are intended to encourage technological development that could benefit mankind.
The most high-profile XPRIZE to date was the $10,000,000 Ansari XPRIZE relating to spacecraft development, which was awarded in 2004 for the development of SpaceShipOne.
The Turing Award is an annual prize given by the Association for Computing Machinery (ACM) to "an individual selected for contributions of a technical nature made to the computing community." It is stipulated that the contributions should be of lasting and major technical importance to the computer field. The Turing Award is generally recognized as the highest distinction in computer science, and in 2014 grew to $1,000,000.
The Millennium Technology Prize is awarded once every two years by Technology Academy Finland, an independent fund established by Finnish industry and the Finnish state in partnership. The first recipient was Tim Berners-Lee, inventor of the World Wide Web.
In 2003, David Gobel seed-funded the Methuselah Mouse Prize (Mprize) to encourage the development of new life extension therapies in mice, which are genetically similar to humans. So far, three Mouse Prizes have been awarded: one for breaking longevity records to Dr. Andrzej Bartke of Southern Illinois University; one for late-onset rejuvenation strategies to Dr. Stephen Spindler of the University of California; and one to Dr. Z. Dave Sharp for his work with the pharmaceutical rapamycin.
Role of science fiction:
Main article: Technology in science fiction
Science fiction has often affected innovation and new technology - for example many rocketry pioneers were inspired by science fiction - and the documentary How William Shatner Changed the World gives a number of examples of imagined technologies being actualized.
In the media:
The term bleeding edge has been used to refer to some new technologies, formed as an allusion to the similar terms "leading edge" and "cutting edge". It tends to imply even greater advancement, albeit at an increased risk because of the unreliability of the software or hardware. The first documented use of the term dates to early 1983, when an unnamed banking executive was quoted as having used it in reference to Storage Technology Corporation.
See also:
Emerging technologies are technologies whose development, practical applications, or both are still largely unrealized. These technologies are generally new but also include older technologies finding new applications. Emerging technologies are often perceived as capable of changing the status quo.
Emerging technologies are characterized by radical novelty (in application even if not in origins), relatively fast growth, coherence, prominent impact, and uncertainty and ambiguity.
In other words, an emerging technology can be defined as "a radically novel and relatively fast growing technology characterised by a certain degree of coherence persisting over time and with the potential to exert a considerable impact on the socio-economic domain(s) which is observed in terms of the composition of actors, institutions and patterns of interactions among those, along with the associated knowledge production processes. Its most prominent impact, however, lies in the future and so in the emergence phase is still somewhat uncertain and ambiguous."
Emerging technologies include a variety of technologies such as the following:
- educational technology,
- information technology,
- nanotechnology,
- biotechnology,
- robotics,
- and artificial intelligence.
New technological fields may result from the technological convergence of different systems evolving towards similar goals. Convergence brings previously separate technologies such as voice (and telephony features), data (and productivity applications) and video together so that they share resources and interact with each other, creating new efficiencies.
Emerging technologies are those technical innovations which represent progressive developments within a field for competitive advantage; converging technologies represent previously distinct fields which are in some way moving towards stronger inter-connection and similar goals. However, the opinion on the degree of the impact, status and economic viability of several emerging and converging technologies varies.
History of emerging technologies:
Main article: History of technology
In the history of technology, emerging technologies are contemporary advances and innovation in various fields of technology.
Over centuries innovative methods and new technologies are developed and opened up. Some of these technologies are due to theoretical research, and others from commercial research and development.
Technological growth includes incremental developments and disruptive technologies. An example of the former was the gradual roll-out of DVD (digital video disc) as a development intended to follow on from the previous optical technology compact disc. By contrast, disruptive technologies are those where a new method replaces the previous technology and makes it redundant, for example, the replacement of horse-drawn carriages by automobiles and other vehicles.
Emerging technology debates:
See also: Technology and society
Many writers, including computer scientist Bill Joy, have identified clusters of technologies that they consider critical to humanity's future. Joy warns that the technology could be used by elites for good or evil. They could use it as "good shepherds" for the rest of humanity or decide everyone else is superfluous and push for mass extinction of those made unnecessary by technology.
Advocates of the benefits of technological change typically see emerging and converging technologies as offering hope for the betterment of the human condition.
Cyberphilosophers Alexander Bard and Jan Söderqvist argue in The Futurica Trilogy that while Man himself is basically constant throughout human history (genes change very slowly), all relevant change is rather a direct or indirect result of technological innovation (memes change very fast) since new ideas always emanate from technology use and not the other way around.
Man should consequently be regarded as history's main constant and technology as its main variable. However, critics of the risks of technological change, and even some advocates such as transhumanist philosopher Nick Bostrom, warn that some of these technologies could pose dangers, perhaps even contribute to the extinction of humanity itself; i.e., some of them could involve existential risks.
Much ethical debate centers on issues of distributive justice in allocating access to beneficial forms of technology. Some thinkers, including environmental ethicist Bill McKibben, oppose the continuing development of advanced technology partly out of fear that its benefits will be distributed unequally in ways that could worsen the plight of the poor. By contrast, inventor Ray Kurzweil is among techno-utopians who believe that emerging and converging technologies could and will eliminate poverty and abolish suffering.
Some analysts such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, argue that as information technology advances, robots and other forms of automation will ultimately result in significant unemployment as machines and software begin to match and exceed the capability of workers to perform most routine jobs.
As robotics and artificial intelligence develop further, even many skilled jobs may be threatened. Technologies such as machine learning may ultimately allow computers to do many knowledge-based jobs that require significant education.
This may result in substantial unemployment at all skill levels, stagnant or falling wages for most workers, and increased concentration of income and wealth as the owners of capital capture an ever-larger fraction of the economy. This in turn could lead to depressed consumer spending and economic growth as the bulk of the population lacks sufficient discretionary income to purchase the products and services produced by the economy.
See also:
- Technological innovation system,
- Technological utopianism,
- Techno-progressivism,
- Current research in evolutionary biology,
- Bioconservatism,
- Bioethics,
- and Biopolitics
Examples of emerging technologies:
Main article: List of emerging technologies
Artificial intelligence:
Main articles: Artificial intelligence and Outline of artificial intelligence
Artificial intelligence (AI) is intelligence exhibited by machines or software, and the branch of computer science that develops machines and software capable of intelligent behavior.
Major AI researchers and textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the study of making intelligent machines".
The central functions (or goals) of AI research include:
- reasoning,
- knowledge,
- planning,
- learning,
- natural language processing (communication),
- perception,
- and the ability to move and manipulate objects.
General intelligence (or "strong AI") is still among the field's long-term goals. Currently, popular approaches include deep learning, statistical methods, computational intelligence and traditional symbolic AI. There is an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others.
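To make the "intelligent agent" definition above concrete, here is a minimal sketch, assuming a toy one-dimensional environment: an agent repeatedly perceives its surroundings and picks the action with the highest expected utility. The environment, action set, and scoring function are hypothetical placeholders, not any particular AI system.

```python
# A minimal "intelligent agent" loop: perceive the environment, then pick the
# action with the highest expected utility. All names here are illustrative.

def expected_utility(percept, action):
    # Hypothetical scoring function; a real agent would use learned models,
    # search, or probabilistic reasoning to estimate this value.
    return -abs(percept["target"] - (percept["position"] + action))

def choose_action(percept, actions):
    # Rational-agent rule: act so as to maximize the chance of success.
    return max(actions, key=lambda a: expected_utility(percept, a))

def run_agent(environment, actions, steps=10):
    for _ in range(steps):
        percept = environment.sense()              # perceive
        action = choose_action(percept, actions)   # decide
        environment.apply(action)                  # act

class ToyEnvironment:
    """A one-dimensional world: the agent tries to reach a target position."""
    def __init__(self, position=0, target=5):
        self.position, self.target = position, target
    def sense(self):
        return {"position": self.position, "target": self.target}
    def apply(self, action):
        self.position += action

env = ToyEnvironment()
run_agent(env, actions=[-1, 0, 1])
print(env.position)  # converges toward the target (5)
```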
3D printing:
Main article: 3D printing
3D printing, also known as additive manufacturing, has been posited by Jeremy Rifkin and others as part of the third industrial revolution.
Combined with Internet technology, 3D printing would allow digital blueprints of virtually any material product to be sent instantly to another person to be produced on the spot, making purchasing a product online almost instantaneous. Although this technology is still too crude to produce most products, it is developing rapidly and in 2013 created controversy over 3D-printed firearms.
Gene therapy:
Main article: Gene therapy
See also: Genetic engineering timeline
Gene therapy was first successfully demonstrated in late 1990/early 1991 for adenosine deaminase deficiency, though the treatment was somatic – that is, did not affect the patient's germ line and thus was not heritable. This led the way to treatments for other genetic diseases and increased interest in germ line gene therapy – therapy affecting the gametes and descendants of patients.
Between September 1990 and January 2014, there were around 2,000 gene therapy trials conducted or approved.
Cancer vaccines:
Main article: Cancer vaccine
A cancer vaccine is a vaccine that treats existing cancer or prevents the development of cancer in certain high-risk individuals. Vaccines that treat existing cancer are known as therapeutic cancer vaccines. There are currently no vaccines able to prevent cancer in general.
On April 14, 2009, the Dendreon Corporation announced that their Phase III clinical trial of Provenge, a cancer vaccine designed to treat prostate cancer, had demonstrated an increase in survival. It received U.S. Food and Drug Administration (FDA) approval for use in the treatment of advanced prostate cancer patients on April 29, 2010. The approval of Provenge has stimulated interest in this type of therapy.
Cultured meat:
Main article: Cultured meat
Cultured meat, also called in vitro meat, clean meat, cruelty-free meat, shmeat, and test-tube meat, is an animal-flesh product that has never been part of a living animal, with the exception of the fetal calf serum taken from a slaughtered cow.
In the 21st century, several research projects have worked on in vitro meat in the laboratory. The first in vitro beefburger, created by a Dutch team, was eaten at a demonstration for the press in London in August 2013.
There remain difficulties to be overcome before in vitro meat becomes commercially available. Cultured meat is prohibitively expensive, but it is expected that the cost could be reduced to compete with that of conventionally obtained meat as technology improves.
In vitro meat is also an ethical issue. Some argue that it is less objectionable than traditionally obtained meat because it doesn't involve killing and reduces the risk of animal cruelty, while others disagree with eating meat that has not developed naturally.
Nanotechnology:
Main articles: Nanotechnology and Outline of nanotechnology
Nanotechnology (sometimes shortened to nanotech) is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology.
A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold.
Robotics:
Main articles: Robotics and Outline of robotics
Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing.
These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, and/or cognition. A good example of a robot that resembles humans is Sophia, a social humanoid robot developed by Hong Kong-based company Hanson Robotics, which was activated on April 19, 2015. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.
Stem-cell therapy:
Main article: Stem-cell therapy
Stem cell therapy is an intervention strategy that introduces new adult stem cells into damaged tissue in order to treat disease or injury. Many medical researchers believe that stem cell treatments have the potential to change the face of human disease and alleviate suffering.
The ability of stem cells to self-renew and give rise to subsequent generations with variable degrees of differentiation capacities offers significant potential for generation of tissues that can potentially replace diseased and damaged areas in the body, with minimal risk of rejection and side effects.
Chimeric antigen receptor (CAR)-modified T cells have emerged among other immunotherapies for cancer treatment and have been implemented against B-cell malignancies. Despite the promising outcomes of this innovative technology, CAR-T cells are not exempt from limitations that have yet to be overcome in order to provide reliable and more efficient treatments against other types of cancer.
Distributed ledger technology:
Main articles: Blockchain and Smart contracts
Distributed ledger or blockchain technology provides a transparent and immutable list of transactions. A wide range of uses has been proposed where an open, decentralised database is required, ranging from supply chains to cryptocurrencies.
Smart contracts are self-executing transactions which occur when pre-defined conditions are met. The aim is to provide security that is superior to traditional contract law, and to reduce transaction costs and delays. The original idea was conceived by Nick Szabo in 1994, but remained unrealized until the development of blockchains.
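As a rough illustration of the "self-executing transaction" idea, and not of any specific blockchain platform, the sketch below models an escrow-style contract in Python: payment executes automatically once a pre-defined condition is met. Real smart contracts run on a distributed ledger and are typically written in dedicated languages; the class and field names here are made up.

```python
# Toy escrow "smart contract": the payment executes itself once the
# pre-defined condition (delivery confirmed) is met. Illustrative only.

class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivery_confirmed = False
        self.settled = False

    def confirm_delivery(self):
        self.delivery_confirmed = True
        self._try_execute()

    def _try_execute(self):
        # Self-executing step: no intermediary decides; the coded condition does.
        if self.delivery_confirmed and not self.settled:
            self.buyer["balance"] -= self.amount
            self.seller["balance"] += self.amount
            self.settled = True

buyer = {"name": "alice", "balance": 100}
seller = {"name": "bob", "balance": 0}
contract = EscrowContract(buyer, seller, amount=40)
contract.confirm_delivery()
print(buyer["balance"], seller["balance"])  # 60 40
```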
Augmented reality:
Main article: Augmented reality
Augmented reality, in which digital graphics are overlaid on live footage, has existed since the 20th century, but the arrival of more powerful computing hardware and open-source software has enabled applications that previously seemed impossible. Familiar uses of this technology include apps such as Pokémon Go, Snapchat and Instagram filters, and other apps that place fictional elements on real objects.
Multi-use rockets:
Main article: Reusable spacecraft
This technology is largely attributed to Elon Musk's space company SpaceX: instead of building single-use rockets that serve no purpose after launch, SpaceX can now land boosters safely at a pre-specified site, recover them, and fly them again on later launches. Reusability is considered one of the most important factors for the future of space travel, making it more accessible and also less polluting for the environment.
Development of emerging technologies:
As innovation drives economic growth, and large economic rewards come from new inventions, a great deal of resources (funding and effort) go into the development of emerging technologies. Some of the sources of these resources are described below.
Research and development:
See also List of countries by research and development spending.
Research and development is directed towards the advancement of technology in general, and therefore includes development of emerging technologies.
Applied research is a form of systematic inquiry involving the practical application of science. It accesses and uses part of the research community's (academia's) accumulated theories, knowledge, methods, and techniques for a specific, often state-, business-, or client-driven purpose.
Science policy is the area of public policy which is concerned with the policies that affect the conduct of the science and research enterprise, including the funding of science, often in pursuance of other national policy goals such as technological innovation to promote commercial product development, weapons development, health care and environmental monitoring.
Patents:
Patents provide inventors with a limited period of time (a minimum of 20 years, with the exact duration depending on jurisdiction) of exclusive rights over the making, selling, use, or leasing of their novel technological inventions.
Artificial intelligence, robotic inventions, new material, or blockchain platforms may be patentable, the patent protecting the technological know-how used to create these inventions.
In 2019, WIPO reported that AI was the most prolific emerging technology in terms of the number of patent applications and granted patents, while the Internet of things was estimated to be the largest in terms of market size.
It was followed, again in market size, by big data technologies, robotics, AI, 3D printing and the fifth generation of mobile services (5G). Since AI emerged in the 1950s, 340,000 AI-related patent applications have been filed by innovators and 1.6 million scientific papers have been published by researchers, with the majority of all AI-related patent filings published since 2013.
Companies represent 26 out of the top 30 AI patent applicants, with universities or public research organizations accounting for the remaining four.
DARPA:
The Defense Advanced Research Projects Agency (DARPA) is an agency of the U.S. Department of Defense responsible for the development of emerging technologies for use by the military.
DARPA was created in 1958 as the Advanced Research Projects Agency (ARPA) by President Dwight D. Eisenhower. Its purpose was to formulate and execute research and development projects to expand the frontiers of technology and science, with the aim to reach beyond immediate military requirements.
Projects funded by DARPA have provided significant technologies that influenced many non-military fields, such as the Internet and Global Positioning System technology.
Technology competitions and awards:
There are awards that provide incentive to push the limits of technology (generally synonymous with emerging technologies).
Note that while some of these awards reward achievement after-the-fact via analysis of the merits of technological breakthroughs, others provide incentive via competitions for awards offered for goals yet to be achieved.
The Orteig Prize was a $25,000 award offered in 1919 by French hotelier Raymond Orteig for the first nonstop flight between New York City and Paris. In 1927, underdog Charles Lindbergh won the prize in a modified single-engine Ryan aircraft called the Spirit of St. Louis.
In total, nine teams spent $400,000 in pursuit of the Orteig Prize.
The XPRIZE series of awards, public competitions designed and managed by the non-profit organization called the X Prize Foundation, are intended to encourage technological development that could benefit mankind.
The most high-profile XPRIZE to date was the $10,000,000 Ansari XPRIZE relating to spacecraft development, which was awarded in 2004 for the development of SpaceShipOne.
The Turing Award is an annual prize given by the Association for Computing Machinery (ACM) to "an individual selected for contributions of a technical nature made to the computing community." It is stipulated that the contributions should be of lasting and major technical importance to the computer field. The Turing Award is generally recognized as the highest distinction in computer science; in 2014 the prize money was increased to $1,000,000.
The Millennium Technology Prize is awarded once every two years by Technology Academy Finland, an independent fund established by Finnish industry and the Finnish state in partnership. The first recipient was Tim Berners-Lee, inventor of the World Wide Web.
In 2003, David Gobel seed-funded the Methuselah Mouse Prize (Mprize) to encourage the development of new life extension therapies in mice, which are genetically similar to humans. So far, three Mouse Prizes have been awarded: one for breaking longevity records to Dr. Andrzej Bartke of Southern Illinois University; one for late-onset rejuvenation strategies to Dr. Stephen Spindler of the University of California; and one to Dr. Z. Dave Sharp for his work with the pharmaceutical rapamycin.
Role of science fiction:
Main article: Technology in science fiction
Science fiction has often affected innovation and new technology - for example many rocketry pioneers were inspired by science fiction - and the documentary How William Shatner Changed the World gives a number of examples of imagined technologies being actualized.
In the media:
The term bleeding edge has been used to refer to some new technologies, formed as an allusion to the similar terms "leading edge" and "cutting edge". It tends to imply even greater advancement, albeit at an increased risk because of the unreliability of the software or hardware. The first documented example of this term being used dates to early 1983, when an unnamed banking executive was quoted to have used it in reference to Storage Technology Corporation.
See also:
- List of emerging technologies
- Foresight
- Futures studies
- Future of Humanity Institute
- Institute for Ethics and Emerging Technologies
- Institute on Biotechnology and the Human Future
- Technological change
- Transhumanism
- Upcoming software
Human-like Robots including Sophia (Robot), "Citizen" of Saudi Arabia
YouTube Video: Interview With The Lifelike Hot Robot Named Sophia by CNBC
Pictured: Four facial expressions that Sophia can exhibit
A humanoid robot is a robot with its body shape built to resemble the human body. The design may be for functional purposes, such as interacting with human tools and environments, for experimental purposes, such as the study of bipedal locomotion, or for other purposes.
In general, humanoid robots have a torso, a head, two arms, and two legs, though some forms of humanoid robots may model only part of the body, for example, from the waist up. Some humanoid robots also have heads designed to replicate human facial features such as eyes and mouths. Androids are humanoid robots built to aesthetically resemble humans.
Humanoid robots are now used as a research tool in several scientific areas.
Researchers need to understand the human body structure and behavior (biomechanics) to build and study humanoid robots.
Conversely, the attempt to simulate the human body leads to a better understanding of it.
Human cognition is a field of study which is focused on how humans learn from sensory information in order to acquire perceptual and motor skills. This knowledge is used to develop computational models of human behavior and it has been improving over time.
It has been suggested that very advanced robotics will facilitate the enhancement of ordinary humans. See transhumanism.
Although the initial aim of humanoid research was to build better orthosis and prosthesis for human beings, knowledge has been transferred between both disciplines. A few examples are: powered leg prosthesis for neuromuscularly impaired, ankle-foot orthosis, biological realistic leg prosthesis and forearm prosthesis.
Besides the research, humanoid robots are being developed to perform human tasks like personal assistance, where they should be able to assist the sick and elderly, and dirty or dangerous jobs.
Regular jobs like being a receptionist or a worker on an automotive manufacturing line are also suitable for humanoids. In essence, since they can use tools and operate equipment and vehicles designed for the human form, humanoids could theoretically perform any task a human being can, so long as they have the proper software. However, the complexity of doing so is immense.
They are becoming increasingly popular as entertainment too. For example, Ursula, a female robot, sings, plays music, dances, and speaks to her audiences at Universal Studios.
Several Disney theme-park attractions employ animatronics, robots that look, move, and speak much like human beings, in some of their shows. These animatronics look so realistic that it can be hard to tell from a distance whether or not they are actually human. Although they have a realistic look, they have no cognition or physical autonomy.
Various humanoid robots and their possible applications in daily life are featured in an independent documentary film called Plug & Pray, which was released in 2010.
Humanoid robots, especially with artificial intelligence algorithms, could be useful for future dangerous and/or distant space exploration missions, without having the need to turn back around again and return to Earth once the mission is completed.
Sensors:
A sensor is a device that measures some attribute of the world. Being one of the three primitives of robotics (besides planning and control), sensing plays an important role in robotic paradigms.
Sensors can be classified according to the physical process with which they work or according to the type of measurement information that they give as output. Here, the second classification is used.
Proprioceptive sensors:
Proprioceptive sensors sense the position, the orientation and the speed of the humanoid's body and joints.
In human beings the otoliths and semi-circular canals (in the inner ear) are used to maintain balance and orientation. In addition humans use their own proprioceptive sensors (e.g. touch, muscle extension, limb position) to help with their orientation.
Humanoid robots use accelerometers to measure acceleration, from which velocity can be calculated by integration; tilt sensors to measure inclination; force sensors placed in the robot's hands and feet to measure contact force with the environment; position sensors, which indicate the actual position of the robot (from which velocity can be calculated by differentiation); and sometimes speed sensors.
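For example, the velocity estimate mentioned above can be obtained by numerically integrating accelerometer samples over time, and conversely by numerically differentiating position samples. The sketch below is a simplified illustration with made-up sample data; real systems must also handle sensor bias, noise, and drift.

```python
# Estimating velocity from accelerometer readings by numerical integration,
# and from position readings by numerical differentiation. Illustrative only.

def velocity_from_acceleration(accel_samples, dt, v0=0.0):
    """Integrate acceleration (m/s^2) sampled every dt seconds."""
    v = v0
    velocities = []
    for a in accel_samples:
        v += a * dt          # simple rectangular (Euler) integration
        velocities.append(v)
    return velocities

def velocity_from_position(pos_samples, dt):
    """Differentiate position (m) sampled every dt seconds."""
    return [(p1 - p0) / dt for p0, p1 in zip(pos_samples, pos_samples[1:])]

print(velocity_from_acceleration([0.0, 1.0, 1.0, 0.0], dt=0.1))
print(velocity_from_position([0.0, 0.005, 0.015, 0.025], dt=0.1))
```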
Exteroceptive sensors:
Arrays of tactels can be used to provide data on what has been touched. The Shadow Hand uses an array of 34 tactels arranged beneath its polyurethane skin on each finger tip. Tactile sensors also provide information about forces and torques transferred between the robot and other objects.
Vision refers to processing data from any modality which uses the electromagnetic spectrum to produce an image. In humanoid robots it is used to recognize objects and determine their properties. Vision sensors work most similarly to the eyes of human beings. Most humanoid robots use CCD cameras as vision sensors.
Sound sensors allow humanoid robots to hear speech and environmental sounds, and perform as the ears of the human being. Microphones are usually used for this task.
Actuators:
Actuators are the motors responsible for motion in the robot.
Humanoid robots are constructed in such a way that they mimic the human body, so they use actuators that perform like muscles and joints, though with a different structure. To achieve the same effect as human motion, humanoid robots use mainly rotary actuators. They can be either electric, pneumatic, hydraulic, piezoelectric or ultrasonic.
Hydraulic and electric actuators have a very rigid behavior and can only be made to act in a compliant manner through the use of relatively complex feedback control strategies. While electric coreless motor actuators are better suited for high speed and low load applications, hydraulic ones operate well at low speed and high load applications.
Piezoelectric actuators generate a small movement with a high force capability when voltage is applied. They can be used for ultra-precise positioning and for generating and handling high forces or pressures in static or dynamic situations.
Ultrasonic actuators are designed to produce movements in a micrometer order at ultrasonic frequencies (over 20 kHz). They are useful for controlling vibration, positioning applications and quick switching.
Pneumatic actuators operate on the basis of gas compressibility. As they are inflated, they expand along the axis, and as they deflate, they contract. If one end is fixed, the other will move in a linear trajectory. These actuators are intended for low speed and low/medium load applications. Pneumatic actuators include cylinders, bellows, pneumatic engines, pneumatic stepper motors and pneumatic artificial muscles.
Planning and Control:
In planning and control, the essential difference between humanoids and other kinds of robots (like industrial ones) is that the movement of the robot has to be human-like, using legged locomotion, especially biped gait.
The ideal planning for humanoid movements during normal walking should result in minimum energy consumption, as it does in the human body. For this reason, studies on dynamics and control of these kinds of structures become more and more important.
The stabilization of walking biped robots on the ground is of great importance. Maintaining the robot's center of gravity over the center of the bearing area to provide a stable position can be chosen as a goal of control.
To maintain dynamic balance during the walk, a robot needs information about contact force and its current and desired motion. The solution to this problem relies on a major concept, the Zero Moment Point (ZMP).
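A crude way to picture the balance goal just described is to check whether the projection of the robot's center of mass (or, in the dynamic case, the zero moment point) lies inside the support area formed by the feet. The sketch below tests a point against a rectangular support area; real ZMP computation uses contact forces and the full dynamics, and all numbers here are hypothetical.

```python
# Static balance check: is the center-of-mass projection inside the support
# area defined by the feet? A rectangle stands in for the support polygon here.

def inside_support_area(com_xy, x_min, x_max, y_min, y_max):
    x, y = com_xy
    return x_min <= x <= x_max and y_min <= y <= y_max

# Hypothetical support rectangle (meters) spanned by both feet on the ground.
support = dict(x_min=-0.05, x_max=0.25, y_min=-0.10, y_max=0.10)

print(inside_support_area((0.10, 0.02), **support))   # True  -> statically stable
print(inside_support_area((0.40, 0.02), **support))   # False -> robot would tip
```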
Another characteristic of humanoid robots is that they move, gather information (using sensors) on the "real world" and interact with it. They don’t stay still like factory manipulators and other robots that work in highly structured environments. To allow humanoids to move in complex environments, planning and control must focus on self-collision detection, path planning and obstacle avoidance.
Humanoid robots do not yet have some features of the human body. These include structures with variable flexibility, which provide safety (to the robot itself and to the people), and redundancy of movement, i.e. more degrees of freedom and therefore wider task availability.
Although these characteristics are desirable to humanoid robots, they will bring more complexity and new problems to planning and control. The field of whole-body control deals with these issues and addresses the proper coordination of numerous degrees of freedom, e.g. to realize several control tasks simultaneously while following a given order of priority.
Click on the following for more about Humanoid Robots:
Sophia the Robot
Sophia is a humanoid robot developed by Hong Kong-based company Hanson Robotics. She has been designed to learn and adapt to human behavior and work with humans, and has been interviewed around the world.
In October 2017, she became a Saudi Arabian citizen, the first robot to receive citizenship of a country.
According to herself, Sophia was activated on April 19, 2015. She is modeled after actress Audrey Hepburn, and is known for her human-like appearance and behavior compared to previous robotic variants.
According to the manufacturer, David Hanson, Sophia has artificial intelligence, visual data processing and facial recognition. Sophia also imitates human gestures and facial expressions and is able to answer certain questions and to make simple conversations on predefined topics (e.g. on the weather).
The robot uses voice recognition technology from Alphabet Inc. (parent company of Google) and is designed to get smarter over time. Sophia's intelligence software is designed by SingularityNET.
The AI program analyses conversations and extracts data that allows her to improve responses in the future. It is conceptually similar to the computer program ELIZA, which was one of the first attempts at simulating a human conversation.
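To make the ELIZA comparison concrete, here is a minimal pattern-matching chatbot sketch in Python. It is not Sophia's software (which the article attributes to SingularityNET); it only illustrates the general idea of canned responses triggered by keywords in the user's input, and all rules below are invented for illustration.

```python
import re

# A tiny ELIZA-style responder: match keywords in the input and reply with a
# canned template. Illustrative only; not how Sophia's software actually works.

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bweather\b", re.I), "I hear the weather is a popular topic today."),
    (re.compile(r"\brobots?\b", re.I), "Robots are nothing to worry about."),
]

def respond(user_input):
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("How is the weather?"))
print(respond("I feel nervous about robots"))
```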
Hanson designed Sophia to be a suitable companion for the elderly at nursing homes, or to help crowds at large events or parks. He hopes that she can ultimately interact with other humans sufficiently to gain social skills.
Events:
Sophia has been interviewed in the same manner as a human, striking up conversations with hosts. Some replies have been nonsensical, while others have been impressive, such as lengthy discussions with Charlie Rose on 60 Minutes.
In a piece for CNBC, when the interviewer expressed concerns about robot behavior, Sophia joked that he had "been reading too much Elon Musk. And watching too many Hollywood movies". Musk tweeted that Sophia could watch The Godfather and suggested "what's the worst that could happen?"
On October 11, 2017, Sophia was introduced to the United Nations with a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed.
On October 25, at the Future Investment Summit in Riyadh, she was granted Saudi Arabian citizenship, becoming the first robot ever to have a nationality. This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder.
Social media users used Sophia's citizenship to criticize Saudi Arabia's human rights record.
See also:
- ELIZA effect
- Official website
- Sophia at Hanson Robotics website
Outline of Technology
YouTube Video of the 5 Most Secret Military Aircraft
The following outline is provided as an overview of and topical guide to technology:
Technology – collection of tools, including machinery, modifications, arrangements and procedures used by humans. Engineering is the discipline that seeks to study and design new technologies.
Technologies significantly affect human as well as other animal species' ability to control and adapt to their natural environments.
Click on any of the following blue hyperlinks for further amplification on the Outline of Technology:
- Components of technology
- Branches of technology
- Technology by region
- History of technology
- Hypothetical technology
- Philosophy of technology
- Management of technology including Advancement of technology
- Politics of technology
- Economics of technology
- Technology education
- Technology organizations
- Technology media
- Persons influential in technology
Content Management System (CMS) including a List of content management systems
YouTube Video: Understanding content management systems (CMS)
Click here for a List of Content Management Systems (CMS)
A content management system (CMS) is a computer application that supports the creation and modification of digital content. It typically supports multiple users in a collaborative environment.
CMS features vary widely. Most CMSs include Web-based publishing, format management, history editing and version control, indexing, search, and retrieval. By their nature, content management systems support the separation of content and presentation.
A web content management system (WCM or WCMS) is a CMS designed to support the management of the content of Web pages. Most popular CMSs are also WCMSs. Web content includes text and embedded graphics, photos, video, audio, maps, and program code (e.g., for applications) that displays content or interacts with the user.
Such a content management system (CMS) typically has two major components:
- A content management application (CMA) is the front-end user interface that allows a user, even with limited expertise, to add, modify, and remove content from a website without the intervention of a webmaster.
- A content delivery application (CDA) compiles that information and updates the website.
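A highly simplified sketch of the two components just listed: a management side that stores content, and a delivery side that renders it into a page, keeping content separate from presentation. This does not reflect the internals of WordPress, Joomla, or Drupal; all names and the template are made up for illustration.

```python
# Minimal illustration of CMA (authoring/storage) vs. CDA (rendering/delivery).
# Content is stored as plain data; presentation lives in a separate template.

content_store = {}  # stands in for the CMS database

def cma_save(slug, title, body):
    """Content management application: authors add or modify content."""
    content_store[slug] = {"title": title, "body": body}

PAGE_TEMPLATE = (
    "<html><head><title>{title}</title></head>"
    "<body><h1>{title}</h1><p>{body}</p></body></html>"
)

def cda_render(slug):
    """Content delivery application: compiles stored content into a web page."""
    entry = content_store[slug]
    return PAGE_TEMPLATE.format(**entry)

cma_save("about", "About Us", "We publish articles about technology.")
print(cda_render("about"))
```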
Digital asset management systems are another type of CMS. They manage things such as documents, movies, pictures, phone numbers, and scientific data. Companies also use CMSs to store, control, revise, and publish documentation.
Based on market share statistics, the most popular content management system is WordPress, used by over 28% of all websites on the internet, and by 59% of all websites using a known content management system. Other popular content management systems include Joomla and Drupal.
Common features:
Content management systems typically provide the following features:
- SEO-friendly URLs
- Integrated and online help
- Modularity and extensibility
- User and group functionality
- Templating support for changing designs
- Install and upgrade wizards
- Integrated audit logs
- Compliance with various accessibility frameworks and standards, such as WAI-ARIA
Advantages:
- Reduced need to code from scratch
- Easy to create a unified look and feel
- Version control
- Edit permission management
Disadvantages:
- Limited or no ability to create functionality not envisioned in the CMS (e.g., layouts, web apps, etc.)
- Increased need for special expertise and training for content authors
See also:
- Content management
- Document management system
- Dynamic web page
- Enterprise content management
- Information management
- Knowledge management
- LAMP (software bundle)
- List of content management frameworks
- Revision control
- Web application framework
- Wiki
Bill Gates
YouTube Video: Bill Gates interview: How the world will change by 2030
William Henry "Bill" Gates III (born October 28, 1955) is an American business magnate, philanthropist, investor, and computer programmer. In 1975, Gates and Paul Allen co-founded Microsoft, which became the world's largest PC software company. During his career at Microsoft, Gates held the positions of chairman, CEO and chief software architect, and was the largest individual shareholder until May 2014. Gates has authored and co-authored several books.
Starting in 1987, Gates was included in the Forbes list of the world's wealthiest people and was the wealthiest from 1995 to 2007, again in 2009, and has been since 2014. Between 2009 and 2014, his wealth doubled from US $40 billion to more than US $82 billion. Between 2013 and 2014, his wealth increased by US $15 billion. Gates is currently the wealthiest person in the world.
Gates is one of the best-known entrepreneurs of the personal computer revolution. Gates has been criticized for his business tactics, which have been considered anti-competitive, an opinion which has in some cases been upheld by numerous court rulings. Later in his career Gates pursued a number of philanthropic endeavors, donating large amounts of money to various charitable organizations and scientific research programs through the Bill & Melinda Gates Foundation, established in 2000.
Gates stepped down as Chief Executive Officer of Microsoft in January 2000. He remained as Chairman and created the position of Chief Software Architect for himself. In June 2006, Gates announced that he would be transitioning from full-time work at Microsoft to part-time work, and full-time work at the Bill & Melinda Gates Foundation.
He gradually transferred his duties to Ray Ozzie, chief software architect and Craig Mundie, chief research and strategy officer. Ozzie later left the company. Gates's last full-time day at Microsoft was June 27, 2008. He stepped down as Chairman of Microsoft in February 2014, taking on a new post as technology advisor to support newly appointed CEO Satya Nadella.
Jeff Bezos, Amazon Founder
Click on Video for Jeff Bezos on "What matters more than your talents" TED
Jeffrey Preston Bezos (né Jorgensen; born January 12, 1964) is an American technology and retail entrepreneur, investor, electrical engineer, computer scientist, and philanthropist, best known as the founder, chairman, and chief executive officer of Amazon.com, the world's largest online shopping retailer.
The company began as an Internet merchant of books and expanded to a wide variety of products and services, most recently video and audio streaming. Amazon.com is currently the world's largest Internet sales company on the World Wide Web, as well as the world's largest provider of cloud infrastructure services, which is available through its Amazon Web Services arm.
Bezos' other diversified business interests include aerospace and newspapers. He is the founder of the aerospace company Blue Origin (founded in 2000), which began test flights to space in 2015 and plans commercial suborbital human spaceflight beginning in 2018.
In 2013, Bezos purchased The Washington Post newspaper. A number of other business investments are managed through Bezos Expeditions.
When the financial markets opened on July 27, 2017, Bezos briefly surpassed Bill Gates on the Forbes list of billionaires to become the world's richest person, with an estimated net worth of just over $90 billion. He lost the title later in the day when Amazon's stock dropped, returning him to second place with a net worth just below $90 billion.
On October 27, 2017, Bezos again surpassed Gates on the Forbes list as the richest person in the world. Bezos's net worth surpassed $100 billion for the first time on November 24, 2017 after Amazon's share price increased by more than 2.5%.
Click on any of the following blue hyperlinks for more about Jeff Bezos:
- Early life and education
- Business career
- Philanthropy
- Recognition
- Criticism
- Personal life
- Politics
- See also:
Drones
- YouTube Video The World's Deadliest Drone: MQ-9 Reaper
- Click here to see a drone walking a dog so that the owner can abide by a government-mandated coronavirus lockdown!
The following are highlights of the Consumer Reports article "10 Ways Drones are Changing Your World":
An unmanned aerial vehicle (UAV), commonly known as a drone, as an unmanned aircraft system (UAS), or by several other names, is an aircraft without a human pilot aboard.
The flight of UAVs may operate with various degrees of autonomy: either under remote control by a human operator, or fully or intermittently autonomously, by onboard computers.
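As a rough illustration of these degrees of autonomy, the sketch below mixes manual commands with a simple onboard altitude-hold routine: when no operator command arrives, the onboard computer corrects altitude on its own. It is a toy proportional controller with made-up numbers, not any real flight-control stack.

```python
# Toy mix of remote control and onboard autonomy: use the operator's command
# when present, otherwise fall back to a proportional altitude-hold controller.

def altitude_hold(current_alt, target_alt, gain=0.5):
    """Onboard autonomy: climb rate proportional to the altitude error."""
    return gain * (target_alt - current_alt)

def step(current_alt, target_alt, operator_cmd=None):
    climb = operator_cmd if operator_cmd is not None else altitude_hold(current_alt, target_alt)
    return current_alt + climb

alt = 10.0
for cmd in [2.0, None, None, None]:   # one manual command, then autonomy
    alt = step(alt, target_alt=20.0, operator_cmd=cmd)
    print(round(alt, 2))               # 12.0, 16.0, 18.0, 19.0
```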
Compared to manned aircraft, UAVs are often preferred for missions that are too "dull, dirty or dangerous" for humans. They originated mostly in military applications, although their use is expanding in commercial, scientific, recreational and other applications, such as policing and surveillance, aerial photography, agriculture and drone racing.
Civilian drones now vastly outnumber military drones, with estimates of over a million sold by 2015.
For more, click on any of the following:
- Package Delivery (Amazon: WIP)
- Agriculture (enabling farmers to view the health of their crops)
- Photos and Videos of places otherwise inaccessible
- Humanitarian Aid
- First Responders (enabling law enforcement to have an aerial view of a potential crime scene)
- Safety Inspections
- Viewing damage for insurance claims
- Enhancing Internet access through low-flying drones in an otherwise inaccessible area.
- Hurricane and Tornado Forecasts
- Wildlife Conservation
For more about unmanned aerial vehicles (UAVs), click on any of the following:
- 1 Terminology
- 2 History
- 3 Classification
- 4 UAV components
- 5 Autonomy
- 6 Functions
- 7 Market trends
- 8 Development considerations
- 9 Applications
- 10 Existing UAVs
- 11 Events
- 12 Ethical concerns
- 13 Safety
- 14 Regulation
- 15 Popular culture
- 16 UAE Drones for Good Award
- See also:
- Drones in Agriculture
- International Aerial Robotics Competition
- Micro air vehicle
- Miniature UAV
- Quadcopter
- ParcAberporth
- Radio-controlled aircraft
- Satellite Sentinel Project
- Unmanned underwater vehicle
- Tactical Control System
- List of films featuring drones
- Micromechanical Flying Insect
- Research and groups:
- Drones and Drone Data Technical Interest Group (TIG): technology and techniques (equipment, software, workflows, survey designs) that allow individuals to enhance their capabilities with data obtained from drones and drone surveys. Chaired by Karl Osvald and James McDonald.
Inventions, Their Inventors and Their Timeline
YouTube Video: Timeline of the inventions that changed the world
Pictured: L-R: The Light Bulb as pioneered by Thomas E. Edison, The Model T Ford by Henry Ford (displayed at the Science Museum: Getty Images), First flight of the Wright Flyer I (By the Wright Brothers, 12/17/1903 with Orville piloting, Wilbur running at wingtip)
Click here for a List of Inventors.
Click here for a Timeline of Historic Inventions.
An invention is a unique or novel device, method, composition or process. The invention process is a process within an overall engineering and product development process. It may be an improvement upon a machine or product, or a new process for creating an object or a result.
An invention that achieves a completely unique function or result may be a radical breakthrough. Such works are novel and not obvious to others skilled in the same field. An inventor may be taking a big step in success or failure.
Some inventions can be patented. A patent legally protects the intellectual property rights of the inventor and legally recognizes that a claimed invention is actually an invention. The rules and requirements for patenting an invention vary from country to country, and the process of obtaining a patent is often expensive.
Another meaning of invention is cultural invention, which is an innovative set of useful social behaviors adopted by people and passed on to others.
The Institute for Social Inventions collected many such ideas in magazines and books. Invention is also an important component of artistic and design creativity. Inventions often extend the boundaries of human knowledge, experience or capability.
An example of an invention we all take for granted is the remote control.
An example of an invention that revolutionized the kitchen (leading to less food spoilage) is the refrigerator.
Click on any of the following blue hyperlinks for more about Inventions:
- Three areas of invention
- Process of invention
- Invention vs. innovation
- Purposes of invention
- Invention as defined by patent law
- Invention in the arts
- See also:
- Bayh-Dole Act
- Chindōgu
- Creativity techniques
- Directive on the legal protection of biotechnological inventions
- Discovery (observation)
- Edisonian approach
- Heroic theory of invention and scientific development
- Independent inventor
- Ingenuity
- INPEX (invention show)
- International Innovation Index
- Invention promotion firm
- Inventors' Day
- Kranzberg's laws of technology
- Lemelson-MIT Prize
- Category:Lists of inventions or discoveries
- List of inventions named after people
- List of prolific inventors
- Multiple discovery
- National Inventors Hall of Fame
- Patent model
- Proof of concept
- Proposed directive on the patentability of computer-implemented inventions - it was rejected
- Scientific priority
- Technological revolution
- The Illustrated Science and Invention Encyclopedia
- Science and invention in Birmingham - The first cotton spinning mill to plastics and steam power.
- Invention Ideas
- List of PCT (Patent Cooperation Treaty) Notable Inventions at WIPO
(Internet-enabled) Smart TV, including a List of Internet Television Providers in the United States
YouTube Video about the Samsung SMART TV Tutorial – Smart Hub 2015 [How-To-Video]
Pictured below: If you're new to streaming TV, here's your guide by StlToday.com
Click here for a List of Internet Television Providers in the United States.
A smart TV, sometimes referred to as connected TV or hybrid TV, is a television set with integrated Internet and interactive "Web 2.0" features.
Smart TV is a technological convergence between computers and flatscreen television sets and set-top boxes. Besides the traditional functions of television sets and set-top boxes provided through traditional broadcasting media, these devices can also provide
- Internet TV,
- online interactive media,
- over-the-top content (OTT),
- as well as on-demand streaming media, and home networking access.
Smart TV should not be confused with Internet TV, IPTV or Web television. Internet TV refers to receiving television content over the Internet instead of through traditional systems (terrestrial, cable and satellite), although the Internet itself is received by these methods.
IPTV is one of the Internet television technology standards for use by television broadcasters. Web television is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet TV.
In smart TVs, the operating system is pre-loaded or is available through the set-top box. The software applications or "apps" can be pre-loaded into the device, or updated or installed on demand via an app store or marketplace, in a similar manner to how the apps are integrated in modern smartphones.
The technology that enables smart TVs is also incorporated in external devices such as set-top boxes and some Blu-ray players, game consoles, digital media players, hotel television systems, smartphones, and other network-connected interactive devices that utilize television-type display outputs.
These devices allow viewers to find and play videos, movies, TV shows, photos and other content from the Web, cable or satellite TV channel, or from a local storage device.
A smart TV device is either a television set with integrated Internet capabilities or a set-top box for television that offers more advanced computing ability and connectivity than a contemporary basic television set.
A smart TV may be thought of as an information appliance or as a handheld computer's system integrated within a television set; as such, a smart TV often allows the user to install and run more advanced applications or plugins/addons based on a specific platform. Smart TVs run a complete operating system or mobile operating system software providing a platform for application developers.
Smart TV platforms or middleware have a public Software development kit (SDK) and/or Native development kit (NDK) for apps so that third-party developers can develop applications for it, and an app store so that the end-users can install and uninstall apps themselves.
The public SDK enables third-party companies and other interactive application developers to “write” applications once and see them run successfully on any device that supports the smart TV platform or middleware architecture for which they were written, regardless of the hardware manufacturer.
Smart TVs deliver content (such as photos, movies and music) from other computers or network attached storage devices on a network using either a Digital Living Network Alliance / Universal Plug and Play media server or similar service program like Windows Media Player or Network-attached storage (NAS), or via iTunes.
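As one concrete illustration of the DLNA / Universal Plug and Play delivery mentioned above, the following Python sketch performs only the first step, SSDP discovery of MediaServer devices on the local network; it assumes the standard SSDP multicast address and is not a full DLNA client (a real client would go on to fetch the device description and browse content):

    import socket

    # SSDP M-SEARCH request for UPnP/DLNA MediaServer devices (multicast to 239.255.255.250:1900).
    MSEARCH = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",
        "ST: urn:schemas-upnp-org:device:MediaServer:1",
        "", "",
    ])

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))

    try:
        while True:
            data, addr = sock.recvfrom(65507)
            # Each SSDP response carries a LOCATION header pointing at the device description XML.
            for line in data.decode(errors="replace").splitlines():
                if line.lower().startswith("location:"):
                    print(addr[0], line.split(":", 1)[1].strip())
    except socket.timeout:
        pass
    finally:
        sock.close()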
It also provides access to Internet-based services including:
- traditional broadcast TV channels,
- catch-up services,
- video-on-demand (VOD),
- electronic program guide,
- interactive advertising, personalization,
- voting,
- games,
- social networking,
- and other multimedia applications.
Smart TV enables access to movies, shows, video games, apps and more. Some of those apps include Netflix, Spotify, YouTube, and Amazon.
Functions:
Smart TV devices also provide access to user-generated content (either stored on an external hard drive or in cloud storage) and to interactive services and Internet applications, such as YouTube, many using HTTP Live Streaming (also known as HLS) adaptive streaming.
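Since HTTP Live Streaming is mentioned above, here is a simplified Python sketch of how a player might choose a variant stream from an HLS master playlist based on available bandwidth. It is a toy parser (for example, it does not handle quoted attribute values such as CODECS lists), and the sample playlist is invented for illustration:

    def pick_hls_variant(master_playlist, max_bandwidth_bps):
        """Choose the highest-bandwidth variant that fits under a bandwidth cap."""
        lines = master_playlist.strip().splitlines()
        variants = []
        for i, line in enumerate(lines):
            if line.startswith("#EXT-X-STREAM-INF:"):
                attrs = dict(part.split("=", 1)
                             for part in line.split(":", 1)[1].split(",")
                             if "=" in part)
                bandwidth = int(attrs.get("BANDWIDTH", "0"))
                variants.append((bandwidth, lines[i + 1]))   # the URI is on the next line
        usable = [v for v in variants if v[0] <= max_bandwidth_bps]
        return max(usable) if usable else min(variants)

    sample = "\n".join([
        "#EXTM3U",
        "#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360",
        "low/index.m3u8",
        "#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1920x1080",
        "high/index.m3u8",
    ])
    print(pick_hls_variant(sample, max_bandwidth_bps=1_500_000))   # (800000, 'low/index.m3u8')

Adaptive players repeat this choice continually as measured throughput changes, which is what makes HLS "adaptive" streaming.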
Smart TV devices facilitate the curation of traditional content by combining information from the Internet with content from TV providers. Services offer users a means to track and receive reminders about shows or sporting events, as well as the ability to change channels for immediate viewing.
Some devices feature additional interactive organic user interface / natural user interface technologies for navigation controls and other human interaction with a Smart TV, such as second-screen companion devices, spatial gesture input (as with Xbox Kinect), and speech recognition for a natural language user interface.
Features:
Smart TV makers continue to develop new features to satisfy consumers and companies, such as new payment processes. LG and PaymentWall have collaborated to allow consumers to access purchased apps, movies, games, and more using a remote control, laptop, tablet, or smartphone. This is intended to make checkout easier and more convenient.
Background:
In the early 1980s, "intelligent" television receivers were introduced in Japan. The addition of an LSI chip with memory and a character generator to a television receiver enabled Japanese viewers to receive a mix of programming and information transmitted over spare lines of the broadcast television signal.
A patent was published in 1994 (and extended the following year) for an "intelligent" television system, linked with data processing systems, by means of a digital or analog network.
Apart from being linked to data networks, one key point of that system is its ability to automatically download necessary software routines on demand and process the user's needs.
The mass acceptance of digital television in the late 2000s and early 2010s greatly improved smart TVs. In 2015, major TV manufacturers announced that they would produce only smart TVs for their mid-range to high-end models.
Smart TVs are expected to become the dominant form of television by the late 2010s. At the beginning of 2016, Nielsen reported that 29 percent of those with incomes over $75,000 a year had a smart TV.
Click on any of the following blue hyperlinks for more about Smart TV:
- Technology
- Security and privacy
- Reliability
- Restriction of access
- Market share
- See also:
- Automatic content recognition
- 10-foot user interface
- Digital Living Network Alliance - DLNA
- Digital media player
- Enhanced TV
- Home theater PC
- Hotel television systems
- Hybrid Broadcast Broadband TV
- Interactive television
- List of smart TV platforms and middleware software
- Over-the-top content
- PC-on-a-stick
- Second screen
- Space shifting
- Telescreen
- Tivoization
- TV Genius
- Video on demand
Robots and Robotics as well as the resulting impact of Automation on Jobs.
YouTube Video of a Robot Disarming Bombs
YouTube Video CNET News - Meet the robots making Amazon even faster
(A look inside Amazon's warehouse where the Kiva robots are busy moving your orders around. Between December 2014 and January 2015, Amazon deployed 15,000 of these robots in their warehouses.)
Pictured below:
Top-Left: KUKA industrial robots being used at a bakery for food production;
Top-Right: Automated side loader operation
Bottom: Demand for industrial robots to treble in automotive industry
As a lead-in to the following topics covering robots, robotics, and automation, below is a Brookings Institution article of April 18, 2018, entitled "Will robots and AI take your job? The economic and political consequences of automation"
By Darrell M. West Wednesday, April 18, 2018
Editor's Note: Darrell M. West is author of the Brookings book “The Future of Work: Robots, AI, and Automation.”
In Edward Bellamy’s classic Looking Backward, the protagonist Julian West wakes up from a 113-year slumber and finds the United States in 2000 has changed dramatically from 1887. People stop working at age forty-five and devote their lives to mentoring other people and engaging in volunteer work that benefits the overall community. There are short work weeks for employees, and everyone receives full benefits, food, and housing.
The reason is that new technologies of the period have enabled people to be very productive while working part-time. Businesses do not need large numbers of employees, so individuals can devote most of their waking hours to hobbies, volunteering, and community service. In conjunction with periodic work stints, they have time to pursue new skills and personal identities that are independent of their jobs.
In the current era, developed countries may be on the verge of a similar transition. Robotics and machine learning have improved productivity and enhanced the economies of many nations.
Artificial intelligence (AI) has advanced into finance, transportation, defense, and energy management. The internet of things (IoT) is facilitated by high-speed networks and remote sensors to connect people and businesses. In all of this, there is a possibility of a new era that could improve the lives of many people.
Yet amid these possible benefits, there is widespread fear that robots and AI will take jobs and throw millions of people into poverty. A Pew Research Center study asked 1,896 experts about the impact of emerging technologies and found “half of these experts (48 percent) envision a future in which robots and digital agents [will] have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.”
These fears have been echoed by detailed analyses showing anywhere from a 14 to 54 percent automation impact on jobs. For example, a Bruegel analysis found that “54% of EU jobs [are] at risk of computerization.” Using European data, they argue that job losses are likely to be significant and people should prepare for large-scale disruption.
Meanwhile, Oxford University researchers Carl Frey and Michael Osborne claim that technology will transform many sectors of life. They studied 702 occupational groupings and found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”
A McKinsey Global Institute analysis of 750 jobs concluded that “45% of paid activities could be automated using ‘currently demonstrated technologies’ and . . . 60% of occupations could have 30% or more of their processes automated.”
A more recent McKinsey report, “Jobs Lost, Jobs Gained,” found that 30 percent of “work activities” could be automated by 2030 and up to 375 million workers worldwide could be affected by emerging technologies.
Researchers at the Organization for Economic Cooperation and Development (OECD) focused on “tasks” as opposed to “jobs” and found fewer job losses. Using task-related data from 32 OECD countries, they estimated that 14 percent of jobs are highly automatable and another 32 percent have a significant risk of automation. Although their job loss estimates are below those of other experts, they concluded that “low qualified workers are likely to bear the brunt of the adjustment costs as the automatibility of their jobs is higher compared to highly qualified workers.”
While some dispute the dire predictions on grounds new positions will be created to offset the job losses, the fact that all these major studies report significant workforce disruptions should be taken seriously.
If the employment impact falls at the 38 percent mean of these forecasts, Western democracies likely could resort to authoritarianism as happened in some countries during the Great Depression of the 1930s in order to keep their restive populations in check. If that happened, wealthy elites would require armed guards, security details, and gated communities to protect themselves, as is the case in poor countries today with high income inequality. The United States would look like Syria or Iraq, with armed bands of young men with few employment prospects other than war, violence, or theft.
Yet even if the job ramifications lie more at the low end of disruption, the political consequences still will be severe. Relatively small increases in unemployment or underemployment have an outsized political impact. We saw that a decade ago when 10 percent unemployment during the Great Recession spawned the Tea Party and eventually helped to make Donald Trump president.
With some workforce disruption virtually guaranteed by trends already underway, it is safe to predict American politics will be chaotic and turbulent during the coming decades. As innovation accelerates and public anxiety intensifies, right-wing and left-wing populists will jockey for voter support.
Government control could gyrate between very conservative and very liberal leaders as each side blames a different set of scapegoats for economic outcomes voters don’t like. The calm and predictable politics of the post-World War II era likely will become a distant memory as the American system moves toward Trumpism on steroids.
[End of Article]
___________________________________________________________________________
A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically.
Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.
Robots can be autonomous or semi-autonomous and range from humanoids such as Honda's Advanced Step in Innovative Mobility (ASIMO) and TOSY's TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots.
By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own. Autonomous Things are expected to proliferate in the coming decade, with home robotics and the autonomous car as some of the main drivers.
The branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing is robotics. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, or cognition. Many of today's robots are inspired by nature contributing to the field of bio-inspired robotics. These robots have also created a newer branch of robotics: soft robotics.
From the time of ancient civilization there have been many accounts of user-configurable automated devices and even automata resembling animals and humans, designed primarily as entertainment. As mechanical techniques developed through the Industrial age, there appeared more practical applications such as automated machines, remote-control and wireless remote-control.
The term comes from a Czech word, robota, meaning "forced labor"; the word 'robot' was first used to denote a fictional humanoid in the 1920 play R.U.R. by the Czech writer Karel Čapek, but it was Karel's brother, Josef Čapek, who was the word's true inventor.
Electronics evolved into the driving force of development with the advent of the first electronic autonomous robots created by William Grey Walter in Bristol, England in 1948, as well as Computer Numerical Control (CNC) machine tools in the late 1940s by John T. Parsons and Frank L. Stulen.
The first commercial, digital and programmable robot was built by George Devol in 1954 and was named the Unimate. It was sold to General Motors in 1961 where it was used to lift pieces of hot metal from die casting machines at the Inland Fisher Guide Plant in the West Trenton section of Ewing Township, New Jersey.
Robots have replaced humans in performing repetitive and dangerous tasks which humans prefer not to do, or are unable to do because of size limitations, or which take place in extreme environments such as outer space or the bottom of the sea. There are concerns about the increasing use of robots and their role in society.
Robots are blamed for rising technological unemployment as they replace workers in increasing numbers of functions. The use of robots in military combat raises ethical concerns. The possibilities of robot autonomy and potential repercussions have been addressed in fiction and may be a realistic concern in the future.
Click on any of the following blue hyperlinks for more about Robots:
- Summary
- History
- Future development and trends
- New functionalities and prototypes
- Etymology
- Modern robots
- Robots in society
- Contemporary uses
- Robots in popular culture
- See also:
- Specific robotics concepts
- Robotics methods and categories
- Specific robots and devices
- Index of robotics articles
- Outline of robotics
- William Grey Walter
- Specific robotics concepts:
- Robotics methods and categories
- Specific robots and devices
Robotics is an interdisciplinary branch of engineering and science that includes mechanical engineering, electronics engineering, computer science, and others.
Robotics deals with the design, construction, operation, and use of robots (above), as well as computer systems for their control, sensory feedback, and information processing.
These technologies are used to develop machines that can substitute for humans and replicate human actions. Robots can be used in any situation and for any purpose, but today many are used in dangerous environments (including bomb detection and deactivation), manufacturing processes, or where humans cannot survive.
Robots can take on any form but some are made to resemble humans in appearance. This is said to help in the acceptance of a robot in certain replicative behaviors usually performed by people. Such robots attempt to replicate walking, lifting, speech, cognition, and basically anything a human can do. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.
The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century. Throughout history, it has been frequently assumed that robots will one day be able to mimic human behavior and manage tasks in a human-like fashion.
Today, robotics is a rapidly growing field, as technological advances continue; researching, designing, and building new robots serve various practical purposes, whether domestically, commercially, or militarily. Many robots are built to do jobs that are hazardous to people such as defusing bombs, finding survivors in unstable ruins, and exploring mines and shipwrecks.
Robotics is also used in STEM (science, technology, engineering, and mathematics) as a teaching aid.
Robotics is a branch of engineering that involves the conception, design, manufacture, and operation of robots. This field overlaps with electronics, computer science, artificial intelligence, mechatronics, nanotechnology and bioengineering.
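To give a concrete flavor of the kind of computation robotics involves, here is a standard textbook example, forward kinematics of a two-joint planar arm, written as a short Python sketch; the link lengths and angles are arbitrary, and it is not tied to any robot discussed here:

    import math

    def forward_kinematics(theta1, theta2, l1=1.0, l2=0.7):
        """End-effector position of a two-joint planar arm (angles in radians)."""
        x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
        y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
        return x, y

    # With both joints at zero the arm lies fully extended along the x-axis.
    print(forward_kinematics(0.0, 0.0))                    # (1.7, 0.0)
    # Shoulder up 90 degrees, elbow bent back 90 degrees.
    print(forward_kinematics(math.pi / 2, -math.pi / 2))   # roughly (0.7, 1.0)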
Science-fiction author Isaac Asimov is often given credit for being the first person to use the term robotics in a short story composed in the 1940s. In the story, Asimov suggested three principles to guide the behavior of robots and smart machines. Asimov's Three Laws of Robotics, as they are called, have survived to the present:
- Robots must never harm human beings.
- Robots must follow instructions from humans without violating rule 1.
- Robots must protect themselves without violating the other rules.
Click on any of the following blue hyperlinks for more about Robotics:
- Etymology
- History
- Robotic aspects
- Applications
- Components
- Control
- Research
- Education and training
- Summer robotics camp
- Robotics competitions
- Employment
- Occupational safety and health implications
- See also:
- Robotics portal
- Anderson Powerpole connector
- Artificial intelligence
- Autonomous robot
- Cloud robotics
- Cognitive robotics
- Evolutionary robotics
- Glossary of robotics
- Index of robotics articles
- Mechatronics
- Multi-agent system
- Outline of robotics
- Roboethics
- Robot rights
- Robotic governance
- Soft robotics
- IEEE Robotics and Automation Society
- Investigation of social robots – Robots that mimic human behaviors and gestures.
- Wired's guide to the '50 best robots ever', a mix of robots in fiction (Hal, R2D2, K9) to real robots (Roomba, Mobot, Aibo).
- Notable Chinese Firms Emerging in Medical Robots Sector(GCiS)
Automation is the technology by which a process or procedure is performed without human assistance.
Automation, or automatic control, is the use of various control systems for operating equipment such as machinery, processes in factories, boilers and heat-treating ovens, switching on telephone networks, and steering and stabilization of ships and aircraft, with minimal or reduced human intervention. Some processes have been completely automated.
The biggest benefit of automation is that it saves labor; however, it is also used to save energy and materials and to improve quality, accuracy and precision.
The term automation was not widely used before 1947, when General Motors established an automation department. It was during this time that industry was rapidly adopting feedback controllers, which were introduced in the 1930s.
Automation has been achieved by various means including mechanical, hydraulic, pneumatic, electrical, electronic devices and computers, usually in combination. Complicated systems, such as modern factories, airplanes and ships typically use all these combined techniques.
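Because feedback controllers are central to the history of automation described above, the following Python sketch shows a generic textbook PID (proportional-integral-derivative) controller driving a toy thermal model; the gains and the plant are invented purely for illustration and are not tuned for any real equipment:

    class PIDController:
        """Textbook discrete PID controller (illustrative gains, not tuned for a real plant)."""
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measurement, dt):
            error = self.setpoint - measurement
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Toy plant: a heater whose temperature rises with the control output
    # and loses heat toward a 20 C room.
    pid = PIDController(kp=2.0, ki=0.5, kd=0.1, setpoint=100.0)
    temp, dt = 20.0, 0.1
    for _ in range(300):
        power = max(0.0, pid.update(temp, dt))      # the heater cannot cool
        temp += (power - 0.2 * (temp - 20.0)) * dt  # simple first-order thermal model
    print(round(temp, 1))                           # after 30 simulated seconds, close to the setpoint

The same closed-loop idea, measure, compare with a setpoint, and correct, underlies the feedback controllers adopted by industry in the 1930s and 1940s, whatever the physical medium (mechanical, pneumatic, or electronic).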
Click on any of the following hyperlinks for amplification:
- Types of automation
- History
- Advantages and disadvantages
- Lights out manufacturing
- Health and environment
- Convertibility and turnaround time
- Automation tools
- Recent and emerging applications
- Relationship to unemployment
- See also:
- Accelerating change
- Artificial intelligence – automated thought
- Automated reasoning
- Automatic Tool Changer
- Automation protocols
- Automation Technician
- BELBIC
- Controller
- Conveyor
- Conveyor belt
- Cybernetics
- EnOcean
- Feedforward Control
- Hardware architect
- Hardware architecture
- Industrial engineering
- International Society of Automation
- Machine to Machine
- Mobile manipulator
- Multi-agent system
- Odo J. Struger
- OLE for process control
- OPC Foundation
- Pharmacy automation
- Pneumatics automation
- Process control
- Retraining
- Robotics
- Autonomous robot
- Run Book Automation (RBA)
- Sensor-based sorting
- Stepper motor
- Support automation
- System integration
- Systems architect
- Systems architecture
- Luddite
- Mechatronics
- mechanization
- deskilling
Alphabet, Inc. (Google's Parent Company)
YouTube Video: How did Google Get So Big? (60 Minutes 5/21/18)
YouTube Video:How Does Google Maps Work?
Pictured below: Google announced its plans to form a conglomerate of companies called Alphabet back in August. The announcement has finally been converted into reality and a new parent company with the name Alphabet is formed. Google will now split into various companies on the basis of the functions they provide with Google itself retaining its identity.
The links below provide critical viewpoints about Google:
- Click here: How Did Google Get So Big? (60 Minutes 5/21/18: see above YouTube).
- Click here: Google Tries Being Slightly Less Evil. (Vanity Fair 6/8/18)
Alphabet Inc. is an American multinational conglomerate headquartered in Mountain View, California. It was created through a corporate restructuring of Google on October 2, 2015 and became the parent company of Google and several former Google subsidiaries.
The two founders of Google assumed executive roles in the new company, with Larry Page serving as CEO and Sergey Brin as President. It has 80,110 employees (as of December 2017).
Alphabet's portfolio encompasses several industries, including technology, life sciences, investment capital, and research. Some of its subsidiaries include:
Some of the subsidiaries of Alphabet have altered their names since leaving Google and becoming part of the new parent company:
Following the restructuring, Page became CEO of Alphabet and Sundar Pichai took his position as CEO of Google. Shares of Google's stock have been converted into Alphabet stock, which trade under Google's former ticker symbols of "GOOG" and "GOOGL".
The establishment of Alphabet was prompted by a desire to make the core Google Internet services business "cleaner and more accountable" while allowing greater autonomy to group companies that operate in businesses other than Internet services.
Click on any of the following blue hyperlinks to learn more about Alphabet, Inc.:
- History
- Website
- Structure
- Proposed growth
- Restructuring process
- Lawsuit
- Investments and acquisitions
- See also:
- Official website
- Business data for Alphabet Inc: Google Finance
- Yahoo! Finance
- Reuters
- SEC filings
HDMI (High-Definition Multimedia Interface)
YouTube Video: Connect Computer to TV With HDMI With AUDIO/Sound
Pictured: The HDMI Advantage
HDMI (High-Definition Multimedia Interface) is a proprietary audio/video interface for transmitting uncompressed video data and compressed or uncompressed digital audio data from an HDMI-compliant source device, such as a display controller, to a compatible computer monitor, video projector, digital television, or digital audio device. HDMI is a digital replacement for analog video standards.
HDMI implements the EIA/CEA-861 standards, which define video formats and waveforms, transport of compressed, uncompressed, and LPCM audio, auxiliary data, and implementations of the VESA EDID. CEA-861 signals carried by HDMI are electrically compatible with the CEA-861 signals used by the Digital Visual Interface (DVI).
No signal conversion is necessary, nor is there a loss of video quality when a DVI-to-HDMI adapter is used. The CEC (Consumer Electronics Control) capability allows HDMI devices to control each other when necessary and allows the user to operate multiple devices with one handheld remote control device.
Several versions of HDMI have been developed and deployed since initial release of the technology but all use the same cable and connector. Other than improved audio and video capacity, performance, resolution and color spaces, newer versions have optional advanced features such as 3D, Ethernet data connection, and CEC (Consumer Electronics Control) extensions.
Production of consumer HDMI products started in late 2003. In Europe, either DVI-HDCP or HDMI is included in the HD ready in-store labeling specification for TV sets for HDTV, formulated by EICTA with SES Astra in 2005. HDMI began to appear on consumer HDTVs, camcorders and digital still cameras in 2006. As of January 6, 2015 (twelve years after the release of the first HDMI specification), over 4 billion HDMI devices had been sold.
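To show why carrying uncompressed video requires a high-speed digital link, here is some back-of-the-envelope Python arithmetic for HDMI's TMDS signaling (three data channels, with 10 bits sent on the wire for every 8-bit video byte), using the standard 148.5 MHz pixel clock of 1080p60; this is rough illustrative math, not a statement of any particular HDMI version's limits:

    def hdmi_tmds_rates(pixel_clock_hz, bits_per_pixel=24):
        """Rough TMDS link-rate arithmetic for a given video timing (illustrative only)."""
        channels = 3                    # HDMI carries video on three TMDS data channels
        tmds_bits_per_symbol = 10       # 10 bits are transmitted per 8-bit video byte per channel
        link_rate = pixel_clock_hz * channels * tmds_bits_per_symbol   # raw bits on the wire
        video_payload = pixel_clock_hz * bits_per_pixel                # pixel data before coding
        return link_rate, video_payload

    # 1080p60 has a 148.5 MHz pixel clock (2200 x 1125 total pixels x 60 Hz, blanking included).
    link, payload = hdmi_tmds_rates(148_500_000)
    print(f"link: {link / 1e9:.3f} Gbit/s, video payload: {payload / 1e9:.3f} Gbit/s")
    # -> link: 4.455 Gbit/s, video payload: 3.564 Gbit/s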
Click on any of the following blue hyperlinks for more information about HDMI:
- History
- Specifications
- Versions
- Version comparison
- Applications
- HDMI Alternate Mode for USB Type-C
- Relationship with DisplayPort
- Relationship with MHL
- See also:
Animation
YouTube Video: The 5 Types of Animation
YouTube Video: Complete Animation Workflow (Adobe Character Animator Tutorial)
YouTube Video: Beginner's Guide to Animation
Animation is a dynamic medium in which images or objects are manipulated to appear as moving images. In traditional animation the images were drawn (or painted) by hand on cels to be photographed and exhibited on film. Nowadays most animations are made with computer-generated imagery (CGI).
Computer animation can be very detailed 3D animation, while 2D computer animation can be used for stylistic reasons, low bandwidth or faster real-time renderings. Other common animation methods apply a stop motion technique to two and three-dimensional objects like paper cutouts, puppets or clay figures. The stop motion technique where live actors are used as a frame-by-frame subject is known as pixilation.
Commonly the effect of animation is achieved by a rapid succession of sequential images that minimally differ from each other. The illusion—as in motion pictures in general—is thought to rely on the phi phenomenon and beta movement, but the exact causes are still uncertain.
Analog mechanical animation media that rely on the rapid display of sequential images include the phénakisticope, zoetrope, flip book, praxinoscope and film.
Television and video are popular electronic animation media that originally were analog and now operate digitally. For display on the computer, techniques like animated GIF and Flash animation were developed.
Apart from short films, feature films, animated GIFs and other media dedicated to the display of moving images, animation is also heavily used for video games, motion graphics and special effects.
The physical movement of image parts through simple mechanics, for instance in the moving images of magic lantern shows, can also be considered animation. Mechanical animation of actual robotic devices is known as animatronics.
Animators are artists who specialize in creating animation.
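As a small hands-on illustration of the frame-by-frame principle and of the animated GIF format mentioned above, the following Python sketch draws 24 slightly different frames and saves them as a looping GIF; it assumes the third-party Pillow imaging library, which is not named in the text and is used here only for convenience:

    # Requires the Pillow imaging library (pip install Pillow).
    from PIL import Image, ImageDraw

    frames = []
    for i in range(24):                                   # 24 slightly different drawings
        frame = Image.new("RGB", (160, 90), "white")
        draw = ImageDraw.Draw(frame)
        x = 10 + i * 5                                    # the ball moves a little each frame
        draw.ellipse((x, 35, x + 20, 55), fill="red")
        frames.append(frame)

    # Playing the frames back rapidly creates the illusion of motion:
    # about 24 frames per second means roughly 42 ms per frame.
    frames[0].save(
        "moving_dot.gif",
        save_all=True,
        append_images=frames[1:],
        duration=42,        # milliseconds per frame
        loop=0,             # loop forever
    )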
Click on any of the following blue hyperlinks for more about Animation Technology:
- History
- Techniques
- Traditional animation
  - Full animation
  - Limited animation
  - Rotoscoping
  - Live-action/animation
- Stop motion animation
- Computer animation
  - 2D animation
  - 3D animation
  - 3D terms
- Mechanical animation
- Other animation styles, techniques, and approaches
- Production
- Criticism
- Animation and Human Rights
- Awards
- See also:
- 12 basic principles of animation
- Animated war film
- Animation department
- Animation software
- Architectural animation
- Avar (animation variable)
- Independent animation
- International Animated Film Association
- International Tournée of Animation
- List of motion picture topics
- Model sheet
- Motion graphic design
- Society for Animation Studies
- Tradigital art
- Wire-frame model
- The making of an 8-minute cartoon short
- Importance of animation and its utilization in varied industries
- "Animando", a 12-minute film demonstrating 10 different animation techniques (and teaching how to use them).
- 19 types of animation techniques and styles
Image File Formats
YouTube Video: how to pick the correct file format for images
Image file formats are standardized means of organizing and storing digital images. Image files are composed of digital data in one of these formats that can be rasterized for use on a computer display or printer.
An image file format may store data in uncompressed, compressed, or vector formats. Once rasterized, an image becomes a grid of pixels, each of which has a number of bits to designate its color equal to the color depth of the device displaying it.
Image File Sizes:
The size of raster image files is positively correlated with the resolution and image size (number of pixels) and the color depth (bits per pixel). Images can be compressed in various ways, however.
A compression algorithm stores either an exact representation or an approximation of the original image in a smaller number of bytes that can be expanded back to its uncompressed form with a corresponding decompression algorithm. Images with the same number of pixels and color depth can have very different compressed file sizes.
Considering exactly the same compression, number of pixels, and color depth for two images, different graphical complexity of the original images may also result in very different file sizes after compression due to the nature of compression algorithms. With some compression formats, images that are less complex may result in smaller compressed file sizes.
This characteristic sometimes results in a smaller file size for some lossless formats than lossy formats. For example, graphically simple images (i.e. images with large continuous regions like line art or animation sequences) may be losslessly compressed into a GIF or PNG format and result in a smaller file size than a lossy JPEG format.
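This effect can be seen with any general-purpose lossless compressor. The sketch below uses Python's standard zlib module purely for illustration (it is not an image codec), comparing a flat, single-color buffer against a noise-like buffer of the same size.

    # Illustrative sketch: a graphically "simple" buffer of identical pixel values
    # shrinks to around a kilobyte, while a noise-like buffer of the same size
    # barely compresses at all.
    import os
    import zlib

    simple = bytes([200]) * 921_600          # flat region, like a large area of one color
    noisy = os.urandom(921_600)              # random data, the worst case for lossless coding

    print(len(zlib.compress(simple)))        # roughly a kilobyte
    print(len(zlib.compress(noisy)))         # close to the original 900 kB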
Vector images, unlike raster images, can be any dimension independent of file size. File size increases only with the addition of more vectors.
For example, a 640 * 480 pixel image with 24-bit color would occupy almost a megabyte of space:
640 * 480 * 24 = 7,372,800 bits = 921,600 bytes = 900 kB
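The same arithmetic, written out as a short calculation:

    width, height, bits_per_pixel = 640, 480, 24

    bits = width * height * bits_per_pixel      # 7,372,800 bits
    total_bytes = bits // 8                     # 921,600 bytes
    print(total_bytes / 1024, "kB")             # 900.0 kB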
Image File Compression:
There are two types of image file compression algorithms: lossless and lossy.
Lossless compression algorithms reduce file size while preserving a perfect copy of the original uncompressed image. Lossless compression generally, but not always, results in larger files than lossy compression. Lossless compression should be used to avoid accumulating stages of re-compression when editing images.
Lossy compression algorithms preserve a representation of the original uncompressed image that may appear to be a perfect copy, but it is not a perfect copy. Often lossy compression is able to achieve smaller file sizes than lossless compression. Most lossy compression algorithms allow for variable compression that trades image quality for file size.
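A sketch of the difference in practice, again assuming the Pillow library (the file names and quality setting are illustrative only):

    from PIL import Image

    img = Image.open("original.png")            # any source image

    img.save("copy_lossless.png")               # PNG: lossless, every pixel preserved
    img.save("copy_lossy.jpg", quality=60)      # JPEG: lossy; lower quality gives a smaller file

    # Repeatedly re-saving the JPEG accumulates loss, which is why edits should be
    # kept in a lossless format until the final export.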
Major graphic file formats:
See also: Comparison of graphics file formats § Technical details
Click on any of the following blue hyperlinks for more about Image File Formats:
Video including Film and Video Technology
YouTube Video: How to become a Videographer - Videographer tips
Click here for a List of Film and Video Technologies
Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media.
Video was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) systems which were later replaced by flat panel displays of several types.
Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcast, magnetic tape, optical discs, computer files, and network streaming.
History.
See also: History of television
Video technology was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) television systems, but several new technologies for video display devices have since been invented. Video was originally exclusively a live technology. Charles Ginsburg led an Ampex research team that developed one of the first practical video tape recorders (VTRs). In 1951, the first video tape recorder captured live images from television cameras by converting the camera's electrical impulses and saving the information onto magnetic video tape.
Video recorders were sold for US $50,000 in 1956, and videotapes cost US $300 per one-hour reel. However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes into the consumer market.
The use of digital techniques in video created digital video, which allows higher quality and, eventually, much lower cost than earlier analog technology.
After the invention of the DVD in 1997 and Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted.
Advances in computer technology allow even inexpensive personal computers and smartphones to capture, store, edit and transmit digital video, further reducing the cost of video production and allowing program-makers and broadcasters to move to tapeless production.
The advent of digital broadcasting and the subsequent digital television transition is in the process of relegating analog video to the status of a legacy technology in most parts of the world.
As of 2015, with the increasing use of high-resolution video cameras with improved dynamic range and color gamuts, and high-dynamic-range digital intermediate data formats with improved color depth, modern digital video technology is converging with digital film technology.
Characteristics of video streams:
Number of frames per second
Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa etc.) specify 25 frame/s, while NTSC standards (USA, Canada, Japan, etc.) specify 29.97 frame/s.
Film is shot at the slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second.
Interlaced vs progressive:
Video can be interlaced or progressive. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image.
Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second. Interlacing retains detail while requiring lower bandwidth compared to progressive scanning.
In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively, and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines.
Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display.
NTSC, PAL and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing. For example, PAL video format is often described as 576i50, where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.
When displaying a natively interlaced signal on a progressive scan device, overall spatial resolution is degraded by simple line doubling, and artifacts such as flickering or "comb" effects appear in moving parts of the image unless special signal processing eliminates them.
A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD or satellite source on a progressive scan device such as an LCD television, digital video projector or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.
Aspect ratio:
Aspect ratio describes the dimensions of video screens and video picture elements. All popular video formats are rectilinear, and so can be described by a ratio between width and height. The screen aspect ratio of a traditional television screen is 4:3, or about 1.33:1. High definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1.
Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard, and the corresponding anamorphic widescreen formats.
Therefore, a 720 by 480 pixel NTSC DV image displays with the 4:3 aspect ratio (the traditional television standard) if the pixels are thin, and displays at the 16:9 aspect ratio (the anamorphic widescreen format) if the pixels are fat.
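The relationship can be expressed as a one-line formula: display aspect ratio = (stored width / stored height) x pixel aspect ratio. A short sketch with the commonly quoted NTSC DV pixel aspect ratios (10/11 for "thin" pixels and 40/33 for "fat" pixels, used here as illustrative figures):

    def display_aspect(stored_w, stored_h, pixel_aspect_ratio):
        # Shape of the picture as shown, given the stored pixel grid and pixel shape.
        return (stored_w / stored_h) * pixel_aspect_ratio

    print(display_aspect(720, 480, 10 / 11))    # ~1.36, displayed as roughly 4:3
    print(display_aspect(720, 480, 40 / 33))    # ~1.82, displayed as roughly 16:9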
The popularity of viewing video on mobile phones has led to the growth of vertical video. Mary Meeker, a partner at Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, highlighted the growth of vertical video viewing in her 2015 Internet Trends Report – growing from 5% of video viewing in 2010 to 29% in 2015.
Vertical video ads like Snapchat’s are watched in their entirety nine times more often than landscape video ads. The format was rapidly taken up by leading social platforms and media publishers such as Mashable. In October 2015, the video platform Grabyo launched technology to help video publishers adapt horizontal 16:9 video into mobile formats such as vertical and square.
Color space and bits per pixel
The color model describes the video color representation. YIQ was used in NTSC television. It corresponds closely to the YUV scheme used in NTSC and PAL television and the YDbDr scheme used by SECAM television.
The number of distinct colors a pixel can represent depends on the number of bits per pixel (bpp). A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, 4:2:0/4:1:1).
Because the human eye is less sensitive to details in color than in brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block and that same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed; it reduces the number of distinct points at which the color changes.
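A minimal sketch of 4:2:0 subsampling using NumPy (assumed here purely for illustration): the luma plane is kept at full resolution while each chroma plane is averaged over 2x2 blocks, leaving the total data at half the size of unsubsampled 4:4:4.

    import numpy as np

    h, w = 480, 640
    y = np.random.randint(0, 256, (h, w), dtype=np.uint8)     # luma, full resolution
    cb = np.random.randint(0, 256, (h, w), dtype=np.uint8)    # chroma planes, full resolution
    cr = np.random.randint(0, 256, (h, w), dtype=np.uint8)

    # Average each 2x2 block, keeping one chroma value per block (4:2:0).
    cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    full = y.size + cb.size + cr.size
    subsampled = y.size + cb_sub.size + cr_sub.size
    print(subsampled / full)                                  # 0.5: half the data of 4:4:4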
Video quality
Video quality can be measured with formal metrics like PSNR or with subjective video quality using expert observation.
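A sketch of the PSNR metric mentioned above (NumPy is assumed; the synthetic frames are placeholders): it is the ratio, in decibels, between the maximum possible pixel value and the mean squared error between a reference frame and a processed frame.

    import numpy as np

    def psnr(reference, processed, max_value=255.0):
        mse = np.mean((reference.astype(np.float64) - processed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")                       # identical frames
        return 10.0 * np.log10((max_value ** 2) / mse)

    ref = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    noisy = np.clip(ref.astype(np.int16) + np.random.randint(-5, 6, ref.shape), 0, 255)
    print(psnr(ref, noisy))                           # roughly 38 dB for this small noise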
The subjective video quality of a video processing system is evaluated as follows:
- Choose the video sequences (the SRC) to use for testing.
- Choose the settings of the system to evaluate (the HRC).
- Choose a test method for how to present video sequences to experts and to collect their ratings.
- Invite a sufficient number of experts, preferably not fewer than 15.
- Carry out testing.
- Calculate the average marks for each HRC based on the experts' ratings.
Many subjective video quality methods are described in the ITU-T recommendation BT.500.
One standardized method is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying".
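The final step, averaging the experts' marks per configuration, amounts to computing a mean opinion score. A tiny sketch with made-up ratings (the configuration names and numbers are hypothetical):

    # DSIS-style ratings on a 1-5 scale, one list of expert marks per HRC (made-up data).
    ratings = {
        "hrc_low_bitrate": [2, 3, 2, 3, 2],
        "hrc_high_bitrate": [4, 5, 4, 4, 5],
    }

    for hrc, marks in ratings.items():
        print(hrc, sum(marks) / len(marks))           # average mark for each configuration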
Video compression method (digital only)
Main article: Video compression
Uncompressed video delivers maximum quality, but with a very high data rate. A variety of methods are used to compress video streams, with the most effective ones using a Group Of Pictures (GOP) to reduce spatial and temporal redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression.
Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression, including motion compensation and other techniques. The most common modern standards are MPEG-2, used for DVD, Blu-ray and satellite television, and MPEG-4, used for AVCHD, Mobile phones (3GP) and Internet.
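The benefit of inter-frame coding can be sketched without any real codec: store the first frame, then only the difference from the previous frame. Where little changes between frames, the difference is mostly zeros and compresses far better than the frame itself (zlib and NumPy are used here purely as stand-ins).

    import zlib
    import numpy as np

    frame1 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    frame2 = frame1.copy()
    frame2[200:220, 300:340] += 1                     # only a small region changes

    diff = frame2.astype(np.int16) - frame1.astype(np.int16)

    print(len(zlib.compress(frame2.tobytes())))       # the full second frame: barely shrinks
    print(len(zlib.compress(diff.tobytes())))         # the difference: a tiny fraction of that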
Stereoscopic
Stereoscopic video can be created using several different methods:
- Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
- One channel with two overlaid color-coded layers. This left and right layer technique is occasionally used for network broadcast, or recent "anaglyph" releases of 3D movies on DVD. Simple Red/Cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content.
- One channel with alternating left and right frames for the corresponding eye, using LCD shutter glasses that read the frame sync from the VGA Display Data Channel to alternately block the image to each eye, so the appropriate eye sees the correct frame. This method is most common in computer virtual reality applications such as in a Cave Automatic Virtual Environment, but reduces effective video framerate to one-half of normal (for example, from 120 Hz to 60 Hz).
Blu-ray Discs greatly improve the sharpness and detail of the two-color 3D effect in color-coded stereo programs. See articles Stereoscopy and 3-D film.
Formats:
Different layers of video transmission and storage each provide their own set of formats to choose from.
For transmission, there is a physical connector and signal protocol ("video connection standard" below). A given physical link can carry certain "display standards" that specify a particular refresh rate, display resolution, and color space.
Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video compression format, of which a number are available.
Analog video
Analog video is a video signal transferred by an analog signal. An analog color video signal contains luminance (brightness, Y) and chrominance (color, C) information for an analog television image. When combined into one channel, it is called composite video, as is the case, among others, with NTSC, PAL and SECAM.
Analog video may be carried in separate channels, as in two channel S-Video (YC) and multi-channel component video formats. Analog video is used in both consumer and professional television production applications.
Digital video
Digital video signal formats with higher quality have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort, though analog video interfaces are still used and widely available. Various adapters and variants exist.
Click on any of the following hyperlinks for more about Video:
- Transport medium
- Video connectors, cables, and signal standards
- Video display standards
- Digital television
- Analog television
- Computer displays
- Recording formats before video tape
- Analog tape formats
- Digital tape formats
- Optical disc storage formats
- Digital encoding formats
- Standards
- See also:
- General
- Video format
- Video usage
- Video screen recording
- Programmer's Guide to Video Systems: in-depth technical info on 480i, 576i, 1080i, 720p, etc.
- Format Descriptions for Moving Images - www.digitalpreservation.gov
Photography including a Timeline of Photography Technology
YouTube Video: How to Take The Perfect Picture of Yourself
Pictured: Clockwise from Upper Left: Kodak Instamatic 500, Polaroid Land Camera 360 Electronic Flash, Google Pixel XL Smartphone, Sony DSC-W830 20.1-Megapixel Digital Camera
Click here for a Timeline of Photography Technology.
Photography is the science, art, application and practice of creating durable images by recording light or other electromagnetic radiation, either electronically by means of an image sensor, or chemically by means of a light-sensitive material such as photographic film.
Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. With an electronic image sensor, this produces an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.
The result with photographic emulsion is an invisible latent image, which is later chemically "developed" into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.
Photography is employed in many fields of science, manufacturing (e.g., photolithography), and business, as well as its more direct uses for art, film and video production, recreational purposes, hobby, and mass communication.
Click on any of the following hyperlinks for more about the Technology of Photography:
- History
- Photographic techniques
- Modes of production
- Social and cultural implications
- Law
- See also:
- Outline of photography
- Science of photography
- List of photographers
- Image Editing
- Photolab and minilab
- World History of Photography From The History of Art.
- Daguerreotype to Digital: A Brief History of the Photographic Process From the State Library & Archives of Florida.
3D Printing Technology:
YouTube Video: Can a 3D printer make guns?
Pictured below: 3D Printing: Innovation's New Lifeblood
- Five Myths about 3-D Printing, by the Washington Post, August 10, 2018
- 3D Printing, including 3D Printed Firearms
- Applications of 3D Printing
[I added this article ahead of the 3D Printing text that follows because it reflects a recent vision of the ultimate applications of 3D printing, potentially on a much larger scale than originally anticipated.]
Washington Post, August 10, 2018, opinion piece by Richard A. D'Aveni, the Bakala Professor of Strategy at Dartmouth’s Tuck School of Business. He is the author of the forthcoming book “The Pan-Industrial Revolution: How New Manufacturing Titans Will Transform the World.”
Five Myths about 3-D Printing by the Washington Post.
"Like any fast-developing technology, 3-D printing, described more technically as “additive manufacturing,” is susceptible to a variety of misconceptions. While recent debates have revolved around 3-D-printed firearms, most of the practical issues in the field come down to the emergence of new manufacturing techniques. The resulting culture of innovation has led to some persistent myths. Here are five of the most common.
MYTH NO. 1: 3-D printing is slow and expensive:
Early 3-D printing was indeed agonizingly slow, requiring pricey equipment, select materials and tedious trial-and-error fiddling to make improvements. In 2015, Quartz magazine said that 3-D printers are “still slow, inaccurate and generally only print one material at a time. And that’s not going to change any time soon.”
When the stock price of the leading printer manufacturers was free-falling in 2016, Inc. magazine announced that 3-D printing was “dying,” mostly because people were realizing the high cost of printer feedstock.
But a variety of new techniques for additive manufacturing are proving those premises wrong. Desktop Metal’s Single Pass Jetting, HP’s Multi Jet Fusion and Carbon’s Digital Light Synthesis can all make products in minutes, not hours.
Lab tests show that these printers are cost-competitive with conventional manufacturing in the tens or even hundreds of thousands of units. Many of the newest printers also use lower-price commodity materials rather than specially formulated proprietary feedstocks, so the cost is falling rapidly.
MYTH NO. 2: 3-D printers are limited to small products.
By design, 3-D printers are not large. They need an airtight build chamber to function, so most are no larger than a copy machine. Nick Allen, an early 3-D printing evangelist, once said, “Producing anything in bulk that is bigger than your fist seems to be a waste of time.”
TechRepublic warned in 2014 that 3-D printing from “plastic filament can’t make anything too sturdy,” which would further limit the size of printed objects.
But some techniques, such as Big Area Additive Manufacturing, work in the open air and can generate highly resilient pieces. They’ve been used to build products from automobiles to jet fighters. A new method for building involves a roving “printer bot” that gradually adds fast-hardening materials to carry out the construction. This spring, a Dutch company completed a pedestrian bridge using 3-D printing methods.
MYTH NO. 3: 3-D printers produce only low quality products:
As anyone who’s handled a crude 3-D-printed keychain can probably guess, the hardest part of 3-D printing is ensuring that a product looks good. When you print layer upon layer, you don’t get the smooth finish of conventional manufacturing.
“There’s no device that you’re using today that can be 3-D printed to the standard you’re going to accept as the consumer,” said Liam Casey, the chief executive of PCH International, in 2015.
The Additive Manufacturing and 3D Printing Research Group at the University of Nottingham in Britain likewise predicted that high post-printing costs, among other challenges, would help keep 3-D printing from expanding much beyond customized or highly complex parts.
But some new techniques, such as Digital Light Synthesis, can generate a high-quality finish from the start. That’s because they aren’t based on layering. The products are monolithic — they emerge smoothly from a vat of liquid, similar to the reassembled robot in the Terminator movies.
Other printer manufacturers are building automated hybrid systems that combine 3-D-printed products with conventional finishing.
If we think of quality more broadly, additive is likely to improve on conventional products.
That’s because 3-D printing can handle sophisticated internal structures and radical geometries that would be impossible via conventional manufacturing. Boeing is now installing additive support struts in its jets. They’re a good deal lighter than conventional equivalents, but they’re stronger because they have honeycomb structures that couldn’t be made before. Adidas is making running shoes with complex lattices that are firmer and better at shock absorption than conventional shoes.
MYTH NO. 4: 3-D printing will give us artificial organs:
One of the most exciting areas of additive manufacturing is bioprinting. Thousands of people die every year waiting for replacement hearts, kidneys and other organs; if we could generate artificial organs, we could eliminate a leading cause of death in the United States.
We’ve already made major advances with customized 3-D-printed prosthetics and orthodontics, and most hearing aids now come from additive manufacturing. Why not organs? A 2014 CNN article predicted that 3-D-printed organs might soon be a reality, since the machines’ “precise process can reproduce vascular systems required to make organs viable.” Smithsonian magazine likewise announced in 2015 that “Soon, Your Doctor Could Print a Human Organ on Demand.”
But scientists have yet to crack the fundamental problem of creating life. We can build a matrix that will support living tissue, and we can add a kind of “cell ink” from the recipient’s stem cells to create the tissue. But we haven’t been able to generate a microscopic capillary network to feed oxygen to this tissue.
The most promising current work focuses on artificial skin, which is of special interest to cosmetic companies looking for an unlimited supply of skin for testing new products. Skin is the easiest organ to manufacture because it’s relatively stable, but success is several years away at best. Other organs are decades away from reality: Even if we could solve the capillary problem, the cost of each organ might be prohibitive.
MYTH NO. 5: Small-scale users will dominate 3-D printing:
In his best-selling 2012 book, “Makers: The New Industrial Revolution,” Chris Anderson argued that 3-D printing would usher in a decentralized economy of people generating small quantities of products for local use from printers at home or in community workshops.
The 2017 volume “Designing Reality: How to Survive and Thrive in the Third Digital Revolution” similarly portrays a future of self-sufficient cities supplied by community “fab labs.”
In reality, relatively few individuals have bought 3-D printers. Corporations and educational institutions have purchased the majority of them, and that trend is unlikely to change.
Over time, 3-D printing may bring about the opposite of Anderson’s vision: a world where corporate manufacturers, not ordinary civilians, are empowered by the technology. With 3-D printing, companies can make a specialized product one month, then switch to a different kind of product the next if demand falls.
General Electric’s factory in Pune, India, for example, can adjust its output of parts for medical equipment or turbines depending on demand.
As a result, companies will be able to profit from operating in multiple industries. If demand in one industry slows, the firm can shift the unused factory capacity over to making products for higher-demand industries. Eventually, we’re likely to see a new wave of diversification, leading to pan-industrial behemoths that could cover much of the manufacturing economy.
[End of Article]
___________________________________________________________________________
3D Printing
3D printing is any of various processes in which material is joined or solidified under computer control to create a three-dimensional object, with material being added together (such as liquid molecules or powder grains being fused together). 3D printing is used in both rapid prototyping and additive manufacturing (AM).
Objects can be of almost any shape or geometry and typically are produced using digital model data from a 3D model or another electronic data source such as an Additive Manufacturing File (AMF) file (usually in sequential layers).
There are many different technologies, like stereolithography (SLA) or fused deposition modeling (FDM). Thus, unlike material removed from a stock in the conventional machining process, 3D printing or AM builds a three-dimensional object from a computer-aided design (CAD) model or AMF file, usually by successively adding material layer by layer.
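As a rough illustration of "successively adding material layer by layer", the toy sketch below (not a real slicer; the dimensions and G-code-style output are invented for illustration) slices a simple cone into horizontal layers and emits one circular tool path per layer.

    import math

    cone_height_mm = 20.0
    base_radius_mm = 10.0
    layer_height_mm = 0.2

    z = 0.0
    while z < cone_height_mm:
        radius = base_radius_mm * (1.0 - z / cone_height_mm)   # the cone narrows with height
        print(f"; layer at z = {z:.2f} mm")
        for step in range(0, 360, 30):                         # a crude 12-point circular path
            a = math.radians(step)
            print(f"G1 X{radius * math.cos(a):.2f} Y{radius * math.sin(a):.2f} Z{z:.2f}")
        z += layer_height_mm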
The term "3D printing" originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads layer by layer. More recently, the term is being used in popular vernacular to encompass a wider variety of additive manufacturing techniques. United States and global technical standards use the official term additive manufacturing for this broader sense.
The umbrella term additive manufacturing (AM) gained wide currency in the 2000s, inspired by the theme of material being added together (in any of various ways). In contrast, the term subtractive manufacturing appeared as a retronym for the large family of machining processes with material removal as their common theme.
The term 3D printing still referred only to the polymer technologies in most minds, and the term AM was likelier to be used in metalworking and end use part production contexts than among polymer, inkjet, or stereolithography enthusiasts.
By the early 2010s, the terms 3D printing and additive manufacturing evolved senses in which they were alternate umbrella terms for AM technologies, one being used in popular vernacular by consumer-maker communities and the media, and the other used more formally by industrial AM end-use part producers, AM machine manufacturers, and global technical standards organizations.
Until recently, the term 3D printing has been associated with machines low-end in price or in capability. Both terms reflect that the technologies share the theme of material addition or joining throughout a 3D work envelope under automated control.
Peter Zelinski, the editor-in-chief of Additive Manufacturing magazine, pointed out in 2017 that the terms are still often synonymous in casual usage but that some manufacturing industry experts are increasingly making a sense distinction whereby AM comprises 3D printing plus other technologies or other aspects of a manufacturing process.
Other terms that have been used as AM synonyms or hypernyms have included desktop manufacturing, rapid manufacturing (as the logical production-level successor to rapid prototyping), and on-demand manufacturing (which echoes on-demand printing in the 2D sense of printing).
That such application of the adjectives rapid and on-demand to the noun manufacturing was novel in the 2000s reveals the prevailing mental model of the long industrial era in which almost all production manufacturing involved long lead times for laborious tooling development.
Today, the term subtractive has not replaced the term machining, instead complementing it when a term that covers any removal method is needed. Agile tooling is the use of modular means to design tooling that is produced by additive manufacturing or 3D printing methods to enable quick prototyping and responses to tooling and fixture needs.
Agile tooling uses a cost effective and high quality method to quickly respond to customer and market needs, and it can be used in hydro-forming, stamping, injection molding and other manufacturing processes.
Click on any of the following blue hyperlinks for more about 3D Printing:
3D printed firearms:
In 2012, the U.S.-based group Defense Distributed disclosed plans to design a working plastic gun that could be downloaded and reproduced by anybody with a 3D printer.
Defense Distributed has also designed a 3D printable AR-15 type rifle lower receiver (capable of lasting more than 650 rounds) and a variety of magazines, including for the AK-47.
In May 2013, Defense Distributed completed design of the first working blueprint to produce a plastic gun with a 3D printer. The United States Department of State demanded removal of the instructions from the Defense Distributed website, deeming them a violation of the Arms Export Control Act.
In 2015, Defense Distributed founder Cody Wilson sued the United States government on free speech grounds and in 2018 the Department of Justice settled, acknowledging Wilson's right to publish instructions for the production of 3D printed firearms.
In 2013 a Texas company, Solid Concepts, demonstrated a 3D printed version of an M1911 pistol made of metal, using an industrial 3D printer.
Effect on Gun Control:
After Defense Distributed released their plans, questions were raised regarding the effects that 3D printing and widespread consumer-level CNC machining may have on gun control effectiveness.
The U.S. Department of Homeland Security and the Joint Regional Intelligence Center released a memo stating "Significant advances in three-dimensional (3D) printing capabilities, availability of free digital 3D printer files for firearms components, and difficulty regulating file sharing may present public safety risks from unqualified gun seekers who obtain or manufacture 3D printed guns," and that "proposed legislation to ban 3D printing of weapons may deter, but cannot completely prevent their production.
Even if the practice is prohibited by new legislation, online distribution of these digital files will be as difficult to control as any other illegally traded music, movie or software files."
Internationally, where gun controls are generally tighter than in the United States, some commentators have said the impact may be more strongly felt, as alternative firearms are not as easily obtainable.
European officials have noted that producing a 3D printed gun would be illegal under their gun control laws, and that criminals have access to other sources of weapons, but noted that as the technology improved the risks of an effect would increase. Downloads of the plans from the UK, Germany, Spain, and Brazil were heavy.
Attempting to restrict the distribution over the Internet of gun plans has been likened to the futility of preventing the widespread distribution of DeCSS which enabled DVD ripping. After the US government had Defense Distributed take down the plans, they were still widely available via The Pirate Bay and other file sharing sites.
Some US legislators have proposed regulations on 3D printers to prevent their use for printing guns. 3D printing advocates have suggested that such regulations would be futile, could cripple the 3D printing industry, and could infringe on free speech rights.
Legal Status in the United States:
Under the Undetectable Firearms Act any firearm that cannot be detected by a metal detector is illegal to manufacture, so legal designs for firearms such as the Liberator require a metal plate to be inserted into the printed body.
The act had a sunset provision to expire December 9, 2013. Senator Charles Schumer proposed renewing the law, and expanding the type of guns that would be prohibited.
Proposed renewals and expansions of the current Undetectable Firearms Act (H.R. 1474, S. 1149) include provisions to criminalize individual production of firearm receivers and magazines that do not include arbitrary amounts of metal, measures outside the scope of the original UFA and not extended to cover commercial manufacture.
On December 3, 2013, the United States House of Representatives passed the bill To extend the Undetectable Firearms Act of 1988 for 10 years (H.R. 3626; 113th Congress). The bill extended the Act, but did not change any of the law's provisions.
See also:
Applications of 3D Printing:
3D printing has many applications in manufacturing, medicine, architecture, and custom art and design. Some people use 3D printers to create more 3D printers. 3D printing is now used across manufacturing, medical, industrial and sociocultural sectors, which has helped it become a successful commercial technology.
Click on any of the following blue hyperlinks for more about each 3D Application:
Washington Post August 10, 2018 Opinion Piece by By Richard A. D'Aveni : the Bakala professor of strategy at Dartmouth’s Tuck School of Business. He is the author of the forthcoming book “The Pan-Industrial Revolution: How New Manufacturing Titans Will Transform the World.”
Five Myths about 3-D Printing by the Washington Post.
"Like any fast-developing technology, 3-D printing, described more technically as “additive manufacturing,” is susceptible to a variety of misconceptions. While recent debates have revolved around 3-D-printed firearms, most of the practical issues in the field come down to the emergence of new manufacturing techniques. The resulting culture of innovation has led to some persistent myths. Here are five of the most common.
MYTH NO. 1: 3-D printing is slow and expensive:
Early 3-D printing was indeed agonizingly slow, requiring pricey equipment, select materials and tedious trial-and-error fiddling to make improvements. In 2015, Quartz magazine said that 3-D printers are “still slow, inaccurate and generally only print one material at a time. And that’s not going to change any time soon.”
When the stock price of the leading printer manufacturers was free-falling in 2016, Inc. magazine announced that 3-D printing was “dying,” mostly because people were realizing the high cost of printer feedstock.
But a variety of new techniques for additive manufacturing are proving those premises wrong. Desktop Metal’s Single Pass Jetting, HP’s Multi Jet Fusion and Carbon’s Digital Light Synthesis can all make products in minutes, not hours.
Lab tests show that these printers are cost-competitive with conventional manufacturing in the tens or even hundreds of thousands of units. Many of the newest printers also use lower-price commodity materials rather than specially formulated proprietary feedstocks, so the cost is falling rapidly.
MYTH NO. 2: 3-D printers are limited to small products.
By design, 3-D printers are not large. They need an airtight build chamber to function, so most are no larger than a copy machine. Nick Allen, an early 3-D printing evangelist, once said, “Producing anything in bulk that is bigger than your fist seems to be a waste of time.”
TechRepublic warned in 2014 that 3-D printing from “plastic filament can’t make anything too sturdy,” which would further limit the size of printed objects.
But some techniques, such as Big Area Additive Manufacturing , work in the open air and can generate highly resilient pieces. They’ve been used to build products from automobiles to jet fighters. A new method for building involves a roving “printer bot” that gradually adds fast-hardening materials to carry out the construction. This spring, a Dutch company completed a pedestrian bridge using 3-D printing methods.
MYTH NO. 3: 3-D printers produce only low quality products:
As anyone who’s handled a crude 3-D-printed keychain can probably guess, the hardest part of 3-D printing is ensuring that a product looks good. When you print layer upon layer, you don’t get the smooth finish of conventional manufacturing.
“There’s no device that you’re using today that can be 3-D printed to the standard you’re going to accept as the consumer,” said Liam Casey, the chief executive of PCH International, in 2015.
The Additive Manufacturing and 3D Printing Research Group at the University of Nottingham in Britain likewise predicted that high post-printing costs, among other challenges, would help keep 3-D printing from expanding much beyond customized or highly complex parts.
But some new techniques, such as Digital Light Synthesis, can generate a high-quality finish from the start. That’s because they aren’t based on layering. The products are monolithic — they emerge smoothly from a vat of liquid , similar to the reassembled robot in the Terminator movies.
Other printer manufacturers are building automated hybrid systems that combine 3-D-printed products with conventional finishing.
If we think of quality more broadly, additive is likely to improve on conventional products.
That’s because 3-D printing can handle sophisticated internal structures and radical geometries that would be impossible via conventional manufacturing. Boeing is now installing additive support struts in its jets. They’re a good deal lighter than conventional equivalents, but they’re stronger because they have honeycomb structures that couldn’t be made before. Adidas is making running shoes with complex lattices that are firmer and better at shock absorption than conventional shoes.
MYTH NO. 4: 3-D printing will give us artificial organs:
One of the most exciting areas of additive manufacturing is bioprinting. Thousands of people die every year waiting for replacement hearts, kidneys and other organs; if we could generate artificial organs, we could eliminate a leading cause of death in the United States.
We’ve already made major advances with customized 3-D-printed prosthetics and orthodontics, and most hearing aids now come from additive manufacturing. Why not organs? A 2014 CNN article predicted that 3-D-printed organs might soon be a reality, since the machines’ “precise process can reproduce vascular systems required to make organs viable.” Smithsonian magazine likewise announced in 2015 that “Soon, Your Doctor Could Print a Human Organ on Demand.”
But scientists have yet to crack the fundamental problem of creating life. We can build a matrix that will support living tissue, and we can add a kind of “cell ink” from the recipient’s stem cells to create the tissue. But we haven’t been able to generate a microscopic capillary network to feed oxygen to this tissue.
The most promising current work focuses on artificial skin, which is of special interest to cosmetic companies looking for an unlimited supply of skin for testing new products. Skin is the easiest organ to manufacture because it’s relatively stable, but success is several years away at best. Other organs are decades away from reality: Even if we could solve the capillary problem, the cost of each organ might be prohibitive.
MYTH NO. 5: Small-scale users will dominate 3-D printing:
In his best-selling 2012 book, “Makers: The New Industrial Revolution,” Chris Anderson argued that 3-D printing would usher in a decentralized economy of people generating small quantities of products for local use from printers at home or in community workshops.
The 2017 volume “Designing Reality: How to Survive and Thrive in the Third Digital Revolution ” similarly portrays a future of self-sufficient cities supplied by community “fab labs.”
In reality, relatively few individuals have bought 3-D printers. Corporations and educational institutions have purchased the majority of them, and that trend is unlikely to change.
Over time, 3-D printing may bring about the opposite of Anderson’s vision: a world where corporate manufacturers, not ordinary civilians, are empowered by the technology. With 3-D printing, companies can make a specialized product one month, then switch to a different kind of product the next if demand falls.
General Electric’s factory in Pune, India, for example, can adjust its output of parts for medical equipment or turbines depending on demand.
As a result, companies will be able to profit from operating in multiple industries. If demand in one industry slows, the firm can shift the unused factory capacity over to making products for higher-demand industries. Eventually, we’re likely to see a new wave of diversification, leading to pan-industrial behemoths that could cover much of the manufacturing economy.
[End of Article]
___________________________________________________________________________
3D Printing
3D printing is any of various processes in which material is joined or solidified under computer control to create a three-dimensional object, with material being added together (such as liquid molecules or powder grains being fused together). 3D printing is used in both rapid prototyping and additive manufacturing (AM).
Objects can be of almost any shape or geometry and typically are produced using digital model data from a 3D model or another electronic data source such as an Additive Manufacturing File (AMF) file (usually in sequential layers).
There are many different technologies, such as stereolithography (SLA) and fused deposition modeling (FDM). Thus, unlike material being removed from a stock in the conventional machining process, 3D printing or AM builds a three-dimensional object from a computer-aided design (CAD) model or AMF file, usually by successively adding material layer by layer.
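To make the layer-by-layer idea concrete, here is a minimal sketch (not taken from any particular slicer) that emits simplified Marlin-style G-code for a hollow cylinder, one circular perimeter per layer; the layer height, feed rates, and extrusion factor are illustrative assumptions rather than calibrated settings for a real printer.

```python
import math

def cylinder_gcode(radius_mm=10.0, height_mm=5.0, layer_mm=0.2,
                   segments=72, e_per_mm=0.05, feed=1200):
    """Emit simplified Marlin-style G-code: one circular perimeter per layer.

    The extrusion factor (e_per_mm) and feed rates are illustrative only;
    a real slicer derives them from nozzle size, filament diameter, etc.
    """
    lines = ["G21 ; units are millimetres",
             "G90 ; absolute XY positioning",
             "M83 ; relative extrusion"]
    for n in range(1, int(round(height_mm / layer_mm)) + 1):
        z = n * layer_mm
        lines.append(f"G1 Z{z:.2f} F600 ; move up to the next layer")
        prev_x, prev_y = radius_mm, 0.0
        lines.append(f"G0 X{prev_x:.3f} Y{prev_y:.3f} ; travel to perimeter start")
        for s in range(1, segments + 1):
            a = 2 * math.pi * s / segments
            x, y = radius_mm * math.cos(a), radius_mm * math.sin(a)
            dist = math.hypot(x - prev_x, y - prev_y)
            lines.append(f"G1 X{x:.3f} Y{y:.3f} E{dist * e_per_mm:.4f} F{feed}")
            prev_x, prev_y = x, y
    return "\n".join(lines)

if __name__ == "__main__":
    print("\n".join(cylinder_gcode().splitlines()[:8]))  # preview the first commands
```

Real slicers do far more than this (infill, supports, cooling, retraction), but the successive Z increments show the additive principle directly.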
The term "3D printing" originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads layer by layer. More recently, the term is being used in popular vernacular to encompass a wider variety of additive manufacturing techniques. United States and global technical standards use the official term additive manufacturing for this broader sense.
The umbrella term additive manufacturing (AM) gained wide currency in the 2000s, inspired by the theme of material being added together (in any of various ways). In contrast, the term subtractive manufacturing appeared as a retronym for the large family of machining processes with material removal as their common theme.
The term 3D printing still referred only to the polymer technologies in most minds, and the term AM was likelier to be used in metalworking and end use part production contexts than among polymer, inkjet, or stereolithography enthusiasts.
By the early 2010s, the terms 3D printing and additive manufacturing evolved senses in which they were alternate umbrella terms for AM technologies, one being used in popular vernacular by consumer-maker communities and the media, and the other used more formally by industrial AM end-use part producers, AM machine manufacturers, and global technical standards organizations.
Until recently, the term 3D printing has been associated with machines that are low-end in price or capability. Both terms reflect that the technologies share the theme of material addition or joining throughout a 3D work envelope under automated control.
Peter Zelinski, the editor-in-chief of Additive Manufacturing magazine, pointed out in 2017 that the terms are still often synonymous in casual usage but that some manufacturing industry experts are increasingly making a sense distinction whereby AM comprises 3D printing plus other technologies or other aspects of a manufacturing process.
Other terms that have been used as AM synonyms or hypernyms have included desktop manufacturing, rapid manufacturing (as the logical production-level successor to rapid prototyping), and on-demand manufacturing (which echoes on-demand printing in the 2D sense of printing).
That such application of the adjectives rapid and on-demand to the noun manufacturing was novel in the 2000s reveals the prevailing mental model of the long industrial era in which almost all production manufacturing involved long lead times for laborious tooling development.
Today, the term subtractive has not replaced the term machining, instead complementing it when a term that covers any removal method is needed.
Agile tooling is the use of modular means to design tooling that is produced by additive manufacturing or 3D printing methods to enable quick prototyping and responses to tooling and fixture needs.
Agile tooling uses a cost effective and high quality method to quickly respond to customer and market needs, and it can be used in hydro-forming, stamping, injection molding and other manufacturing processes.
Click on any of the following blue hyperlinks for more about 3D Printing:
- History
- General principles
- Processes and printers
- Applications
- Legal aspects
- Health and safety
- Impact
- See also:
- 3D bioprinting
- 3D Manufacturing Format
- Actuator
- Additive Manufacturing File Format
- AstroPrint
- Cloud manufacturing
- Computer numeric control
- Fusion3
- Laser cutting
- Limbitless Solutions
- List of 3D printer manufacturers
- List of common 3D test models
- List of emerging technologies
- List of notable 3D printed weapons and parts
- Magnetically assisted slip casting
- MakerBot Industries
- Milling center
- Organ-on-a-chip
- Self-replicating machine
- Ultimaker
- Volumetric printing
- 3D printing – Wikipedia book
- 3D Printing White Papers – expert insights on additive manufacturing and 3D printing
3D printed firearms:
In 2012, the U.S.-based group Defense Distributed disclosed plans to design a working plastic gun that could be downloaded and reproduced by anybody with a 3D printer.
Defense Distributed has also designed a 3D printable AR-15 type rifle lower receiver (capable of lasting more than 650 rounds) and a variety of magazines, including for the AK-47.
In May 2013, Defense Distributed completed design of the first working blueprint to produce a plastic gun with a 3D printer. The United States Department of State demanded removal of the instructions from the Defense Distributed website, deeming them a violation of the Arms Export Control Act.
In 2015, Defense Distributed founder Cody Wilson sued the United States government on free speech grounds and in 2018 the Department of Justice settled, acknowledging Wilson's right to publish instructions for the production of 3D printed firearms.
In 2013 a Texas company, Solid Concepts, demonstrated a 3D printed version of an M1911 pistol made of metal, using an industrial 3D printer.
Effect on Gun Control:
After Defense Distributed released their plans, questions were raised regarding the effects that 3D printing and widespread consumer-level CNC machining may have on gun control effectiveness.
The U.S. Department of Homeland Security and the Joint Regional Intelligence Center released a memo stating "Significant advances in three-dimensional (3D) printing capabilities, availability of free digital 3D printer files for firearms components, and difficulty regulating file sharing may present public safety risks from unqualified gun seekers who obtain or manufacture 3D printed guns," and that "proposed legislation to ban 3D printing of weapons may deter, but cannot completely prevent their production.
Even if the practice is prohibited by new legislation, online distribution of these digital files will be as difficult to control as any other illegally traded music, movie or software files."
Internationally, where gun controls are generally tighter than in the United States, some commentators have said the impact may be more strongly felt, as alternative firearms are not as easily obtainable.
European officials have noted that producing a 3D printed gun would be illegal under their gun control laws, and that criminals have access to other sources of weapons, but noted that as the technology improved the risks of an effect would increase. Downloads of the plans from the UK, Germany, Spain, and Brazil were heavy.
Attempting to restrict the distribution over the Internet of gun plans has been likened to the futility of preventing the widespread distribution of DeCSS which enabled DVD ripping. After the US government had Defense Distributed take down the plans, they were still widely available via The Pirate Bay and other file sharing sites.
Some US legislators have proposed regulations on 3D printers to prevent their use for printing guns. 3D printing advocates have suggested that such regulations would be futile, could cripple the 3D printing industry, and could infringe on free speech rights.
Legal Status in the United States:
Under the Undetectable Firearms Act any firearm that cannot be detected by a metal detector is illegal to manufacture, so legal designs for firearms such as the Liberator require a metal plate to be inserted into the printed body.
The act had a sunset provision to expire December 9, 2013. Senator Charles Schumer proposed renewing the law, and expanding the type of guns that would be prohibited.
Proposed renewals and expansions of the current Undetectable Firearms Act (H.R. 1474, S. 1149) include provisions to criminalize individual production of firearm receivers and magazines that do not include arbitrary amounts of metal, measures outside the scope of the original UFA and not extended to cover commercial manufacture.
On December 3, 2013, the United States House of Representatives passed the bill To extend the Undetectable Firearms Act of 1988 for 10 years (H.R. 3626; 113th Congress). The bill extended the Act, but did not change any of the law's provisions.
See also:
- Defense Distributed
- Ghost gun
- Gun control
- Gun politics in the United States
- Improvised firearm
- List of 3D printed weapons and parts
Applications of 3D Printing:
3D printing has many applications in manufacturing, medicine, architecture, and custom art and design. Some people even use 3D printers to create more 3D printers. 3D printing processes are now used in the manufacturing, medical, industrial and sociocultural sectors, which has helped 3D printing become a successful commercial technology.
Click on any of the following blue hyperlinks for more about each 3D Application:
Microwave Technology including Microwave Burn
Pictured below: New Innovative Microwave Technology for Processing Timber
Microwaves are a form of electromagnetic radiation with wavelengths ranging from about one meter to one millimeter; with frequencies between 300 MHz (1 m) and 300 GHz (1 mm).
Different sources define different frequency ranges as microwaves; the above broad definition includes both UHF and EHF (millimeter wave) bands.
A more common definition in radio engineering is the range between 1 and 100 GHz (wavelengths between 0.3 m and 3 mm). In all cases, microwaves include the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum.
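The band edges quoted above follow from the free-space relation λ = c / f; a quick check (plain Python, values rounded) reproduces them:

```python
C = 299_792_458.0  # speed of light in metres per second

def wavelength_mm(freq_hz: float) -> float:
    """Free-space wavelength in millimetres for a given frequency in hertz."""
    return C / freq_hz * 1000.0

for label, f in [("300 MHz", 300e6), ("1 GHz", 1e9),
                 ("100 GHz", 100e9), ("300 GHz", 300e9)]:
    print(f"{label:>8}: {wavelength_mm(f):7.1f} mm")
# 300 MHz ≈ 999.3 mm (~1 m), 1 GHz ≈ 299.8 mm, 100 GHz ≈ 3.0 mm, 300 GHz ≈ 1.0 mm
```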
Frequencies in the microwave range are often referred to by their IEEE radar band designations: S, C, X, Ku, K, or Ka band, or by similar NATO or EU designations.
The prefix micro- in microwave is not meant to suggest a wavelength in the micrometer range. Rather, it indicates that microwaves are "small" (having shorter wavelengths), compared to the radio waves used prior to microwave technology. The boundaries between far infrared, terahertz radiation, microwaves, and ultra-high-frequency radio waves are fairly arbitrary and are used variously between different fields of study.
Microwaves travel by line-of-sight; unlike lower frequency radio waves they do not diffract around hills, follow the earth's surface as ground waves, or reflect from the ionosphere, so terrestrial microwave communication links are limited by the visual horizon to about 40 miles (64 km). At the high end of the band they are absorbed by gases in the atmosphere, limiting practical communication distances to around a kilometer.
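The roughly 40-mile (64 km) limit is a consequence of antenna height rather than of the waves themselves; a common rule of thumb for the radio horizon under standard atmospheric refraction is d ≈ 4.12 √h, with d in kilometres and h in metres. A small sketch, using two 60 m towers purely as an illustrative assumption:

```python
import math

def radio_horizon_km(height_m: float) -> float:
    """Approximate radio horizon (km) for an antenna height_m metres above ground,
    using the standard-refraction rule of thumb d ≈ 4.12 * sqrt(h)."""
    return 4.12 * math.sqrt(height_m)

def max_line_of_sight_km(h_tx_m: float, h_rx_m: float) -> float:
    """Longest line-of-sight path between two elevated antennas."""
    return radio_horizon_km(h_tx_m) + radio_horizon_km(h_rx_m)

# Two 60 m towers (illustrative heights) give roughly the 64 km quoted above.
print(f"{max_line_of_sight_km(60, 60):.0f} km")  # ≈ 64 km
```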
Microwaves are widely used in modern technology, for example in point-to-point communication links:
- wireless networks,
- microwave radio relay networks,
- radar,
- satellite and spacecraft communication,
- medical diathermy and cancer treatment,
- remote sensing,
- radio astronomy,
- particle accelerators,
- spectroscopy,
- industrial heating,
- collision avoidance systems,
- garage door openers
- and keyless entry systems,
- and for cooking food in microwave ovens.
Click on any of the following blue hyperlinks for more about Microwave Technology:
- Electromagnetic spectrum
- Propagation
- Antennas
- Design and analysis
- Microwave sources
- Microwave uses
- Microwave frequency bands
- Microwave frequency measurement
- Effects on health
- History
- See also:
- Block upconverter (BUC)
- Cosmic microwave background
- Electron cyclotron resonance
- International Microwave Power Institute
- Low-noise block converter (LNB)
- Maser
- Microwave auditory effect
- Microwave cavity
- Microwave chemistry
- Microwave radio relay
- Microwave transmission
- Rain fade
- RF switch matrix
- The Thing (listening device)
- EM Talk, Microwave Engineering Tutorials and Tools
- Millimeter Wave and Microwave Waveguide dimension chart.
Microwave burns are burn injuries caused by thermal effects of microwave radiation absorbed in a living organism.
In comparison with radiation burns caused by ionizing radiation, where the dominant mechanism of tissue damage is internal cell damage caused by free radicals, the primary damage mechanism of microwave radiation is by heat.
Microwave damage can manifest with a delay; pain or signs of skin damage can show some time after microwave exposure.
Click on any of the following blue hyperlinks for more about Microwave Burn:
- Frequency vs depth
- Tissue damage
- Injury cases
- Medical uses
- Perception thresholds
- Other concerns
- Low-level exposure
- Myths
Special Effects in TV, Movies and Other Entertainment Venues including Computer-generated Imagery (CGI)
- YouTube Video: Top Special Effects Software Available 2018
- YouTube Video: How To Get Started in Visual Effects
- YouTube Video: Top 10 Landmark CGI Movie Effects (by WatchMojo)
Special effects (often abbreviated as SFX, SPFX, or simply FX) are illusions or visual tricks used in the film, television, theater, video game and simulator industries to simulate the imagined events in a story or virtual world.
Special effects are traditionally divided into the categories of mechanical effects and optical effects. With the emergence of digital film-making a distinction between special effects and visual effects has grown, with the latter referring to digital post-production while "special effects" referring to mechanical and optical effects.
Mechanical effects (also called practical or physical effects) are usually accomplished during the live-action shooting. This includes the use of mechanized props, scenery, scale models, animatronics, pyrotechnics and atmospheric effects: creating physical wind, rain, fog, snow, clouds, making a car appear to drive by itself and blowing up a building, etc.
Mechanical effects are also often incorporated into set design and makeup. For example, a set may be built with break-away doors or walls to enhance a fight scene, or prosthetic makeup can be used to make an actor look like a non-human creature.
Optical effects (also called photographic effects) are techniques in which images or film frames are created photographically, either "in-camera" using multiple exposure, mattes or the Schüfftan process or in post-production using an optical printer. An optical effect might be used to place actors or sets against a different background.
Since the 1990s, computer-generated imagery (CGI: See next topic below) has come to the forefront of special effects technologies. It gives filmmakers greater control, and allows many effects to be accomplished more safely and convincingly and—as technology improves—at lower costs. As a result, many optical and mechanical effects techniques have been superseded by CGI.
Click on any of the following blue hyperlinks for more about Special Effects:
- Developmental history
- Planning and use
- Live special effects
- Mechanical effects
- Visual special effects techniques
- Notable special effects companies
- Notable special effects directors
- See also:
Computer-generated imagery (CGI) is the application of computer graphics to create or contribute to images in art, printed media, video games, films, television programs, shorts, commercials, videos, and simulators.
The visual scenes may be dynamic or static and may be two-dimensional (2D), though the term "CGI" is most commonly used to refer to 3D computer graphics used for creating scenes or special effects in films and television.
Additionally, the use of 2D CGI is often mistakenly referred to as "traditional animation", most often in the case when dedicated animation software such as Adobe Flash or Toon Boom is not used or the CGI is hand drawn using a tablet and mouse.
The term 'CGI animation' refers to dynamic CGI rendered as a movie. The term virtual world refers to agent-based, interactive environments. Computer graphics software is used to make computer-generated imagery for films, etc.
Availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional-grade films, games, and fine art from their home computers. This has brought about an Internet subculture with its own set of global celebrities, clichés, and technical vocabulary.
The evolution of CGI led to the emergence of virtual cinematography in the 1990s where runs of the simulated camera are not constrained by the laws of physics.
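As a toy illustration of how a static computer-generated image is produced, the following sketch ray-traces a single diffusely lit sphere and writes the frame to a plain-text PPM file; the scene (camera, sphere, light direction, colours) is entirely an assumption made up for the example, not any studio's pipeline.

```python
import math

WIDTH, HEIGHT = 320, 240
SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0   # sphere centre and radius (assumed scene)
LIGHT = (0.5, 0.7, 0.5)                       # direction toward the light (assumed)

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def hit_sphere(origin, direction):
    """Return distance to the sphere along the ray, or None if it misses."""
    oc = tuple(o - c for o, c in zip(origin, SPHERE_C))
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - SPHERE_R ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

light = normalize(LIGHT)
rows = []
for j in range(HEIGHT):
    row = []
    for i in range(WIDTH):
        # Camera at the origin looking down -z; map each pixel to a view-plane direction.
        x = (2 * (i + 0.5) / WIDTH - 1) * (WIDTH / HEIGHT)
        y = 1 - 2 * (j + 0.5) / HEIGHT
        d = normalize((x, y, -1.0))
        t = hit_sphere((0.0, 0.0, 0.0), d)
        if t is None:
            row.append((30, 30, 60))          # background colour
        else:
            p = tuple(t * c for c in d)
            n = normalize(tuple(pc - sc for pc, sc in zip(p, SPHERE_C)))
            shade = max(0.0, sum(nc * lc for nc, lc in zip(n, light)))
            row.append((int(255 * shade), int(100 * shade), int(80 * shade)))
    rows.append(row)

with open("sphere.ppm", "w") as f:            # plain-text PPM image
    f.write(f"P3\n{WIDTH} {HEIGHT}\n255\n")
    for row in rows:
        f.write(" ".join(f"{r} {g} {b}" for r, g, b in row) + "\n")
```

Feature-film CGI layers far more on top of this (geometry pipelines, shading models, compositing), but the principle of computing every pixel from a scene description is the same.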
Click on any of the following blue hyperlinks for more about Computer-Generated Imagery:
- Static images and landscapes
- Architectural scenes
- Anatomical models
- Generating cloth and skin images
- Interactive simulation and visualization
- Computer animation
- Virtual worlds
- In courtrooms
- See also:
- 3D modeling
- Cinema Research Corporation
- Anime Studio
- Animation database
- List of computer-animated films
- Digital image
- Parallel rendering
- Photoshop is the industry standard commercial digital photo editing tool. Its FOSS counterpart is GIMP.
- Poser DIY CGI optimized for soft models
- Ray tracing (graphics)
- Real-time computer graphics
- Shader
- Virtual human
- Virtual Physiological Human
- A Critical History of Computer Graphics and Animation – a course page at Ohio State University that includes all the course materials and extensive supplementary materials (videos, articles, links).
- CG101: A Computer Graphics Industry Reference ISBN 073570046X Unique and personal histories of early computer graphics production, plus a comprehensive foundation of the industry for all reading levels.
- F/X Gods, by Anne Thompson, Wired, February 2005.
- "History Gets A Computer Graphics Make-Over" Tayfun King, Click, BBC World News (2004-11-19)
- NIH Visible Human Gallery
Michael Dell, Founder of Dell Technologies
- YouTube Video: Michael Dell's Top 10 Rules For Success
- YouTube Video: Michael Dell addresses Dell's future | Fortune
- YouTube Video: 15 Things You Didn't Know About Michael S. Dell
Michael Dell on Going Private, Company Management, and the Future of the Computer (Inc. Magazine)
Business Insider sat down with the famed entrepreneur at the World Economic Forum in Davos this week:
"We caught up with Dell CEO Michael Dell on Friday morning at the World Economic Forum in Davos.
We had heard from one of Dell's investors a few days ago that the company was thriving as a private company. Mr. Dell enthusiastically confirmed that.
Dell says the Dell team is energized, reinvigorated, and aligned. He says it is a great relief to be private after 25 years as a public company. PCs aren't dead, he says, and Windows 10 looks cool.
Dell's debt is getting upgraded as analysts realize the company isn't toast. Dell's software, server, and services businesses are growing in double digits. And, no, Google's Chromebooks aren't going to take over the world.
Highlights:
* It is a wonderful relief to be private after 25 years as a public company. The administrative hassles and costs are much less, and you have far greater flexibility. You don't have to react to daily volatility and repricing, and there's less distraction, so you can focus on your business and team.
* Dell went public because the company needed capital--but these days plenty of capital is available in the private market. Dell went public in 1988, when Michael Dell was 23.
Over the next 25 years, the stock produced fantastic returns. But the one-two punch of the financial crisis and concerns about the "death of the PC" poleaxed Dell's stock and caused many people to write the company off for dead.
Dell thought it would be easier to retool the company in the relative quiet of the private market, and he found investors willing to provide all the capital he needed. He understands why red-hot emerging tech companies like Uber, Palantir, Facebook, and Twitter don't go public until they are very mature--they can raise all the capital they need in the private market. There's no reason to subject yourself to the headaches of the public market if you don't have to.
* Dell's new management team is energized and aligned around the new mission--serving mid-market growth companies with comprehensive hardware, software, and services solutions. Some of Dell's old managers did not want to sign up for another tour of duty, Dell says. After going private, Dell replaced these folks with younger, hungrier executives who were excited to take on their bosses' responsibilities.
* Dell's debt, which some doomsayers thought would swamp the company, is now getting upgraded, as analysts realize Dell's future is much brighter than they thought. S&P, for example, recently raised its rating on Dell's debt to just a notch below "investment grade."
* It turns out the PC isn't dead. There are 1.8 billion of them out there, Dell says, and a big percentage of them are more than four years old. Dell's PC business got a bump from the retirement of Windows XP last year. Dell expects there will be another bump from the launch of Microsoft's next version of Windows, Windows 10.
* Windows 10 looks good so far. Microsoft's most recent version of Windows, Windows 8, was a dud. No one wanted to buy a PC to get it. (In fact, many people and companies chose to avoid getting new PCs to avoid it.) Windows 10, in contrast, looks like a positive step. It will most likely cause many older PC owners to upgrade, driving some growth in the PC market.
* Dell isn't just PCs anymore! The company's server, software, and services businesses are doing well, Dell says. Server growth is up double-digits year over year.
* Google's Chromebooks--cheap, stripped-down computers that don't run Windows--are popular in some segments of the market, but they're not going to take over the world. Dell sells some Chromebooks. They're doing well in some market segments, like education. But Michael Dell thinks they may end up looking like the netbook market of a few years ago. They sound great, at first, especially for the low price of $249. But then many buyers find that they don't do what they want or expect them to do.
--This story first appeared on Business Insider.
___________________________________________________________________________
Michael Saul Dell (born February 23, 1965) is an American businessman, investor, philanthropist, and author. He is the Founder and CEO of Dell Technologies (see below), one of the world's largest technology infrastructure companies. He is ranked as the 20th richest person in the world by Forbes, with a net worth of $32.2 billion as of June 2019.
In 2011, his 243.35 million shares of Dell Inc. stock were worth $3.5 billion, giving him 12% ownership of the company. His remaining wealth of roughly $10 billion is invested in other companies and is managed by a firm whose name, MSD Capital, incorporates Dell's initials.
On January 5, 2013 it was announced that Dell had bid to take Dell Inc. private for $24.4 billion in the biggest management buyout since the Great Recession. Dell Inc. officially went private on October 29, 2013. The company once again went public in December 2018.
Click on any of the following blue hyperlinks for more about Michael Dell:
- Early life and education
- Business career
- Penalty
- Accolades
- Affiliations
- Writings
- Wealth and personal life
- See also:
- Media related to Michael Dell at Wikimedia Commons
- Appearances on C-SPAN
Dell Technologies Inc. is an American multinational technology company headquartered in Round Rock, Texas. It was formed as a result of the September 2016 merger of Dell and EMC Corporation (which later became Dell EMC).
Dell's products include personal computers, servers, smartphones, televisions, computer software, computer security and network security, as well as information security services. Dell ranked 35th on the 2018 Fortune 500 rankings of the largest United States corporations by total revenue.
Current operations:
Approximately 50% of the company's revenue is derived in the United States.
Dell operates under 3 divisions as follows:
- Dell Client Solutions Group (48% of fiscal 2019 revenues) – produces desktop PCs, notebooks, tablets, and peripherals, such as monitors, printers, and projectors under the Dell brand name
- Dell EMC Infrastructure Solutions Group (41% of fiscal 2019 revenues) – storage solutions
- VMware (10% of fiscal 2019 revenues) – a publicly traded company focused on virtualization and cloud infrastructure
Dell also owns 5 separate businesses:
Click on any of the following blue hyperlinks for more about Dell Technologies:
- History
- See also:
- Official website
- Business data for Dell Technologies:
Benjamin Franklin (January 17, 1706 [O.S. January 6, 1705] – April 17, 1790) was an American polymath and one of the Founding Fathers of the United States.
Franklin was a leading writer, printer, political philosopher, politician, Freemason, postmaster, scientist, inventor, humorist, civic activist, statesman, and diplomat.
- As a scientist, he was a major figure in the American Enlightenment and the history of physics for his discoveries and theories regarding electricity.
- As an inventor, he is known for the lightning rod, bifocals, and the Franklin stove, among other inventions.
Franklin founded many civic organizations, including the Library Company, Philadelphia's first fire department and the University of Pennsylvania.
Franklin earned the title of "The First American" for his early and indefatigable campaigning for colonial unity, initially as an author and spokesman in London for several colonies.
As the first United States Ambassador to France, he exemplified the emerging American nation. Franklin was foundational in defining the American ethos as a marriage of the practical values of thrift, hard work, education, community spirit, self-governing institutions, and opposition to authoritarianism both political and religious, with the scientific and tolerant values of the Enlightenment.
In the words of historian Henry Steele Commager, "In a Franklin could be merged the virtues of Puritanism without its defects, the illumination of the Enlightenment without its heat." To Walter Isaacson, this makes Franklin "the most accomplished American of his age and the most influential in inventing the type of society America would become."
Franklin became a successful newspaper editor and printer in Philadelphia, the leading city in the colonies, publishing the Pennsylvania Gazette at the age of 23. He became wealthy publishing this and Poor Richard's Almanack, which he authored under the pseudonym "Richard Saunders". After 1767, he was associated with the Pennsylvania Chronicle, a newspaper that was known for its revolutionary sentiments and criticisms of British policies.
Franklin pioneered and was the first president of the Academy and College of Philadelphia, which opened in 1751 and later became the University of Pennsylvania. He organized and was the first secretary of the American Philosophical Society and was elected its president in 1769.
Franklin became a national hero in America as an agent for several colonies when he spearheaded an effort in London to have the Parliament of Great Britain repeal the unpopular Stamp Act. An accomplished diplomat, he was widely admired among the French as American minister to Paris and was a major figure in the development of positive Franco-American relations. His efforts proved vital for the American Revolution in securing shipments of crucial munitions from France.
Franklin was promoted to deputy postmaster-general for the British colonies in 1753, having been Philadelphia postmaster for many years, and this enabled him to set up the first national communications network. During the revolution, he became the first United States Postmaster General. He was active in community affairs and colonial and state politics, as well as national and international affairs.
From 1785 to 1788, he served as governor of Pennsylvania. He initially owned and dealt in slaves but, by the late 1750s, he began arguing against slavery and became an abolitionist.
His life and legacy of scientific and political achievement, and his status as one of America's most influential Founding Fathers, have seen Franklin honored more than two centuries after his death on coinage and the $100 bill, warships, and the names of many towns, counties, educational institutions, and corporations, as well as countless cultural references.
Click on any of the following blue hyperlinks for more about Benjamin Franklin:
- Ancestry
- Early life in Boston
- Philadelphia
- Inventions and scientific inquiries
- Musical endeavors
- Chess
- Public life
- Virtue, religion, and personal beliefs
- Slavery
- Vegetarianism
- Death
- Legacy
- See also:
- Benjamin Franklin in popular culture
- U.S. Constitution, floor leader in Convention
- Fugio Cent, 1787 coin designed by Franklin
- Thomas Birch's newly discovered Franklin letters
- William Goddard (patriot/publisher), apprentice/partner of Franklin
- Franklin's electrostatic machine
- Louis Timothee, apprentice/partner of Franklin
- Elizabeth Timothy, apprentice/partner of Franklin
- James Parker (publisher), apprentice/partner of Franklin
- Benjamin Franklin on postage stamps
- Observations Concerning the Increase of Mankind, Peopling of Countries, etc., by Franklin
- Order (virtue)
- List of richest Americans in history
- List of wealthiest historical figures
- List of slave owners
- List of abolitionist forerunners
- List of opponents of slavery
- Benjamin Franklin and Electrostatics experiments and Franklin's electrical writings from Wright Center for Science Education
- Franklin's impact on medicine – talk by medical historian Dr. Jim Leavesley celebrating the 300th anniversary of Franklin's birth on Ockham's Razor, ABC Radio National – December 2006
- Benjamin Franklin Papers, Kislak Center for Special Collections, Rare Books and Manuscripts, University of Pennsylvania.
- Biographical and guides:
- Special Report: Citizen Ben's Greatest Virtues Time Magazine
- Biography at the Biographical Directory of the United States Congress
- Guide to Benjamin Franklin By a history professor at the University of Illinois.
- Benjamin Franklin: An extraordinary life PBS
- Benjamin Franklin: First American Diplomat, 1776–1785 US State Department
- The Electric Benjamin Franklin ushistory.org
- Benjamin Franklin: A Documentary History by J.A. Leo Lemay
- Online edition of Franklin's personal library
- Chisholm, Hugh, ed. (1911). "Franklin, Benjamin" . Encyclopædia Britannica (11th ed.). Cambridge University Press.
- "Writings of Benjamin Franklin" from C-SPAN's American Writers: A Journey Through History
- Online writings:
- Yale edition of complete works, the standard scholarly edition
- Works by Benjamin Franklin at Project Gutenberg
- Works by or about Benjamin Franklin at Internet Archive
- Works by Benjamin Franklin at LibriVox (public domain audiobooks)
- Online Works by Franklin
- Franklin's Last Will & Testament Transcription.
- Library of Congress web resource: Benjamin Franklin ... In His Own Words
- "A Silence Dogood Sampler" – Selections from Franklin's Silence Dogood writings
- Abridgement of the Book of Common Prayer (1773), by Benjamin Franklin and Francis Dashwood, transcribed by Richard Mammana
- Autobiography:
- The Autobiography of Benjamin Franklin Single page version, UShistory.org
- The Autobiography of Benjamin Franklin from American Studies at the University of Virginia
- The Autobiography of Benjamin Franklin at Project Gutenberg
- The Autobiography of Benjamin Franklin LibriVox recording
- In the arts
- Benjamin Franklin 300 (1706–2006) Official web site of the Benjamin Franklin Tercentenary.
- The Historical Society of Pennsylvania Collection of Benjamin Franklin Papers, including correspondence, government documents, writings and a copy of his will, are available for research use at the Historical Society of Pennsylvania.
- The Benjamin Franklin House Franklin's only surviving residence.
Invention of the telephone by Alexander Graham Bell (and others)
- YouTube Video: History of (Landline) Telephone Technology
- YouTube Video: Ernestine the telephone operator (Lily Tomlin) calls General Motors
- YouTube Video of Alexander Graham Bell: A Great Inventor
As you can see from the figure below, the design of the telephone has changed considerably over its lifetime, reflecting the improvements in technology, materials, components and manufacturing processes.
- Figures 5(a) to (f) show some of the early progress. Figure 5a is a replica of Bell's ‘liquid transmitter’ of 1876 and Figure 5b is a Bell telephone and terminal panel from 1877 showing the adaptation for two-way conversation. Edison's wall telephone (Figure 5c) was developed by 1880 and the classic ‘candlestick’ table top phone (Figure 5d) by 1900. As the technology improved both transmitter and receiver were incorporated into a single handset (Figure 5e), and once automatic exchanges had been invented room had to be found for a dial (Figure 5f, the Strowger automatic dial telephone, 1905).
- The appearance of synthetic plastics, starting with Bakelite in the 1920s, permitted new shapes (Figure 5g, Bakelite handset), and later developments led to colour being used in telephones for the first time (Figure 5h, plastic handset from the 1960s; Figure 5i, Trimphone, 1970s). Dials were gradually superseded by push buttons (Figure 5j, Keyphone, 1972).
- Finally, digitalisation and miniaturisation have challenged designers to fit an increasing number of functions into ever-smaller handsets. Figure 5(k) shows Motorola's MicroTAC personal cellular phone, which was the smallest and lightest on the market in 1989, and Figure 5(l) is Samsung's A800 ‘hinged’ mobile phone of 2004.
The invention of the telephone, although generally credited to Alexander Bell (see below), was the culmination of work done by many individuals, and led to an array of lawsuits relating to the patent claims of several individuals and numerous companies.
Click on any of the following blue hyperlinks for more about the Invention of the Telephone:
- Early development
- Electro-magnetic transmitters and receivers
- Bell's success:
- Improvements to the early telephone
- Controversies
- Memorial to the invention
- See also:
- History of the telephone
- The Telephone Cases, U.S. patent dispute and infringement court cases
- Timeline of the telephone
- Heroes of the Telegraph by John Munro at Project Gutenberg
- Alexander Bell's Experiments – the tuning fork and liquid transmitter.
- Scientific American Supplement No. 520, December 19, 1885
- Telephone Patents
___________________________________________________________________________
Alexander Graham Bell
Alexander Graham Bell (March 3, 1847 – August 2, 1922) was a Scottish-born American inventor, scientist, and engineer who is credited with inventing and patenting the first practical telephone. He also co-founded the American Telephone and Telegraph Company (AT&T) in 1885.
Bell's father, grandfather, and brother had all been associated with work on elocution and speech and both his mother and wife were deaf, profoundly influencing Bell's life's work.
Bell's research on hearing and speech further led him to experiment with hearing devices which eventually culminated in Bell being awarded the first U.S. patent for the telephone, on March 7, 1876. Bell considered his invention an intrusion on his real work as a scientist and refused to have a telephone in his study.
Many other inventions marked Bell's later life, including groundbreaking work in optical telecommunications, hydrofoils, and aeronautics. Although Bell was not one of the 33 founders of the National Geographic Society, he had a strong influence on the magazine while serving as the second president from January 7, 1898, until 1903.
Beyond his scientific work, Bell was an advocate of compulsory sterilization, and served as chairman or president of several eugenics organizations.
Click on any of the following blue hyperlinks for more about Alexander Graham Bell:
- Early life
- Canada
- Work with the deaf
- Continuing experimentation
- The telephone
- Family life
- Later inventions
- Eugenics
- Legacy and honors
- Portrayal in film and television
- Death
- See also:
- Alexander Graham Bell Association for the Deaf and Hard of Hearing
- Alexander Graham Bell National Historic Site
- Bell Boatyard
- Bell Homestead National Historic Site
- Bell Telephone Memorial
- Berliner, Emile
- Bourseul, Charles
- IEEE Alexander Graham Bell Medal
- John Peirce, submitted telephone ideas to Bell
- Manzetti, Innocenzo
- Meucci, Antonio
- Oriental Telephone Company
- People on Scottish banknotes
- Pioneers, a Volunteer Network
- Reis, Philipp
- The Story of Alexander Graham Bell, a 1939 movie of his life
- The Telephone Cases
- Volta Laboratory and Bureau
- William Francis Channing, submitted telephone ideas to Bell
Boston Dynamics, Creator of Intelligent Robots as featured on 60 Minutes
- YouTube Video: Watch Boston Dynamics' Robots Dancing
- YouTube Video: Robots of the future at Boston Dynamics
- YouTube Video: UpTown Spot
Boston Dynamics is an American engineering and robotics design company founded in 1992 as a spin-off from the Massachusetts Institute of Technology. Headquartered in Waltham, Massachusetts, Boston Dynamics has been owned by the Hyundai Motor Group since December 2020.
Boston Dynamics is best known for developing a series of dynamic, highly mobile robots, including BigDog, Spot, Atlas, and Handle. Spot, commercially available since 2019, was the company's first commercial robot; Boston Dynamics has stated its intent to commercialize other robots as well, including Handle.
History:
The company was founded by Marc Raibert, who spun the company off from the Massachusetts Institute of Technology in 1992.
Early in the company's history, it worked with the American Systems Corporation under a contract from the Naval Air Warfare Center Training Systems Division (NAWCTSD) to replace naval training videos for aircraft launch operations with interactive 3D computer simulations featuring characters made with DI-Guy, software for realistic human simulation.
Eventually the company started making physical robots—for example, BigDog was a quadruped robot designed for the U.S. military with funding from Defense Advanced Research Projects Agency (DARPA).
On December 13, 2013, the company was acquired by Google X (later X, a subsidiary of Alphabet Inc.) for an unknown price, where it was managed by Andy Rubin until his departure from Google in 2014. Immediately before the acquisition, Boston Dynamics transferred their DI-Guy software product line to VT MÄK, a simulation software vendor based in Cambridge, Massachusetts.
On June 8, 2017, Alphabet Inc. announced the sale of the company to Japan's SoftBank Group for an undisclosed sum. On April 2, 2019, Boston Dynamics acquired the Silicon Valley startup Kinema Systems.
In December 2020, the Hyundai Motor Group acquired an 80% stake in the company from SoftBank for approximately $880 million. SoftBank Group retains about 20% through an affiliate.
Products:
BigDog:
Main article: BigDog
BigDog was a quadrupedal robot created in 2004 by Boston Dynamics, in conjunction with Foster-Miller, the Jet Propulsion Laboratory, and the Harvard University Concord Field Station.
It was funded by DARPA in the hopes that it would be able to serve as a robotic pack mule to accompany soldiers in terrain too rough for vehicles, but the project was shelved after BigDog was deemed too loud to be used in combat.
Instead of wheels, BigDog used four legs for movement, allowing it to move across surfaces that would defeat wheels. Called "the world's most ambitious legged robot", it was designed to carry 340 pounds (150 kg) alongside a soldier at 4 miles per hour (6.4 km/h; 1.8 m/s), traversing rough terrain at inclines up to 35 degrees.
Cheetah:
The Cheetah is a four-footed robot that gallops at 28 miles per hour (45 km/h; 13 m/s), which as of August 2012 was a land speed record for legged robots.
A similar but independently developed robot also known as Cheetah is made by MIT's Biomimetic Robotics Lab, which, by 2014, could jump over obstacles while running. By 2018 the robot was able to climb stairs.
LittleDog:
Released around 2010, LittleDog is a small quadruped robot developed for DARPA by Boston Dynamics for research. Unlike BigDog, which is run by Boston Dynamics, LittleDog is intended as a testbed for other institutions. Boston Dynamics maintains the robots for DARPA as a standard platform.
LittleDog has four legs, each powered by three electric motors. The legs have a large range of motion. The robot is strong enough for climbing and dynamic locomotion gaits. The onboard PC-level computer does sensing, actuator control and communications. LittleDog's sensors measure joint angles, motor currents, body orientation and foot/ground contact.
Control programs access the robot through the Boston Dynamics Robot API. Onboard lithium polymer batteries allow for 30 minutes of continuous operation without recharging.
Wireless communications and data logging support remote operation and data analysis. LittleDog development is funded by the DARPA Information Processing Technology Office.
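To make the sense-act cycle described above concrete, the sketch below shows the general shape of such a control program in Python. The Boston Dynamics Robot API is proprietary and not reproduced here; every class and method name in this sketch is a hypothetical stand-in rather than the actual API.

```python
import time

class LittleDogClient:
    """Hypothetical stand-in for a robot client; names are illustrative only."""

    def read_sensors(self) -> dict:
        # A real client would return joint angles, motor currents,
        # body orientation, and foot/ground contact flags.
        return {"joint_angles": [0.0] * 12, "foot_contact": [True] * 4}

    def send_joint_targets(self, targets: list) -> None:
        # A real client would command the 12 electric motors (3 per leg).
        pass

def control_loop(robot: LittleDogClient, hz: float = 100.0, steps: int = 1000) -> None:
    """Fixed-rate loop: read sensors, compute joint targets, command motors."""
    period = 1.0 / hz
    for _ in range(steps):
        state = robot.read_sensors()
        targets = list(state["joint_angles"])   # trivial "hold current pose" policy
        robot.send_joint_targets(targets)
        time.sleep(period)

control_loop(LittleDogClient(), steps=10)       # short demo run
```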
PETMAN:
PETMAN (Protection Ensemble Test Mannequin) is a bipedal device constructed for testing chemical protection suits. It is the first anthropomorphic robot that moves dynamically like a real person.
LS3:
Main article: Legged Squad Support System
Legged Squad Support System (LS3), also known as AlphaDog, is a militarized version of BigDog. It is ruggedized for military use, with the ability to operate in hot, cold, wet, and dirty environments.
Atlas:
Main article: Atlas (robot)
The Agile Anthropomorphic Robot "Atlas" is a 6-foot (183 cm) bipedal humanoid robot, based on Boston Dynamics' earlier PETMAN humanoid robot, and designed for a variety of search and rescue tasks.
In February 2016 Boston Dynamics published a YouTube video entitled "Atlas, The Next Generation" showing a new humanoid robot about 5' 9" tall (175 cm, about a head shorter than the original DRC Atlas). In the video, the robot is shown performing a number of tasks that would have been difficult or impossible for the previous generation of humanoid robots.
A video posted to the Boston Dynamics channel of YouTube dated October 11, 2018, titled "Parkour Atlas", shows the robot easily running up 2' high steps onto a platform.
Atlas is shown in a September 2019 YouTube video doing "More Parkour".
Spot:
On June 23, 2016, Boston Dynamics revealed Spot, a four-legged, canine-inspired robot that weighs only 25 kg (55 lb), making it lighter than the company's other robots.
In February 2018, a promotional video of the Spot using its forward claw to open a door for another robot reached #1 on YouTube, with over 2 million views. A later video the same month showed Spot persisting in attempting to open the door in the face of human interference.
Viewers perceived the robot as "creepy" and "reminiscent of all kinds of sci-fi robots that wouldn't give up in their missions to seek and destroy".
On May 11, 2018, Boston Dynamics CEO Marc Raibert announced at the 2018 TechCrunch Robotics Session that Spot was in pre-production and being prepared for commercial availability in 2019. On its website, Boston Dynamics highlights that Spot is the "quietest robot [they] have built."
The company said it planned to work with contract manufacturers to build the first 100 Spots for commercial purposes later that year, and to scale up production with the goal of selling Spot in 2019.
However, in September 2019, journalists were informed that the robots would not be sold but leased to selected business partners. In November 2019, the Massachusetts State Police became the first law enforcement agency to use Spot, deploying it as a "robot cop" and with the unit's bomb squad.
Since January 23, 2020, Spot's SDK has been available to anyone via GitHub, allowing programmers to develop custom applications for Spot that can be used across different industries. On June 16, 2020, Boston Dynamics made Spot available for the general public to purchase at a price of US$74,500.
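As a rough illustration of what a custom Spot application looks like, here is a minimal sketch modeled on the SDK's publicly documented "hello_spot" example. It assumes the Python SDK is installed (pip install bosdyn-client), that the robot's hostname and operator credentials are available, and that an E-Stop endpoint has already been configured; module paths and method names are recalled from memory and should be checked against the current SDK documentation.

```python
import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import RobotCommandClient, blocking_stand

def stand_spot(hostname: str, username: str, password: str) -> None:
    """Connect to a Spot robot and command it to stand (sketch only)."""
    sdk = bosdyn.client.create_standard_sdk("SpotStandExample")
    robot = sdk.create_robot(hostname)        # robot hostname or IP address
    robot.authenticate(username, password)    # operator credentials
    robot.time_sync.wait_for_sync()           # motion commands require time sync

    # Hold the lease so this client is allowed to command the robot;
    # an E-Stop endpoint must also be registered before any motion command.
    lease_client = robot.ensure_client(LeaseClient.default_service_name)
    with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
        command_client = robot.ensure_client(RobotCommandClient.default_service_name)
        blocking_stand(command_client, timeout_sec=10)  # block until standing

stand_spot("192.168.80.3", "user", "password")  # placeholder values
```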
On June 23, 2020, a lone Spot named "Zeus" was used by SpaceX at its Boca Chica Starship test site to help contain sub-cooled liquid nitrogen and to inspect potentially dangerous areas at and around the launchpad.
On July 9, 2020, a team of Spot robots performed as cheerleaders in the stands at a baseball match between the Fukuoka SoftBank Hawks and the Rakuten Eagles, backed by a team of SoftBank Pepper Robots.
Spot performed inspection tasks on the Skarv floating production storage and offloading vessel in November 2020.
Handle:
Handle is a research robot with two flexible legs on wheels and two "hands" for manipulating or carrying objects. It can stand 6.5 feet (2 m) tall, travel at 9 miles per hour (14 km/h) and jump 4 feet (1.2 m) vertically. It uses electric power to operate various electric and hydraulic actuators, with a range of about 15 miles (25 km) on one battery charge.
Handle uses many of the same dynamics, balance and mobile manipulation principles found in the other robots by Boston Dynamics but, with only about 10 actuated joints, it is significantly less complex.
Stretch:
On March 29, 2021, Boston Dynamics announced Stretch, a robot designed for warehouse automation, via a video on its YouTube channel. The machine can lift objects of up to 50 pounds (23 kg) using a suction-cup array.
In popular culture:
- "Metalhead", a 2017 episode of Black Mirror, features killer-robot dogs resembling, and inspired by, Boston Dynamics robot dogs.
- In June 2019, a parody video went viral across social media in which a robot resembling Atlas was abused before turning on its human attackers. The video turned out to be the work of Corridor Digital, who used the watermark "Bosstown Dynamics" instead of "Boston Dynamics". The video fooled many people, including celebrities such as Joe Rogan, into believing it was real.
- In Heroes of the Storm (2015), a multiplayer video game by Blizzard Entertainment, playable heroes are able to move quickly through the battleground by using a mount called "Project: D.E.R.P.A", which references one of Boston Dynamics' quadrupedal robots.
- The HBO show Silicon Valley has made two prominent references to the company: one episode featured a robotics company called Somerville Dynamics, named after Somerville, a city that neighbors Boston, and the Season 3 premiere featured a real Boston Dynamics Spot robot crossing a street.
See also:
Thomas Edison, "America's Greatest Inventor"
- YouTube Video: Top 10 Inventions by Thomas Edison (That You've Never Heard Of)
- YouTube Video: Thomas Edison: America's Greatest Inventor | Biography Documentary
- YouTube Video: How Thomas Edison Changed The World
Thomas Alva Edison (February 11, 1847 – October 18, 1931) was an American inventor and businessman who has been described as America's greatest inventor. He developed many devices in fields such as electric power generation, mass communication, sound recording, and motion pictures.
These inventions, which include the phonograph, the motion picture camera, and early versions of the electric light bulb, have had a widespread impact on the modern industrialized world. He was one of the first inventors to apply the principles of organized science and teamwork to the process of invention, working with many researchers and employees. He established the first industrial research laboratory.
Edison was raised in the American Midwest; early in his career he worked as a telegraph operator, which inspired some of his earliest inventions. In 1876, he established his first laboratory facility in Menlo Park, New Jersey, where many of his early inventions were developed. He later established a botanic laboratory in Fort Myers, Florida in collaboration with businessmen Henry Ford and Harvey S. Firestone, and a laboratory in West Orange, New Jersey that featured the world's first film studio, the Black Maria.
Edison was a prolific inventor, holding 1,093 US patents in his name, as well as patents in other countries. Edison married twice and fathered six children. He died in 1931 of complications of diabetes.
Click on any of the following blue hyperlinks for more about Thomas Edison:
- Early life
- Early career
- Menlo Park laboratory (1876–1886)
- West Orange and Fort Myers (1886–1931)
- Other inventions and projects
- Final years and death
- Marriages and children
- Views
- Awards
- Tributes
- People who worked for Edison
- See also:
- Edison Pioneers – a group formed in 1918 by employees and other associates of Thomas Edison
- Thomas Alva Edison Birthplace
- Museums:
- Information and media:
- Thomas Edison at the Encyclopædia Britannica
- Thomas Edison on In Our Time at the BBC
- Interview with Thomas Edison in 1931
- The Diary of Thomas Edison
- Works by Thomas Edison at Project Gutenberg
- Works by or about Thomas Edison at Internet Archive
- Edison's patent application for the light bulb at the National Archives.
- Thomas Edison at IMDb
- "January 4, 1903: Edison Fries an Elephant to Prove His Point" – Wired article about Edison's "macabre form of a series of animal electrocutions using AC".
- "The Invention Factory: Thomas Edison's Laboratories" National Park Service (NPS)
- Thomas Edison Personal Manuscripts and Letters
- Edison, His Life and Inventions at Project Gutenberg by Frank Lewis Dyer and Thomas Commerford Martin.
- The short film Story of Thomas Alva Edison is available for free download at the Internet Archive
- Edison Papers Rutgers.
- Edisonian Museum Antique Electrics
- Edison Innovation Foundation – Non-profit foundation supporting the legacy of Thomas Edison.
- Thomas Alva Edison at Find a Grave
- The Illustrious Vagabonds Henry Ford Heritage Association
- "The World's Greatest Inventor" October 1931, Popular Mechanics. Detailed, illustrated article.
- 14-minute "instructional" film with fictional elements, The Boyhood of Thomas Edison (1964), produced by Coronet, published by archive.org
- "Edison's Miracle of Light" PBS – American Experience. Premiered January 2015.
- Newspaper clippings about Thomas Edison in the 20th Century Press Archives of the ZBW
Henry Ford, Inventor of the First Affordable Automobile
- YouTube Video: How the Ford Model T Took Over the World
- YouTube Video: 1927 Ford Model T - Jay Leno's Garage
- YouTube Video: Henry Ford's assembly line turns 100
Henry Ford (July 30, 1863 – April 7, 1947) was an American industrialist and business magnate, founder of the Ford Motor Company, and chief developer of the assembly line technique of mass production.
By creating the first automobile that middle-class Americans could afford, Ford converted the automobile from an expensive curiosity into an accessible conveyance that profoundly impacted the landscape of the 20th century.
Ford's introduction of the Model T automobile revolutionized transportation and American industry. As the Ford Motor Company owner, he became one of the richest and best-known people in the world. He is credited with "Fordism": mass production of inexpensive goods coupled with high wages for workers.
Ford had a global vision, with consumerism as the key to peace. His intense commitment to systematically lowering costs resulted in many technical and business innovations, including a franchise system that put dealerships throughout North America and major cities on six continents. Ford left most of his vast wealth to the Ford Foundation and arranged for his family to permanently control it.
Ford was also widely known for his pacifism during the first years of World War I, and for promoting antisemitic content, including The Protocols of the Elders of Zion, through his newspaper The Dearborn Independent, and the book The International Jew.
Click on any of the following blue hyperlinks for more about Henry Ford:
- Early life
- Marriage and family
- Career
- Ford Motor Company
- Model T
- Model A and Ford's later career
- Labor philosophy
- Ford Airplane Company
- Peace and war
- World War I era
- The coming of World War II and Ford's mental collapse
- Ford Motor Company
- The Dearborn Independent and antisemitism
- International business
- Racing
- Later career and death
- Personal interests
- In popular culture
- Honors and recognition
- See also:
- Outline of Henry Ford
- Detroit, Toledo and Ironton Railroad
- Dodge v. Ford Motor Company
- Edison and Ford Winter Estates
- Ferdinand Porsche
- Ferdinand Verbiest
- Ford family tree
- List of covers of Time magazine (1920s)
- List of wealthiest historical figures
- List of richest Americans in history
- Preston Tucker
- Ransom Olds
- William Benson Mayo
- John Burroughs
- Media related to Henry Ford at Wikimedia Commons
- Quotations related to Henry Ford at Wikiquote
- Works written by or about Henry Ford at Wikisource
- Henry Ford (American industrialist) at the Encyclopædia Britannica
- Full text of My Life and Work from Project Gutenberg
- Timeline
- The Henry Ford Heritage Association
- Henry Ford—an American Experience documentary
- Works by Henry Ford at Project Gutenberg
- Works by or about Henry Ford at Internet Archive
- Works by Henry Ford at LibriVox (public domain audiobooks)
- Newspaper clippings about Henry Ford in the 20th Century Press Archives of the ZBW
People really are giving NFTs as gifts. Results may vary. (Washington Post By Heather Kelly, December 23, 2021 at 8:00 a.m. EST)
and Non-Fungible Tokens (NFTs) (Wikipedia)
- YouTube Video: Four Things to Know About NFTs (Non-Fungible Tokens)
- YouTube Video: Non-Fungible Tokens & The Digital Art World
- YouTube Video: How to Make and Sell an NFT (Crypto Art Tutorial)
Below we cite three articles about NFTs:
The Hype Around NFTs: What Are They? And How Pricey Do They Get?
By Andrew Lisa, Yahoo Finance, April 2, 2021
If you’re still struggling to wrap your head around cryptocurrency like Bitcoin, strap in — it’s about to get worse. There’s a brand new kind of digital money-not-money that’s trending and a brand new acronym to remember: NFT.
NFT stands for “non-fungible tokens.” If that did nothing for you except make you think of mushrooms, you are not alone. NFTs are widely misunderstood or, more commonly, not understood at all — and for good reason. They’re new, they’re unfamiliar and they’re barely on the fringes of the mainstream. With celebrities like Mark Cuban making headlines as NFT investors, however, the general public is starting to catch up to the early adopters. Here’s what you need to know.
First… Fungible?
When it comes to goods and services, “fungible” is a synonym for “interchangeable.” Commodities like gold and oil are fungible, as are currency, stock market shares and bonds. If two people each have a $20 bill — or a barrel of oil, an ounce of gold or a share of Amazon stock — and they trade, neither party gains or loses anything. If the same two people trade cars, diamonds or Fabergé eggs, on the other hand, it will never be an even transaction.
That’s because each individual diamond, car and Fabergé egg is unique and has its own individual value based on variables like quality and condition. In short:
NFTs Are Snowflakes:
Cryptocurrency like Bitcoin is a medium of exchange, just like regular money. Unlike regular money, cryptocurrency is created, distributed and verified via decentralized blockchain without a middleman like a bank or government — but it’s still fungible. Both in the physical and digital spaces, one Bitcoin is the same as the next, just like a dollar.
NFTs are similar to cryptocurrencies in that they’re generated, distributed and verified via blockchain without a bank or other centralized authority. Unlike Bitcoin, however, non-fungible tokens are — as the name implies — non-fungible. Each individual NFT has its own unique value. Each appreciates in value at a different rate, and no two in the world can be swapped for an even trade.
Money and Cryptocurrency Aren’t Enough?
NFTs are digital assets just like Bitcoin, but unlike Bitcoin, each NFT is unique — and that's the whole point. NFTs were created to be distinctive because they're digital representations of other things that are unique.
The purpose of NFTs is to denote the value of things while also protecting their unique, individual authenticity — something that’s not possible with a fungible asset like money or Bitcoin. Before NFTs, these kinds of digital files had essentially no value.
For example, if an artist draws a picture and posts it online and 10,000 people download it, then 10,001 people have it, but nobody owns it. If that same artist “mints” the drawing on a blockchain and turns it into an NFT, the drawing is now verified as original to that artist.
No matter how many times it’s downloaded or duplicated, the picture’s authenticity — and the artist’s ownership — is easily verifiable on a publicly accessible blockchain record and stored safely in the artist’s digital wallet.
In the End, NFTs Are All About Security, Protection and Credibility:
NFTs allow everyone involved to have their cake and eat it, too. With an NFT, the artist from the example still gets to upload and show off the picture. Then, 10,000 people who like the picture can still right-click and download it for free. So, just as before, 10,001 people have the picture, but now, the artist owns it and the 10,000 free downloads are just great publicity.
The same holds true for avatars, memes, images and just about anything else. NFTs provide a universal system of verification and valuation, which allows people like Mark Cuban to safely and securely buy, sell, auction, trade and invest in NFTs just like they would physical art, memorabilia, collectibles and other things that hold unique, non-fungible value.
In short, NFTs are digital tokens that represent the unique value of all kinds of items, both intangible and tangible, while providing verifiable authenticity of ownership and creation.
Bitcoins and dollars, on the other hand, can only buy stuff and sell stuff.
A Few Crazy Expensive NFTs Broke Records:
The following are among the most expensive NFTs of all time. You'll see that some of these have their value measured in ether (ETH), a type of cryptocurrency based on Ethereum, a community blockchain used in NFT transactions. Purchases were made based on a conversion rate of $2,010 per ETH for most of these.
[End of Yahoo Article]
___________________________________________________________________________
People really are giving NFTs as gifts. Results may vary. (Washington Post)
Alex Caton put a lot of thought into his girlfriend’s Christmas present this year.
The 24-year-old found a stunning picture taken by a local photographer of her hometown of Mississauga, 17 miles south of Toronto. In the foreground is her city and in the distance the glittering skyline of Toronto, where the couple lives together now. He thinks of it like her old life looking toward the future, to their new life together.
There is one small catch. The image he bought for around $200 is in the form of an NFT, a one-of-a-kind asset that exists digitally. Caton, a computer engineer, is the one in the relationship who’s most interested in NFTs. He’s aware that even though they talk about NFTs together and took in a real-world NFT gallery show recently, his girlfriend would probably enjoy something more tangible, too. So he’s trying to get an official print of the photo to wrap up, along with a fitness tracker.
“It’s not something I’d want to push onto somebody,” Caton said of the NFT. “I thought it would be a meaningful gift.”
It’s too late to order or find some of this year’s hottest Christmas presents, but there is one buzzy gift that’s still doable (if risky): An NFT. A virtual gift is often a fallback for last-minute shoppers, but it’s also appealing for anyone worried about supply chain issues, the rising prices for physical goods and a rapidly spreading coronavirus variant that makes shopping in person less attractive than usual.
The term NFT stands for non-fungible token, which rarely clears anything up, but they are unique digital assets, like an image or audio recording. Their ownership is stored on the blockchain, a kind of public ledger, and they can double as an investment and a kind of art, albeit one that you admire on a screen. They've taken off in the past year, with an NFT created by an artist named Beeple selling for $69 million at auction.
More recently, Melania Trump was pushing an NFT painting of her eyes, and Tom Brady offered NFTs of his college resume and old cleats.
They combine an age-old enjoyment in collectibles like baseball cards with the rush of gambling. For people who may have stayed away from the more purely monetary world of bitcoin, NFTs can be a more accessible entry point.
Yes, you might be buying a unique digital token stored on the blockchain, but you’re also getting a cartoon of a depressed primate in a cute sailor hat. And once the recipient has one, they might hold onto it indefinitely for the sentimental value, or trade it away (the rare gift where immediately selling it off isn’t always considered rude).
As with any present, your mileage may vary. NFT values can fluctuate and they could end up worth less than you paid. But unlike cryptocurrency, they might always be worth a little something sentimentally. Many families are already all in, and know a virtual gift will be appreciated and even reciprocated. Others hope gifting an NFT will hook their loved ones so it can become a shared passion instead of something one person won’t stop talking about. But there’s no guarantee the person getting it will appreciate the gift and it could backfire, or at least be met with confusion.
There’s the question of how to actually package a gifted NFT. You can simply put it in the recipient’s virtual wallet, but then you miss out on the drama. Usually people give a virtual representation when they can’t get the physical gift on time, like a picture of a back-ordered gadget. Making a real-world representation of an NFT is the reverse — a physical gift that’s a placeholder for the virtual.
You can print out a version to wrap or pop in a nice envelope, like Caton, who is getting a photo for his girlfriend. Kristen Langer is an art teacher and calligrapher who is planning to set up virtual wallets for her niece and nephew as a present.
When you set up the new wallet you get a list of random words to access it as a recovery phrase, so Langer is going to write the words out in calligraphic style. 3-D printing company Itemfarm has seen an increase in requests to make physical versions of the images on NFTs. It involves confirming the person owns the NFT, then often wrestling a 2-D image into a 3-D file, says Itemfarm CEO Alder Riley.
For people who buy and sell NFTs, it’s usually not a casual interest. It’s the kind of hobby that inspires passion and, in some cases, talking about it to obliging loved ones. Perhaps it’s because NFTs are only increasing in value as long as more people buy into the idea. It has been compared to a pyramid scheme, but defenders say it’s no more or less an asset than sneakers, paper money or stocks. For some families, it’s more about being involved in something together than hitting it big.
Mariana Benton has a holiday list of her dream NFTs and at the top is a Cool Cat, one of a line of drawings of cats (she’s not expecting anything from the list, but just in case). Benton wasn’t into NFTs at first, but her husband Alex eventually won her over by showing her the NBA Top Shots NFTs, the league’s digital collectibles. The couple exchanged NFTs for Hanukkah.
“At first I didn’t understand why Alex was spending so much time in this thing,” Mariana Benton said. “Now it’s a whole cool new thing we can talk about.”
For the couple, who live in Los Angeles with their two kids, collecting things was already a family affair. Everyone in the house is into Pokémon cards, and Mariana and Alex collect baseball cards. Now the kids have their own crypto wallets and their 10-year-old daughter is writing about NFTs for a school paper.
“My daughter and I minted our first NFT together. We sat holding hands and clicked the button,” Mariana Benton said proudly.
Getting involved in NFTs from scratch isn’t exactly easy, and neither is giving one as a gift. First there are the technical issues — the recipient needs a wallet to “hold” the NFT, and the giver needs the right cryptocurrency to purchase it. The cost of entry is high, at least a couple hundred dollars, for the NFTs that have the potential to appreciate. There is also special lingo, different subcultures, Twitter accounts to follow and Discord rooms to join.
Alex Benton is also buying his mom an NFT for Christmas, at her request. She follows him on Twitter and wants to be more involved with what he loves, so he’s going to set up a wallet and buy her an NFT.
Unlike a nice scarf, a pair of earrings or a Swedish ax, getting an NFT is either accepting an entire world that you need to learn about, or forgetting about it like a bond your grandparents gave you and not knowing if you’ll ever benefit financially.
When Langer’s husband Josh lost his job earlier in the pandemic and got into NFTs full time, she wasn’t entirely on board.
But he had struggled with anxiety, depression and addiction issues in the past, and she saw how his new interest was pulling him out of it. Eventually she started to participate with some caveats: Kristen Langer has final say over most financial decisions around NFTs, and while they’ve invested some of their savings, it’s not so much that they couldn’t recover from it.
“He has a pattern where he gets just stupid excited about something,” said Kristen Langer, 36. “But I really feel like it’s made us grow closer because it’s something he can teach me about instead of us coming home and complaining about our days.”
For her birthday, Josh Langer got his wife an NFT of the Scissor Sisters song “I Don’t Feel Like Dancin’.”
“It was my anthem in college,” Kristen Langer said. “I don’t know about resell value but this song is about me.”
Emily Cornelius does not want an NFT for Christmas. Her boyfriend, Ian Schenholm, is an avid gamer studying for the bar exam who spends hours researching crypto and NFTs online.
He enjoys telling Cornelius about it all, but she’s made it clear that just because they can talk about it, that doesn’t mean she wants to be as involved.
“I don’t even want to know how to do it. I don’t ask him to get into astrology, I don’t ask him to get into color correction and how that could really enhance photos of himself,” said Cornelius, a comedian in Denver. “I would rather have something that is meaningful to me. I think that’s true of any gift.”
[End of Washington Post Article]
___________________________________________________________________________
Non-fungible token (Wikipedia)
A non-fungible token (NFT) is a unique and non-interchangeable unit of data stored on a blockchain, a form of digital ledger. NFTs can be associated with reproducible digital files such as photos, videos, and audio.
NFTs use a digital ledger to provide a public certificate of authenticity or proof of ownership, but do not restrict the sharing or copying of the underlying digital files. The lack of interchangeability (fungibility) distinguishes NFTs from blockchain cryptocurrencies, such as Bitcoin.
NFTs have drawn criticism with respect to the energy cost and carbon footprint associated with validating blockchain transactions as well as their frequent use in art scams.
Further criticisms challenge the usefulness of establishing proof of ownership in an often extralegal unregulated market.
Description:
An NFT is a unit of data stored on a digital ledger, called a blockchain, which can be sold and traded. The NFT can be associated with a particular digital or physical asset (such as a file or a physical object) and a license to use the asset for a specified purpose.
An NFT (and the associated license to use, copy or display the underlying asset) can be traded and sold on digital markets. The extralegal nature of NFT trading usually results in an informal exchange of ownership over the asset that has no legal basis for enforcement, often conferring little more than use as a status symbol.
NFTs function like cryptographic tokens, but, unlike cryptocurrencies such as Bitcoin or Ethereum, NFTs are not mutually interchangeable, hence not fungible. While all bitcoins are equal, each NFT may represent a different underlying asset and thus may have a different value.
NFTs are created when blockchains string records of cryptographic hashes, each a set of characters identifying a set of data, onto previous records, thereby creating a chain of identifiable data blocks. This cryptographic transaction process ensures the authentication of each digital file by providing a digital signature that is used to track NFT ownership. However, the data links that point to details such as where the art is stored can be affected by link rot.
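As a toy illustration of that chaining idea (not a real blockchain or minting process), the short Python snippet below shows how each record's hash can commit to the previous record's hash, so altering any earlier record changes every hash that follows it.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash, so each block
    commits to everything that came before it (toy illustration only)."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Chain two toy records: a "mint" followed by a later transfer of the same token.
genesis = block_hash({"token_id": 1, "owner": "alice", "uri": "ipfs://example-1"},
                     prev_hash="0" * 64)
second = block_hash({"token_id": 1, "owner": "bob"}, prev_hash=genesis)
print("block 1:", genesis)
print("block 2:", second)
```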
Copyright:
Ownership of an NFT does not inherently grant copyright or intellectual property rights to whatever digital asset the token represents. While someone may sell an NFT representing their work, the buyer will not necessarily receive copyright privileges when ownership of the NFT is changed and so the original owner is allowed to create more NFTs of the same work.
In that sense, an NFT is merely a proof of ownership that is separate from a copyright. According to legal scholar Rebecca Tushnet, "In one sense, the purchaser acquires whatever the art world thinks they have acquired. They definitely do not own the copyright to the underlying work unless it is explicitly transferred."
In practice, NFT purchasers do not generally acquire the copyright of the underlying artwork.
Technology applications:
The unique identity and ownership of an NFT is verifiable via the blockchain ledger. Ownership of the NFT is often associated with a license to use the underlying digital asset, but generally does not confer copyright to the buyer. Some agreements only grant a license for personal, non-commercial use, while other licenses also allow commercial use of the underlying digital asset.
Digital art:
Digital art was an early use case for NFTs, because of the blockchain's ability to assure the unique signature and ownership of NFTs. The digital artwork entitled Everydays: the First 5000 Days, by artist Mike Winkelmann (known professionally as Beeple), sold for US$69.3 million in 2021. This was the third-highest auction price ever achieved for a work by a living artist, after works by Jeff Koons and David Hockney, respectively.
Blockchain technology has also been used to publicly register and authenticate preexisting physical artworks to differentiate them from forgeries and verify their ownership via physical trackers or labels.
Another Beeple piece entitled Crossroad, a 10-second video showing animated pedestrians walking past a figure of Donald Trump, sold for US$6.6 million at Nifty Gateway in March 2021.
Curio Cards, a digital set of 30 unique cards considered to be the first NFT art collectibles on the Ethereum blockchain, sold for $1.2 million at Christie's Post-War to Present auction. The lot included the card "17b", a digital "misprint" (a series of which were made by mistake).
Some NFT collections, including EtherRocks and CryptoPunks are examples of generative art, where many different images can be created by assembling a selection of simple picture components in different combinations.
In March 2021, the blockchain company Injective Protocol bought a $95,000 original screen print entitled "Morons (White)" from English graffiti artist Banksy, and filmed somebody burning it with a cigarette lighter, with the video being minted and sold as an NFT.
The person who destroyed the artwork, who called themselves "Burnt Banksy", described the act as a way to transfer a physical work of art to the NFT space.
In June 2021, Sotheby’s hosted "Natively Digital", the first curated NFT sale at the auction house.
Games:
Main article: Blockchain game
NFTs can be used to represent in-game assets, such as digital plots of land, which are controlled by the user instead of the game developer. NFTs allow assets to be traded on third-party marketplaces without permission from the game developer.
In October 2021, developer Valve banned applications that use blockchain technology or NFTs to exchange value or game artifacts from their Steam platform.
In December 2021, Ubisoft announced Ubisoft Quartz, "an NFT initiative which allows people to buy artificially scarce digital items using cryptocurrency". The announcement drew significant criticism, with a 96% dislike ratio on the YouTube announcement video, which has since been unlisted.
Some Ubisoft developers also raised concerns about the announcement.
Music:
Blockchain and the technology enabling the network have given musicians the opportunity to tokenize and publish their work as non-fungible tokens. As their popularity grew in 2021, NFTs were used by artists and touring musicians to recoup income lost due to the COVID-19 pandemic. In February 2021, NFTs reportedly generated around $25 million within the music industry.
On February 28, 2021, electronic dance musician 3LAU sold a collection of 33 NFTs for a total of $11.7 million to commemorate the three-year anniversary of his Ultraviolet album.
On March 3, 2021, rock band Kings of Leon became the first to announce the release of a new album, When You See Yourself, in the form of an NFT which generated a reported $2 million in sales. Other musicians that have used NFTs include American rapper Lil Pump, visual artist Shepard Fairey in collaboration with record producer Mike Dean, and rapper Eminem.
Film:
In May 2018, 20th Century Fox partnered with Atom Tickets and released limited-edition Deadpool 2 digital posters to promote the film. They were available from OpenSea and the GFT exchange. In March 2021 Adam Benzine's 2015 documentary Claude Lanzmann: Spectres of the Shoah became the first motion picture and documentary film to be auctioned as an NFT.
Other projects in the film industry using NFTs include the announcement that an exclusive NFT artwork collection will be released for Godzilla vs. Kong and director Kevin Smith announcing in April 2021 that his forthcoming horror movie Killroy Was Here would be released as an NFT. The 2021 film Zero Contact, directed by Rick Dugdale and starring Anthony Hopkins, was also released as an NFT.
In April 2021, an NFT associated with the score of the movie Triumph, composed by Gregg Leonard, was minted as the first NFT for a feature film score.
In November 2021, film director Quentin Tarantino released seven NFTs based on uncut scenes of Pulp Fiction. Miramax subsequently filed a lawsuit claiming that their film rights were violated.
Other uses:
A number of internet memes have been associated with NFTs, which were minted and sold by their creators or by their subjects. Examples include Doge, an image of a Shiba Inu dog whose NFT was sold for $4 million in June 2021, as well as Charlie Bit My Finger, Nyan Cat and Disaster Girl.
Some private online communities have been formed around the confirmed ownership of certain NFT releases.
Some virtual worlds, often marketed as metaverses, have incorporated NFTs as a means of trading virtual items and virtual real estate.
Some pornographic works have been sold as NFTs, though hostility from NFT marketplaces towards pornographic material has presented significant drawbacks for creators.
In May 2021, UC Berkeley announced that it would be auctioning NFTs for the patent disclosures for two Nobel Prize-winning inventions: CRISPR-Cas9 gene editing and cancer immunotherapy. The university will continue to own the patents for these inventions, as the NFTs relate only to the university patent disclosure form, an internal form used by the university for researchers to disclose inventions.
Tickets for any type of event have been suggested for sale as NFTs. Such proposals would enable event organizers or performers to garner royalties on resales.
The first credited political protest NFT ("Destruction of Nazi Monument Symbolizing Contemporary Lithuania") was a video filmed by Professor Stanislovas Tomas on April 8, 2019, and minted on March 29, 2021. In the video, Tomas uses a sledgehammer to destroy a state-sponsored Lithuanian plaque located on the Lithuanian Academy of Sciences honoring Nazi war criminal Jonas Noreika.
Standards in blockchains:
Specific token standards have been created to support various blockchain use-cases.
Ethereum was the first blockchain to support NFTs with its ERC-721 standard and is currently the most widely used. Many other blockchains have added or plan to add support for NFTs with their growing popularity.
Ethereum:
ERC-721 was the first standard for representing non-fungible digital assets on the Ethereum blockchain. ERC-721 is an inheritable Solidity smart contract standard, meaning that developers can create new ERC-721-compliant contracts by importing an implementation from the OpenZeppelin library. ERC-721 provides core methods that allow tracking the owner of a unique identifier, as well as a permissioned way for the owner to transfer the asset to others.
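As a hedged sketch of those core methods in practice, the snippet below queries ownerOf on an ERC-721 contract using the web3.py library. The RPC endpoint and contract address are placeholders that must be replaced with real values, and the minimal ABI covers only the one function used here.

```python
from web3 import Web3

# Minimal ABI describing only the ERC-721 ownerOf(uint256) view function.
ERC721_MIN_ABI = [
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "owner", "type": "address"}]},
]

w3 = Web3(Web3.HTTPProvider("https://example-ethereum-rpc.invalid"))  # placeholder endpoint
nft = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder, must be a checksummed address
    abi=ERC721_MIN_ABI,
)
print(nft.functions.ownerOf(1).call())  # returns the owning address for token ID 1
```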
The ERC-1155 standard offers "semi-fungibility", as well as providing a superset of ERC-721 functionality (meaning that an ERC-721 asset could be built using ERC-1155).
Unlike ERC-721, where a unique ID represents a single asset, the unique ID of an ERC-1155 token represents a class of assets, and there is an additional quantity field to represent the amount of the class that a particular wallet holds. Assets of the same class are interchangeable, and the user can transfer any amount of them to others.
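The difference can be sketched with a toy, in-memory model (real balances live in contract storage on chain, not in Python dictionaries): ERC-721 maps each unique token ID to a single owner, while ERC-1155 maps an (owner, class ID) pair to a quantity.

```python
# Toy bookkeeping illustration only; not an on-chain implementation.
erc721_owners = {}      # unique token ID -> owning address
erc1155_balances = {}   # (owner, class ID) -> quantity held

# ERC-721: token 42 is a single, unique asset owned by exactly one address.
erc721_owners[42] = "alice"

# ERC-1155: class 7 is a class of interchangeable assets; Alice holds 10 of them
# and can transfer any quantity to Bob.
erc1155_balances[("alice", 7)] = 10
amount = 3
erc1155_balances[("alice", 7)] -= amount
erc1155_balances[("bob", 7)] = erc1155_balances.get(("bob", 7), 0) + amount

print(erc721_owners)      # {42: 'alice'}
print(erc1155_balances)   # {('alice', 7): 7, ('bob', 7): 3}
```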
Because Ethereum currently has high transaction fees (known as gas fees), layer 2 solutions for Ethereum that also support NFTs have emerged.
Other blockchains:
History:
Early history (2014–2017):
The first known "NFT", Quantum, was created by Kevin McCoy and Anil Dash in May 2014, consisting of a video clip made by McCoy's wife Jennifer. McCoy registered the video on the Namecoin blockchain and sold it to Dash for $4, during a live presentation for the Seven on Seven conference at the New Museum in New York City.
In October 2015, the first NFT project, Etheria, was launched and demonstrated at DEVCON 1, Ethereum's first developer conference, in London, UK, three months after the launch of the Ethereum blockchain. Most of Etheria's 457 purchasable and tradable hexagonal tiles went unsold for more than five years until March 13, 2021, when renewed interest in NFTs sparked a buying frenzy. Within 24 hours, all tiles of the current version and a prior version, each hardcoded to 1 ETH ($0.43 at the time of launch), were sold for a total of $1.4 million.
The term "NFT" only gained currency with the ERC-721 standard, first proposed in 2017 via the Ethereum GitHub, following the launch of various NFT projects that year. These include Curio Cards, CryptoPunks (a project to trade unique cartoon characters, released by the American studio Larva Labs on the Ethereum blockchain) and the Decentraland platform. All three projects were referenced in the original proposal along with rare Pepe trading cards.
Public awareness (Late 2017–2021):
Public awareness of NFTs began with the success of CryptoKitties, an online game where players adopt and trade virtual cats. Soon after release, the project went viral, raising a $12.5 million investment, with some kitties selling for over $100,000 each.
Following its success, CryptoKitties was added to the ERC-721 standard, which was created in January 2018 (and finalized in June) and affirmed the use of the term "NFT" to refer to "non-fungible tokens".
In 2018, Decentraland, a blockchain-based virtual world which first sold its tokens in August 2017, raised $26 million in an initial coin offering, and had a $20 million internal economy as of September 2018. Following CryptoKitties' success, another similar NFT-based online game Axie Infinity was launched in March 2018, which then proceeded to become the most expensive NFT collection in May 2021.
In 2019, Nike patented a system called CryptoKicks that would use NFTs to verify the authenticity of physical sneakers and give a virtual version of the shoe to the customer.
In early 2020, the developer of CryptoKitties, Dapper Labs, released the beta version of NBA TopShot, a project to sell tokenized collectibles of NBA highlights. The project was built on top of Flow, a newer and more efficient blockchain compared to Ethereum. Later that year, the project was released to the public and reported over $230 million in gross sales as of February 28, 2021.
The NFT market experienced rapid growth during 2020, with its value tripling to $250 million. In the first three months of 2021, more than $200 million were spent on NFTs.
NFT buying surge (2021–present):
In the early months of 2021, interest in NFTs increased after a number of high-profile sales. NFT sales in February 2021 included digital art created by the musician Grimes, an NFT of the Nyan Cat meme, and NFTs created by 3LAU to promote his album Ultraviolet.
More publicized NFT sales were made in March 2021, which included an NFT made to promote the Kings of Leon album When You See Yourself, a $69.3 million sale of a digital work from Mike Winkelmann called Everydays: The First 5000 Days, and an NFT made by Twitter founder Jack Dorsey that represented his first tweet.
The speculative market for NFTs has led more investors to trade at greater volumes and rates. The NFT buying surge was called an economic bubble by experts, who also compared it to the Dot-com bubble.
By mid-April 2021, demand appeared to have substantially subsided, causing prices to fall significantly; early buyers were reported to have "done supremely well" by Bloomberg Businessweek.
An NFT of the source code of the World Wide Web, credited to computer scientist Sir Tim Berners-Lee, the Web's inventor, was auctioned in June 2021 by Sotheby's in London and sold for US$5.4 million.
In September 2021, Sotheby's sold a bundle of 101 Bored Ape Yacht Club NFTs for $24.4 million. On October 1, 2021, Christie's auctioned a full set of Curio Cards, plus the "17b" misprint, for ETH393 ($1.3 million at the time) – the first time live bidding at an auction was conducted in Ether.
A Sotheby's sale later that month included a CryptoPunk, various cat-based NFTs, and a rare Pepe, Pepenopoulos (2016), which sold for $3.6 million. This was the first auction hosted on Sotheby's "Metaverse", a platform specifically dedicated to NFT collectors, intended to become a biannual event.
Popular culture:
A comedy skit on the March 27, 2021 episode of Saturday Night Live featured characters explaining NFTs through rap to US Treasury Secretary Janet Yellen, as played by Kate McKinnon.
The Paramount+ television film South Park: Post Covid: The Return of Covid featured an adult version of Butters Stotch in his Dr. Chaos persona tricking people into purchasing NFTs in 2061. Although the film portrays them as a poor investment, he has grown so adept at selling them that he is locked in a mental institution.
Issues and criticisms:
Storage off-chain:
NFTs involving digital art generally do not store the file on the blockchain due to its size.
The token functions in a way more similar to a certificate of ownership, with a web address pointing to the piece of art in question, making the art still subject to link rot.
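A hedged sketch of how that certificate-of-ownership structure can be inspected: the snippet below reads a token's tokenURI (part of the ERC-721 metadata extension) with web3.py and checks whether the referenced address still resolves, which is exactly what fails under link rot. The RPC endpoint and contract address are placeholders, and ipfs:// URIs would first need rewriting to an HTTP gateway URL before they can be fetched.

```python
import requests
from web3 import Web3

# Minimal ABI describing only the ERC-721 metadata tokenURI(uint256) function.
ERC721_METADATA_ABI = [
    {"name": "tokenURI", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "string"}]},
]

w3 = Web3(Web3.HTTPProvider("https://example-ethereum-rpc.invalid"))  # placeholder endpoint
nft = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder contract address
    abi=ERC721_METADATA_ABI,
)

uri = nft.functions.tokenURI(1).call()   # where the token says its metadata lives
try:
    ok = requests.head(uri, timeout=10).status_code < 400
except requests.RequestException:
    ok = False
print(f"{uri} still resolves: {ok}")     # False would suggest link rot
```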
Because NFTs are functionally separate from the underlying artworks, anybody can easily save a copy of an NFT's image, popularly through a right click. NFT supporters disparage this duplication of NFT artwork as a "right-clicker mentality", with one collector comparing the value of a purchased NFT to that of a status symbol "to show off that they can afford to pay that much".
The "right-clicker mentality" phrase spread virally after its introduction, particularly among those that were critical of the NFT marketplace who used the term to flaunt the ability to capture digital art backed by NFT with ease. This criticism was promoted by Australian programmer Geoffrey Huntley who created "The NFT Bay", modeled after The Pirate Bay.
The NFT Bay advertised a torrent file purported to contain 19 terabytes of digital art NFT images. Huntley compared his work to an art project from Pauline Pantsdown, and hoped the site would help educate users on what NFTs are and are not.
Environmental concerns:
NFT purchases and sales are enmeshed in a controversy regarding the high energy usage, and consequent greenhouse gas emissions, associated with blockchain transactions.
A major aspect of this is the proof-of-work protocol required to regulate and verify blockchain transactions on networks such as Ethereum, which consumes a large amount of electricity; estimating the carbon footprint of a given NFT transaction involves a variety of assumptions about the manner in which that transaction is set up on the blockchain, the economic behavior of blockchain miners (and the energy demands of their mining equipment), as well as the amount of renewable energy being used on these networks.
There are also conceptual questions, such as whether the carbon footprint estimate for an NFT purchase should incorporate some portion of the ongoing energy demand of the underlying network, or just the marginal impact of that particular purchase. An analogy that's been described for this is the footprint associated with an additional passenger on a given airline flight.
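To make the two accounting views concrete, here is a small, purely illustrative Python calculation. The network energy figure and transaction count below are invented assumptions, not measurements of any real blockchain; the sketch only contrasts an equal-share estimate with the near-zero marginal estimate implied by the airline-passenger analogy.

```python
# Hypothetical numbers for illustration only; not measurements of any real network.
network_energy_kwh_per_year = 50_000_000_000   # assumed annual electricity use of the whole network (kWh)
transactions_per_year = 400_000_000            # assumed annual transaction count

# "Equal-share" view: attribute to every transaction an equal slice of the network's ongoing demand.
per_tx_share_kwh = network_energy_kwh_per_year / transactions_per_year

# "Marginal" view: one extra transaction barely changes what miners consume,
# much as one extra passenger barely changes a flight's fuel burn.
marginal_kwh = 0.0  # approximately zero under this view

print(f"equal-share estimate: {per_tx_share_kwh:.0f} kWh per transaction")
print(f"marginal estimate:    {marginal_kwh:.0f} kWh per transaction")
```

The gap between the two estimates is the crux of the conceptual question described above: the same purchase can look very costly or nearly free depending on which accounting view is chosen.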
Some more recent NFT technologies use alternative validation protocols, such as proof of stake, that have much less energy usage for a given validation cycle. Other approaches to reducing electricity include the use of off-chain transactions as part of minting an NFT.
A number of NFT art sites are also looking to address these concerns, and some are moving to using technologies and protocols with lower associated footprints. Others now allow the option of buying carbon offsets when making NFT purchases, although the environmental benefits of this have been questioned. In some instances, NFT artists have decided against selling some of their own work to limit carbon emission contributions.
Artist and buyer fees:
Sales platforms charge artists and buyers fees for minting, listing, claiming and secondary sales. Analysis of NFT markets in March 2021, in the immediate aftermath of Beeple's "Everydays: the First 5000 Days" selling for US$69.3 million, found that most NFT artworks were selling for less than $200, with a third selling for less than $100.
Those selling below $100 were paying network usage fees between 72.5 and 157.5 per cent of that amount, meaning that such artists were on average paying more money in fees than they were making in sales.
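As a rough illustration of how fees at those rates can exceed an artist's proceeds, the short sketch below applies the reported 72.5 to 157.5 per cent fee range to a hypothetical $100 sale; the sale price is an assumption chosen only for illustration.

```python
# Illustrative only: a hypothetical $100 NFT sale with network usage fees in the
# 72.5-157.5 per cent range reported above. The sale price is an assumption.
sale_price = 100.00  # hypothetical sale price in USD

for fee_rate in (0.725, 1.575):  # lower and upper bounds of the reported fee range
    fees = sale_price * fee_rate
    net = sale_price - fees
    print(f"fee rate {fee_rate:.1%}: fees ${fees:.2f}, artist nets ${net:.2f}")

# At the upper bound the artist pays more in fees than the work sold for,
# which is how a low-priced sale can amount to a net loss.
```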
Plagiarism and fraud:
There have been examples of "artists having their work copied without permission" and sold as an NFT. After the artist Qing Han died in 2020, her identity was assumed by a fraudster and a number of her works became available for purchase as NFTs. Similarly, a seller posing as Banksy succeeded in selling an NFT supposedly made by the artist for $336,000 in 2021; the seller refunded the money after the case drew media attention.
A process known as "sleepminting" can also allow a fraudster to mint an NFT in an artist's wallet and transfer it back to their own account without the artist becoming aware. This allowed a white hat hacker to mint a fraudulent NFT that had seemingly originated from the wallet of the artist Beeple.
The BBC reported a case of insider trading when an employee of the NFT marketplace OpenSea bought specific NFTs before they were launched, with the prior knowledge they would be promoted on the company's home page. NFT trading is an unregulated market that has no legal recourse for such abuses.
In their announcement of developing NFT support for the graphics editor Photoshop, Adobe proposed creating an InterPlanetary File System database as an alternative means of establishing authenticity for digital works.
See also:
The Hype Around NFTs: What Are They? And How Pricey Do They Get?
By Andrew Lisa, Yahoo Finance, April 2, 2021
If you’re still struggling to wrap your head around cryptocurrency like Bitcoin, strap in — it’s about to get worse. There’s a brand new kind of digital money-not-money that’s trending and a brand new acronym to remember: NFT.
NFT stands for “non-fungible tokens.” If that did nothing for you except make you think of mushrooms, you are not alone. NFTs are widely misunderstood or, more commonly, not understood at all — and for good reason. They’re new, they’re unfamiliar and they’re barely on the fringes of the mainstream. With celebrities like Mark Cuban making headlines as NFT investors, however, the general public is starting to catch up to the early adopters. Here’s what you need to know.
First… Fungible?
When it comes to goods and services, “fungible” is a synonym for “interchangeable.” Commodities like gold and oil are fungible, as are currency, stock market shares and bonds. If two people each have a $20 bill — or a barrel of oil, an ounce of gold or a share of Amazon stock — and they trade, neither party gains or loses anything. If the same two people trade cars, diamonds or Fabergé eggs, on the other hand, it will never be an even transaction.
That’s because each individual diamond, car and Fabergé egg is unique and has its own individual value based on variables like quality and condition. In short:
- Fungible items can be directly exchanged without anything being gained or lost
- Non-fungible items cannot
NFTs Are Snowflakes:
Cryptocurrency like Bitcoin is a medium of exchange, just like regular money. Unlike regular money, cryptocurrency is created, distributed and verified via decentralized blockchain without a middleman like a bank or government — but it’s still fungible. Both in the physical and digital spaces, one Bitcoin is the same as the next, just like a dollar.
NFTs are similar to cryptocurrencies in that they’re generated, distributed and verified via blockchain without a bank or other centralized authority. Unlike Bitcoin, however, non-fungible tokens are — as the name implies — non-fungible. Each individual NFT has its own unique value. Each appreciates in value at a different rate, and no two in the world can be swapped for an even trade.
Money and Cryptocurrency Aren’t Enough?
NFTs are digital assets just like Bitcoin, but unlike Bitcoin, each NFT is unique — and that’s the whole point. NFTs were created to be distinctive because they’re digital representations of other things that are unique, like:
- Artwork
- GIFs
- Avatars
- Memes
- Music albums
- Videos
- Images
The purpose of NFTs is to denote the value of things while also protecting their unique, individual authenticity — something that’s not possible with a fungible asset like money or Bitcoin. Before NFTs, these kinds of digital files had essentially no value.
For example, if an artist draws a picture and posts it online and 10,000 people download it, then 10,001 people have it, but nobody owns it. If that same artist “mints” the drawing on a blockchain and turns it into an NFT, the drawing is now verified as original to that artist.
No matter how many times it’s downloaded or duplicated, the picture’s authenticity — and the artist’s ownership — is easily verifiable on a publicly accessible blockchain record and stored safely in the artist’s digital wallet.
In the End, NFTs Are All About Security, Protection and Credibility:
NFTs allow everyone involved to have their cake and eat it, too. With an NFT, the artist from the example still gets to upload and show off the picture. Then, 10,000 people who like the picture can still right-click and download it for free. So, just as before, 10,001 people have the picture, but now, the artist owns it and the 10,000 free downloads are just great publicity.
The same holds true for avatars, memes, images and just about anything else. NFTs provide a universal system of verification and valuation, which allows people like Mark Cuban to safely and securely buy, sell, auction, trade and invest in NFTs just like they would physical art, memorabilia, collectibles and other things that hold unique, non-fungible value.
In short, NFTs are digital tokens that represent the unique value of all kinds of items, both intangible and tangible, while providing verifiable authenticity of ownership and creation.
Bitcoins and dollars, on the other hand, can only buy stuff and sell stuff.
A Few Crazy Expensive NFTs Broke Records:
The following are among the most expensive NFTs of all time. You'll see that some of these have their value measured in ether (ETH), a type of cryptocurrency based on Ethereum, a community blockchain used in NFT transactions. Purchases were made based on a conversion rate of $2,010 per ETH for most of these:
- Hashmask #9939: On Feb. 3, 2021, Hashmask #9939 sold for 420 ETH worth $844,216.
- CryptoPunk #6487: On Feb. 21, 2021, CryptoPunk #6487 sold for 550 ETH worth $1,105,500.
- CryptoPunk #2890: On Jan. 24, 2021, CryptoPunk #2890 sold for 605 ETH worth $1,216,074.
- CryptoPunk #4156: On Feb. 18, 2021, CryptoPunk #4156 sold for 650 ETH worth $1,306,526.
- CryptoPunk #6965: On Feb. 21, 2021, CryptoPunk #6965 sold for 800 ETH worth $1,608,032.
- Twitter CEO Jack Dorsey’s first tweet: On March 22, 2021, his first tweet sold for 1,630.58 ether. That was equivalent to about $2.9 million based on ether’s price at the time of sale.
- Beeple, Everydays–The First 5000 Days: Sold in March 2021 through Christie’s for $69 million.
[End of Yahoo Article]
___________________________________________________________________________
People really are giving NFTs as gifts. Results may vary. (Washington Post)
Alex Caton put a lot of thought into his girlfriend’s Christmas present this year.
The 24-year-old found a stunning picture taken by a local photographer of her hometown of Mississauga, 17 miles south of Toronto. In the foreground is her city and in the distance the glittering skyline of Toronto, where the couple lives together now. He thinks of it like her old life looking toward the future, to their new life together.
There is one small catch. The image he bought for around $200 is in the form of an NFT, a one-of-a-kind asset that exists digitally. Caton, a computer engineer, is the one in the relationship who’s most interested in NFTs. He’s aware that even though they talk about NFTs together and took in a real-world NFT gallery show recently, his girlfriend would probably enjoy something more tangible, too. So he’s trying to get an official print of the photo to wrap up, along with a fitness tracker.
“It’s not something I’d want to push onto somebody,” Caton said of the NFT. “I thought it would be a meaningful gift.”
It’s too late to order or find some of this year’s hottest Christmas presents, but there is one buzzy gift that’s still doable (if risky): An NFT. A virtual gift is often a fallback for last-minute shoppers, but it’s also appealing for anyone worried about supply chain issues, the rising prices for physical goods and a rapidly spreading coronavirus variant that makes shopping in person less attractive than usual.
The term NFT stands for non-fungible token, which rarely clears anything up, but they are unique digital assets, like an image or audio recording. Their ownership is stored on the blockchain, a kind of public ledger, and they can double as an investment and a kind of art, albeit one that you admire on a screen. They've taken off in the past year, with an NFT created by an artist named Beeple selling for $69 million at auction.
More recently, Melania Trump was pushing an NFT painting of her eyes, and Tom Brady offered NFTs of his college resume and old cleats.
They combine an age-old enjoyment in collectibles like baseball cards with the rush of gambling. For people who may have stayed away from the more purely monetary world of bitcoin, NFTs can be a more accessible entry point.
Yes, you might be buying a unique digital token stored on the blockchain, but you’re also getting a cartoon of a depressed primate in a cute sailor hat. And once the recipient has one, they might hold onto it indefinitely for the sentimental value, or trade it away (the rare gift where immediately selling it off isn’t always considered rude).
As with any present, your mileage may vary. NFT values can fluctuate and they could end up worth less than you paid. But unlike cryptocurrency, they might always be worth a little something sentimentally. Many families are already all in, and know a virtual gift will be appreciated and even reciprocated. Others hope gifting an NFT will hook their loved ones so it can become a shared passion instead of something one person won’t stop talking about. But there’s no guarantee the person getting it will appreciate the gift and it could backfire, or at least be met with confusion.
There’s the question of how to actually package a gifted NFT. You can simply put it in the recipient’s virtual wallet, but then you miss out on the drama. Usually people give a virtual representation when they can’t get the physical gift on time, like a picture of a back-ordered gadget. Making a real-world representation of an NFT is the reverse — a physical gift that’s a placeholder for the virtual.
You can print out a version to wrap or pop in a nice envelope, like Caton, who is getting a photo for his girlfriend. Kristen Langer is an art teacher and calligrapher who is planning to set up virtual wallets for her niece and nephew as a present.
When you set up the new wallet you get a list of random words to access it as a recovery phrase, so Langer is going to write the words out in calligraphic style. 3-D printing company Itemfarm has seen an increase in requests to make physical versions of the images on NFTs. It involves confirming the person owns the NFT, then often wrestling a 2-D image into a 3-D file, says Itemfarm CEO Alder Riley.
For people who buy and sell NFTs, it’s usually not a casual interest. It’s the kind of hobby that inspires passion and, in some cases, talking about it to obliging loved ones. Perhaps it’s because NFTs are only increasing in value as long as more people buy into the idea. It has been compared to a pyramid scheme, but defenders say it’s no more or less an asset than sneakers, paper money or stocks. For some families, it’s more about being involved in something together than hitting it big.
Mariana Benton has a holiday list of her dream NFTs and at the top is a Cool Cat, one of a line of drawings of cats (she’s not expecting anything from the list, but just in case). Benton wasn’t into NFTs at first, but her husband Alex eventually won her over by showing her the NBA Top Shots NFTs, the league’s digital collectibles. The couple exchanged NFTs for Hanukkah.
“At first I didn’t understand why Alex was spending so much time in this thing,” Mariana Benton said. “Now it’s a whole cool new thing we can talk about.”
For the couple, who live in Los Angeles with their two kids, collecting things was already a family affair. Everyone in the house is into Pokémon cards, and Mariana and Alex collect baseball cards. Now the kids have their own crypto wallets and their 10-year-old daughter is writing about NFTs for a school paper.
“My daughter and I minted our first NFT together. We sat holding hands and clicked the button,” Mariana Benton said proudly.
Getting involved in NFTs from scratch isn’t exactly easy, and neither is giving one as a gift. First there are the technical issues — the recipient needs a wallet to “hold” the NFT, and the giver needs the right cryptocurrency to purchase it. The cost of entry is high, at least a couple hundred dollars, for the NFTs that have the potential to appreciate. There is also special lingo, different subcultures, Twitter accounts to follow and Discord rooms to join.
Alex Benton is also buying his mom an NFT for Christmas, at her request. She follows him on Twitter and wants to be more involved with what he loves, so he’s going to set up a wallet and buy her an NFT.
Unlike a nice scarf, a pair of earrings or a Swedish ax, getting an NFT is either accepting an entire world that you need to learn about, or forgetting about it like a bond your grandparents gave you and not knowing if you’ll ever benefit financially.
When Langer’s husband Josh lost his job earlier in the pandemic and got into NFTs full time, she wasn’t entirely on board.
But he had struggled with anxiety, depression and addiction issues in the past, and she saw how his new interest was pulling him out of it. Eventually she started to participate with some caveats: Kristen Langer has final say over most financial decisions around NFTs, and while they’ve invested some of their savings, it’s not so much that they couldn’t recover from it.
“He has a pattern where he gets just stupid excited about something,” said Kristen Langer, 36. “But I really feel like it’s made us grow closer because it’s something he can teach me about instead of us coming home and complaining about our days.”
For her birthday, Josh Langer got his wife an NFT of the Scissor Sisters song “I Don’t Feel Like Dancin’.”
“It was my anthem in college,” Kristen Langer said. “I don’t know about resell value but this song is about me.”
Emily Cornelius does not want an NFT for Christmas. Her boyfriend, Ian Schenholm, is an avid gamer studying for the bar exam who spends hours researching crypto and NFTs online.
He enjoys telling Cornelius about it all, but she’s made it clear that just because they can talk about it, that doesn’t mean she wants to be as involved.
“I don’t even want to know how to do it. I don’t ask him to get into astrology, I don’t ask him to get into color correction and how that could really enhance photos of himself,” said Cornelius, a comedian in Denver. “I would rather have something that is meaningful to me. I think that’s true of any gift.”
[End of Washington Post Article]
___________________________________________________________________________
Non-fungible token (Wikipedia)
A non-fungible token (NFT) is a unique and non-interchangeable unit of data stored on a blockchain, a form of digital ledger. NFTs can be associated with reproducible digital files such as photos, videos, and audio.
NFTs use a digital ledger to provide a public certificate of authenticity or proof of ownership, but do not restrict the sharing or copying of the underlying digital files. The lack of interchangeability (fungibility) distinguishes NFTs from blockchain cryptocurrencies, such as Bitcoin.
NFTs have drawn criticism with respect to the energy cost and carbon footprint associated with validating blockchain transactions as well as their frequent use in art scams.
Further criticisms challenge the usefulness of establishing proof of ownership in an often extralegal unregulated market.
Description:
An NFT is a unit of data stored on a digital ledger, called a blockchain, which can be sold and traded. The NFT can be associated with a particular digital or physical asset (such as a file or a physical object) and a license to use the asset for a specified purpose.
An NFT (and the associated license to use, copy or display the underlying asset) can be traded and sold on digital markets. The extralegal nature of NFT trading usually results in an informal exchange of ownership over the asset that has no legal basis for enforcement, often conferring little more than use as a status symbol.
NFTs function like cryptographic tokens, but, unlike cryptocurrencies such as Bitcoin or Ethereum, NFTs are not mutually interchangeable, hence not fungible. While all bitcoins are equal, each NFT may represent a different underlying asset and thus may have a different value.
NFTs are created when blockchains string records of cryptographic hashes, each a set of characters identifying a set of data, onto previous records, thereby creating a chain of identifiable data blocks. This cryptographic transaction process ensures the authentication of each digital file by providing a digital signature that is used to track NFT ownership. However, data links that point to details such as where the art is stored can be affected by link rot.
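For readers who want a concrete picture of that chaining, the following is a minimal, purely conceptual Python sketch of a ledger in which each record stores the hash of the previous record, so altering any earlier record breaks the chain. It is not a real blockchain or NFT implementation; the record fields, owner names and URL are invented for illustration.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Hash a record deterministically (illustrative stand-in for a block hash)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

# A toy "ledger": each entry points at the hash of the previous entry,
# forming a chain of identifiable data blocks as described above.
ledger = []
previous = "0" * 64  # placeholder for the first ("genesis") link

for event in [
    {"token_id": 1, "owner": "alice", "uri": "https://example.com/art/1.png"},
    {"token_id": 1, "owner": "bob"},   # hypothetical transfer of the same token
]:
    entry = {"prev_hash": previous, "data": event}
    previous = record_hash(entry)
    ledger.append(entry)

# Verifying the chain: recomputing the hashes must reproduce each prev_hash link.
ok = all(ledger[i + 1]["prev_hash"] == record_hash(ledger[i]) for i in range(len(ledger) - 1))
print("chain intact:", ok)
```

Note that only the record, including the link (URI) to the artwork, is chained here; the image itself sits outside the ledger, which is why the link-rot problem mentioned above remains.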
Copyright:
Ownership of an NFT does not inherently grant copyright or intellectual property rights to whatever digital asset the token represents. While someone may sell an NFT representing their work, the buyer will not necessarily receive copyright privileges when ownership of the NFT changes, so the original owner is still allowed to create more NFTs of the same work.
In that sense, an NFT is merely a proof of ownership that is separate from a copyright. According to legal scholar Rebecca Tushnet, "In one sense, the purchaser acquires whatever the art world thinks they have acquired. They definitely do not own the copyright to the underlying work unless it is explicitly transferred."
In practice, NFT purchasers do not generally acquire the copyright of the underlying artwork.
Technology applications:
The unique identity and ownership of an NFT is verifiable via the blockchain ledger. Ownership of the NFT is often associated with a license to use the underlying digital asset, but generally does not confer copyright to the buyer. Some agreements only grant a license for personal, non-commercial use, while other licenses also allow commercial use of the underlying digital asset.
Digital art:
Digital art was an early use case for NFTs, because of the blockchain's ability to assure the unique signature and ownership of NFTs. The digital artwork entitled Everydays: the First 5000 Days, by artist Mike Winkelmann (known professionally as Beeple), sold for US$69.3 million in 2021. This was the third-highest auction price for a work by a living artist, after works by Jeff Koons and David Hockney, respectively.
Blockchain technology has also been used to publicly register and authenticate preexisting physical artworks to differentiate them from forgeries and verify their ownership via physical trackers or labels.
Another Beeple piece entitled Crossroad, a 10-second video showing animated pedestrians walking past a figure of Donald Trump, sold for US$6.6 million at Nifty Gateway in March 2021.
Curio Cards, a digital set of 30 unique cards considered to be the first NFT art collectibles on the Ethereum blockchain, sold for $1.2 million at Christie's Post-War to Present auction. The lot included the card "17b", a digital "misprint" (a series of which were made by mistake).
Some NFT collections, including EtherRocks and CryptoPunks, are examples of generative art, where many different images can be created by assembling a selection of simple picture components in different combinations.
In March 2021, the blockchain company Injective Protocol bought a $95,000 original screen print entitled "Morons (White)" from English graffiti artist Banksy, and filmed somebody burning it with a cigarette lighter, with the video being minted and sold as an NFT.
The person who destroyed the artwork, who called themselves "Burnt Banksy", described the act as a way to transfer a physical work of art to the NFT space.
In June 2021, Sotheby’s hosted "Natively Digital", the first curated NFT sale at the auction house.
Games:
Main article: Blockchain game
NFTs can be used to represent in-game assets, such as digital plots of land, which are controlled by the user instead of the game developer. NFTs allow assets to be traded on third-party marketplaces without permission from the game developer.
In October 2021, developer Valve banned applications that use blockchain technology or NFTs to exchange value or game artifacts from their Steam platform.
In December 2021, Ubisoft announced Ubisoft Quartz, "an NFT initiative which allows people to buy artificially scarce digital items using cryptocurrency". The announcement drew significant criticism, with a 96% dislike ratio on the YouTube announcement video, which has since been unlisted.
Some Ubisoft developers have also raised concerns over the announcement.
Music:
Blockchain technology has given musicians the opportunity to tokenize and publish their work as non-fungible tokens. As their popularity grew in 2021, NFTs were used by artists and touring musicians to recoup income lost as a result of the COVID-19 pandemic. In February 2021, NFTs reportedly generated around $25 million within the music industry.
On February 28, 2021, electronic dance musician 3LAU sold a collection of 33 NFTs for a total of $11.7 million to commemorate the three-year anniversary of his Ultraviolet album.
On March 3, 2021, rock band Kings of Leon became the first to announce the release of a new album, When You See Yourself, in the form of an NFT which generated a reported $2 million in sales. Other musicians that have used NFTs include American rapper Lil Pump, visual artist Shepard Fairey in collaboration with record producer Mike Dean, and rapper Eminem.
Film:
In May 2018, 20th Century Fox partnered with Atom Tickets and released limited-edition Deadpool 2 digital posters to promote the film. They were available from OpenSea and the GFT exchange. In March 2021 Adam Benzine's 2015 documentary Claude Lanzmann: Spectres of the Shoah became the first motion picture and documentary film to be auctioned as an NFT.
Other projects in the film industry using NFTs include the announcement that an exclusive NFT artwork collection will be released for Godzilla vs. Kong and director Kevin Smith announcing in April 2021 that his forthcoming horror movie Killroy Was Here would be released as an NFT. The 2021 film Zero Contact, directed by Rick Dugdale and starring Anthony Hopkins, was also released as an NFT.
In April 2021, an NFT associated with the score of the movie Triumph, composed by Gregg Leonard, was minted as the first NFT for a feature film score.
In November 2021, film director Quentin Tarantino released seven NFTs based on uncut scenes of Pulp Fiction. Miramax subsequently filed a lawsuit claiming that their film rights were violated.
Other uses:
A number of internet memes have been associated with NFTs, which were minted and sold by their creators or by their subjects. Examples include Doge, an image of a Shiba Inu dog whose NFT was sold for $4 million in June 2021, as well as Charlie Bit My Finger, Nyan Cat and Disaster Girl.
Some private online communities have been formed around the confirmed ownership of certain NFT releases.
Some virtual worlds, often marketed as metaverses, have incorporated NFTs as a means of trading virtual items and virtual real estate.
Some pornographic works have been sold as NFTs, though hostility from NFT marketplaces towards pornographic material has presented significant drawbacks for creators.
In May 2021, UC Berkeley announced that it would be auctioning NFTs for the patent disclosures for two Nobel Prize-winning inventions: CRISPR-Cas9 gene editing and cancer immunotherapy. The university will continue to own the patents for these inventions, as the NFTs relate only to the university patent disclosure form, an internal form used by the university for researchers to disclose inventions.
Tickets for any type of event have been suggested for sale as NFTs. Such proposals would enable event organizers or performers to garner royalties on resales.
The first credited political protest NFT ("Destruction of Nazi Monument Symbolizing Contemporary Lithuania") was a video filmed by Professor Stanislovas Tomas on April 8, 2019, and minted on March 29, 2021. In the video, Tomas uses a sledgehammer to destroy a state-sponsored Lithuanian plaque located on the Lithuanian Academy of Sciences honoring Nazi war criminal Jonas Noreika.
Standards in blockchains:
Specific token standards have been created to support various blockchain use-cases.
Ethereum was the first blockchain to support NFTs with its ERC-721 standard and is currently the most widely used. Many other blockchains have added or plan to add support for NFTs with their growing popularity.
Ethereum:
ERC-721 was the first standard for representing non-fungible digital assets on the Ethereum blockchain. ERC-721 is an inheritable Solidity smart contract standard, meaning that developers can create new ERC-721-compliant contracts by importing it from the OpenZeppelin library. ERC-721 provides core methods that allow tracking the owner of a unique identifier, as well as a permissioned way for the owner to transfer the asset to others.
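As a rough mental model of those core methods, here is a minimal in-memory Python sketch in which each unique token ID has exactly one owner and only that owner may transfer it. This is a conceptual illustration only, not Solidity and not the OpenZeppelin implementation; the class and method names are invented, loosely mirroring the standard's ownerOf and transferFrom.

```python
class ToyERC721:
    """Conceptual in-memory model of ERC-721 style ownership tracking.

    Not a smart contract: there is no blockchain, no approvals and no events.
    It only mirrors the two core ideas described above: each unique token ID
    has exactly one owner, and only the current owner may transfer it.
    """

    def __init__(self):
        self._owners = {}  # token_id -> owner address (a plain string here)

    def mint(self, to: str, token_id: int) -> None:
        if token_id in self._owners:
            raise ValueError("token already exists")
        self._owners[token_id] = to

    def owner_of(self, token_id: int) -> str:
        return self._owners[token_id]

    def transfer_from(self, sender: str, to: str, token_id: int) -> None:
        if self._owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer this token")
        self._owners[token_id] = to

# Example usage with made-up addresses:
nft = ToyERC721()
nft.mint("0xALICE", 1)
nft.transfer_from("0xALICE", "0xBOB", 1)
print(nft.owner_of(1))  # 0xBOB
```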
The ERC-1155 standard offers "semi-fungibility", as well as providing a superset of ERC-721 functionality (meaning that an ERC-721 asset could be built using ERC-1155).
Unlike ERC-721, where a unique ID represents a single asset, the unique ID of an ERC-1155 token represents a class of assets, and there is an additional quantity field to indicate how much of the class a particular wallet holds. Assets of the same class are interchangeable, and a user can transfer any amount of them to others.
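The contrast with ERC-721 can be sketched the same way: each token ID now names a class of interchangeable assets, and balances record how many units of that class each wallet holds. Again this is a conceptual Python illustration with invented names, not the standard's actual interface.

```python
from collections import defaultdict

class ToyERC1155:
    """Conceptual model of ERC-1155 style semi-fungible balances.

    Each token ID names a class of interchangeable assets; balances track how
    many units of that class each wallet holds, and any amount can be
    transferred, as described above.
    """

    def __init__(self):
        # balances[token_id][owner] -> quantity held
        self.balances = defaultdict(lambda: defaultdict(int))

    def mint(self, to: str, token_id: int, amount: int) -> None:
        self.balances[token_id][to] += amount

    def balance_of(self, owner: str, token_id: int) -> int:
        return self.balances[token_id][owner]

    def transfer(self, sender: str, to: str, token_id: int, amount: int) -> None:
        if self.balances[token_id][sender] < amount:
            raise ValueError("insufficient balance")
        self.balances[token_id][sender] -= amount
        self.balances[token_id][to] += amount

# Example: token class 7 might represent 100 identical in-game items.
items = ToyERC1155()
items.mint("0xALICE", 7, 100)
items.transfer("0xALICE", "0xBOB", 7, 25)
print(items.balance_of("0xBOB", 7))  # 25
```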
Because Ethereum currently has high transaction fees (known as gas fees), layer 2 solutions for Ethereum have emerged which also support NFTs:
- Immutable X – Immutable X is a layer 2 protocol for Ethereum designed specifically for NFTs, utilizing ZK rollups to eliminate gas fees for transactions.
- Polygon – Formerly known as the Matic Network, Polygon is a proof-of-stake blockchain which is supported by major NFT marketplaces such as OpenSea.
Other blockchains:
- Bitcoin Cash – Bitcoin Cash supports NFTs and powers the Juungle NFT marketplace.
- Cardano – Cardano introduced native tokens that enable the creation of NFTs without smart contracts with its March 2021 update. Cardano NFT marketplaces include CNFT and Theos.
- Flow – The Flow blockchain, which uses a proof of stake consensus model, supports NFTs. CryptoKitties plans to switch from Ethereum to Flow in the future.
- GoChain – GoChain, a blockchain which bills itself as 'eco-friendly', powers the Zeromint NFT marketplace and the VeVe app.
- Solana – The Solana blockchain also supports non-fungible tokens.
- Tezos – Tezos is a blockchain network that operates on proof of stake and supports the sale of NFT art.
History:
Early history (2014–2017):
The first known "NFT", Quantum, was created by Kevin McCoy and Anil Dash in May 2014, consisting of a video clip made by McCoy's wife Jennifer. McCoy registered the video on the Namecoin blockchain and sold it to Dash for $4, during a live presentation for the Seven on Seven conference at the New Museum in New York City.
In October 2015, the first NFT project, Etheria, was launched and demonstrated at DEVCON 1, Ethereum's first developer conference, in London, UK, three months after the launch of the Ethereum blockchain. Most of Etheria's 457 purchasable and tradable hexagonal tiles went unsold for more than five years until March 13, 2021, when renewed interest in NFTs sparked a buying frenzy. Within 24 hours, all tiles of the current version and a prior version, each hardcoded to 1 ETH ($0.43 at the time of launch), were sold for a total of $1.4 million.
The term "NFT" only gained currency with the ERC-721 standard, first proposed in 2017 via the Ethereum GitHub, following the launch of various NFT projects that year. These include Curio Cards, CryptoPunks (a project to trade unique cartoon characters, released by the American studio Larva Labs on the Ethereum blockchain) and the Decentraland platform. All three projects were referenced in the original proposal along with rare Pepe trading cards.
Public awareness (Late 2017–2021):
Public awareness of NFTs began with the success of CryptoKitties, an online game where players adopt and trade virtual cats. Soon after release, the project went viral, raising a $12.5 million investment, with some kitties selling for over $100,000 each.
Following its success, CryptoKitties was added to the ERC-721 standard, which was created in January 2018 (and finalized in June) and affirmed the use of the term "NFT" to refer to "non-fungible tokens".
In 2018, Decentraland, a blockchain-based virtual world which first sold its tokens in August 2017, raised $26 million in an initial coin offering, and had a $20 million internal economy as of September 2018. Following CryptoKitties' success, another similar NFT-based online game Axie Infinity was launched in March 2018, which then proceeded to become the most expensive NFT collection in May 2021.
In 2019, Nike patented a system called CryptoKicks that would use NFTs to verify the authenticity of physical sneakers and give a virtual version of the shoe to the customer.
See also:
Introduction:
Elon Musk, the visionary entrepreneur, engineer, and inventor, is a name that commands respect and curiosity in equal measure. Renowned for his groundbreaking work in the fields of space exploration, electric vehicles, and renewable energy, Musk has undeniably earned the title of a genius. Yet, simultaneously, his actions and statements have raised eyebrows and led some to question his sanity. In this article, we’ll embark on a journey to understand the duality of Elon Musk’s persona as a genius and an idiot, exploring the various facets of his life and career.
A Tale of Contradictions:
Elon Musk’s life story is filled with paradoxes and contradictions. His ability to envision a future that pushes the boundaries of innovation is awe-inspiring, while some of his decisions and statements have caused uproar and disbelief. Let’s explore the multiple dimensions of his genius and idiocy.
The Visionary Innovator:
Elon Musk’s visionary nature is at the core of his genius. He has a unique ability to identify the world’s most pressing problems and provide innovative solutions that challenge conventional wisdom. Here are some examples of his groundbreaking achievements:
1. Tesla Motors: Revolutionizing the Automotive Industry:
Tesla, the electric vehicle company founded by Elon Musk, has revolutionized the automotive industry. By introducing high-performance electric cars with cutting-edge technology, Musk has accelerated the world’s transition to sustainable transportation.
2. SpaceX: Pioneering Space Exploration:
SpaceX, another brainchild of Musk, has redefined space exploration. With successful rocket launches, reusable rockets, and plans for colonizing Mars, Musk’s ambition to make humanity a multi-planetary species is unparalleled.
3. SolarCity: Promoting Renewable Energy:
Musk’s involvement with SolarCity, a solar energy company, showcases his commitment to sustainable energy solutions. By promoting solar power and energy storage, he has championed the cause of environmental conservation.
4. Neuralink: Advancing Brain-Computer Interface:
In his quest to merge the human brain with artificial intelligence, Musk founded Neuralink. This ambitious project aims to develop brain-computer interfaces to enhance human cognition and address neurological disorders.
The Eccentric Entrepreneur:
For all his brilliance, Elon Musk is no stranger to eccentricity, which has often led to public scrutiny and skepticism. His unfiltered tweets, audacious claims, and unconventional behavior have sparked debates about his sanity. Here are some instances where Musk’s actions raised eyebrows:
5. Social Media Controversies:
Elon Musk’s tweets have been the subject of numerous controversies. From sharing unverified information to making impulsive statements, his social media presence has landed him in hot water on multiple occasions.
6. Unconventional Business Practices:
Musk’s management style at Tesla and SpaceX has been described as unorthodox. His hands-on approach and demanding work culture have led to both admiration and criticism.
7. Divergent Ventures:
Elon Musk’s involvement in multiple ventures has sometimes been viewed as stretching himself too thin. Critics argue that spreading his focus across various projects may dilute the impact of his genius.
8. Clashes with Authorities:
From regulatory disputes over Tesla’s autopilot technology to confrontations with the SEC, Musk’s battles with authorities have made headlines and sparked debates about his impulsiveness.
The Enigma Unveiled:
To understand the enigma of Elon Musk, we must recognize that true genius often coexists with eccentricity. His unyielding pursuit of ambitious goals, combined with his idiosyncrasies, has shaped a personality that defies conventional definitions. Musk’s ability to think beyond the imaginable is what makes him a genius, while his impulsive actions may stem from his passion and desire for progress.
Breakthrough of the Year
- YouTube Video: Science’s 2021 Breakthrough of the Year: AI brings protein structures to all
- YouTube Video: Covid 19 Vaccination Is the 2020 Breakthrough of the Year
- YouTube Video: 2019 Breakthrough of the Year
The Breakthrough of the Year is an annual award, made by the AAAS journal Science, an academic journal covering all branches of science, for the most significant development in scientific research.
Originating in 1989 as the Molecule of the Year, and inspired by Time's Person of the Year, it was renamed the Breakthrough of the Year in 1996.
Molecule of the Year:
- 1989 PCR and DNA polymerase
- 1990 the manufacture of synthetic diamonds
- 1991 buckminsterfullerene
- 1992 nitric oxide
- 1993 p53
- 1994 DNA repair enzyme
Breakthrough of the Year:
- 1996: Understanding HIV
- 1997: Dolly the sheep, the first mammal to be cloned from adult cells
- 1998: Accelerating universe
- 1999: Prospective stem-cell therapies
- 2000: Full genome sequencing
- 2001: Nanocircuits or Molecular circuit
- 2002: RNA interference
- 2003: Dark energy
- 2004: Spirit rover landed on Mars
- 2005: Evolution in action
- 2006: Proof of the Poincaré conjecture
- 2007: Human genetic variation
- 2008: Cellular reprogramming
- 2009: Ardipithecus ramidus
- 2010: The first quantum machine
- 2011: HIV treatment as prevention (HPTN 052)
- 2012: Discovery of the Higgs boson
- 2013: Cancer immunotherapy
- 2014: Rosetta comet mission
- 2015: CRISPR genome-editing method
- 2016: First observation of gravitational waves
- 2017: Neutron star merger (GW170817)
- 2018: Single-cell sequencing
- 2019: A black hole made visible
- 2020: COVID-19 vaccine, developed and tested at record speed
- 2021: An AI brings protein structures to all
- 2022: James Webb Space Telescope debut
- 2023: GLP-1 Drugs
- Physics World also has a Breakthrough of the Year award
Graphic Design
- YouTube Video: All Graphic Design Jobs Explained | Design Insights
- YouTube Video: Graphic Design Basics | FREE COURSE
- YouTube Video: GRAPHIC DESIGN MAJOR & CAREER | Life as a Graphic Designer!
Graphic Design
Graphic design is a profession, academic discipline and applied art whose activity consists in projecting visual communications intended to transmit specific messages to social groups, with specific objectives.
Graphic design is an interdisciplinary branch of design and of the fine arts. Its practice involves creativity, innovation and lateral thinking using manual or digital tools, where it is usual to use text and graphics to communicate visually.
The role of the graphic designer in the communication process is that of the encoder or interpreter of the message. They work on the interpretation, ordering, and presentation of visual messages.
Usually, graphic design uses the aesthetics of typography and the compositional arrangement of the text, ornamentation, and imagery to convey ideas, feelings, and attitudes beyond what language alone expresses.
Design work can be based on a customer's demand, a demand that ends up being expressed linguistically, either orally or in writing; that is, graphic design transforms a linguistic message into a graphic manifestation.
Graphic design has, as a field of application, different areas of knowledge focused on any visual communication system. For example, it can be applied in advertising strategies, or it can also be applied in the aviation world or space exploration.
In some countries, graphic design is associated only with the production of sketches and drawings. This is incorrect, since such work is just a small part of the huge range of types and classes of visual communication to which graphic design can be applied.
With origins in Antiquity and the Middle Ages, graphic design as an applied art was initially linked to the rise of printing in Europe in the 15th century and the growth of consumer culture in the Industrial Revolution.
From there it emerged as a distinct profession in the West, closely associated with advertising in the 19th century and its evolution allowed its consolidation in the 20th century.
Given the rapid and massive growth in information exchange today, the demand for experienced designers is greater than ever, particularly because of the development of new technologies and the need to pay attention to human factors beyond the competence of the engineers who develop them.
Terminology:
William Addison Dwiggins is often credited with first using the term "graphic design" in a 1922 article, although the term had already appeared in the 4 July 1908 issue (volume 9, number 27) of Organized Labor, a publication of the Labor Unions of San Francisco, in an article about technical education for printers.
History
Main article: History of graphic design
In both its lengthy history and in the relatively recent explosion of visual communication in the 20th and 21st centuries, the distinction between advertising, art, graphic design and fine art has disappeared. They share many elements, theories, principles, practices, languages and sometimes the same benefactor or client.
In advertising, the ultimate objective is the sale of goods and services. In graphic design, "the essence is to give order to information, form to ideas, expression, and feeling to artifacts that document the human experience."
The definition of the graphic designer profession is relatively recent concerning its preparation, activity, and objectives. Although there is no consensus on an exact date when graphic design emerged, some date it back to the Interwar period. Others understand that it began to be identified as such by the late 19th century.
It can be argued that graphic communications with specific purposes have their origins in Paleolithic cave paintings and the birth of written language in the third millennium BCE.
However, the differences in working methods, auxiliary sciences, and required training are such that it is not possible to clearly identify the current graphic designer with prehistoric man, the 15th-century xylographer, or the lithographer of 1890.
The diversity of opinions stems from some considering any graphic manifestation as a product of graphic design, while others only recognize those that arise as a result of the application of an industrial production model—visual manifestations that have been "projected" to address various needs: productive, symbolic, ergonomic, contextual, among others.
Nevertheless, the evolution of graphic design as a practice and profession has been closely linked to technological innovations, social needs, and the visual imagination of professionals.
Graphic design has been practiced in various forms throughout history; in fact, good examples of graphic design date back to manuscripts from ancient China, Egypt, and Greece.
As printing and book production developed in the 15th century, advances in graphic design continued over the subsequent centuries, with composers or typographers often designing pages according to established type.
By the late 19th century, graphic design emerged as a distinct profession in the West, partly due to the process of labor specialization that occurred there and partly due to the new technologies and business possibilities brought about by the Industrial Revolution.
New production methods led to the separation of the design of a communication medium (such as a poster) from its actual production. Increasingly, throughout the 19th and early 20th centuries, advertising agencies, book publishers, and magazines hired art directors who organized all visual elements of communication and integrated them into a harmonious whole, creating an expression appropriate to the content. In 1922, typographer William A. Dwiggins coined the term graphic design to identify the emerging field.
Throughout the 20th century, the technology available to designers continued to advance rapidly, as did the artistic and commercial possibilities of design. The profession expanded greatly, and graphic designers created, among other things:
By the early 21st century, graphic design had become a global profession as advanced technology and industry spread worldwide.
Historical background:
Main article: History of printing
In China, during the Tang dynasty (618–907) wood blocks were cut to print on textiles and later to reproduce Buddhist texts. A Buddhist scripture printed in 868 is the earliest known printed book.
Beginning in the 11th century in China, longer scrolls and books were produced using movable type printing, making books widely available during the Song dynasty (960–1279).
In the mid-15th century in Mainz, Germany, Johannes Gutenberg developed a way to reproduce printed pages at a faster pace using movable type made with a new metal alloy that created a revolution in the dissemination of information.
Nineteenth century:
In 1849, Henry Cole became one of the major forces in design education in Great Britain, informing the government of the importance of design in his Journal of Design and Manufactures. He organized the Great Exhibition as a celebration of modern industrial technology and Victorian design.
From 1891 to 1896, William Morris' Kelmscott Press was a leader in graphic design associated with the Arts and Crafts movement, creating hand-made books in medieval and Renaissance era style, in addition to wallpaper and textile designs. Morris' work, along with the rest of the Private Press movement, directly influenced Art Nouveau.
Will H. Bradley became one of the notable graphic designers in the late nineteenth-century due to creating art pieces in various Art Nouveau styles. Bradley created a number of designs as promotions for a literary magazine titled The Chap-Book.
Twentieth century
In 1917, Frederick H. Meyer, director and instructor at the California School of Arts and Crafts, taught a class entitled "Graphic Design and Lettering".
Raffe's Graphic Design, published in 1927, was the first book to use "Graphic Design" in its title.
In 1936, author and graphic designer Leon Friend published his book titled "Graphic Design" and it is known to be the first piece of literature to cover the topic extensively.
The signage in the London Underground is a classic design example of the modern era. Although he lacked artistic training, Frank Pick led the Underground Group design and publicity movement.
The first Underground station signs were introduced in 1908 with a design of a solid red disk with a blue bar in the center and the name of the station. The station name was in white sans-serif letters.
It was in 1916 when Pick used the expertise of Edward Johnston to design a new typeface for the Underground. Johnston redesigned the Underground sign and logo to include his typeface on the blue bar in the center of a red circle.
In the 1920s, Soviet constructivism applied 'intellectual production' in different spheres of production. The movement saw individualistic art as useless in revolutionary Russia and thus moved towards creating objects for utilitarian purposes. They designed:
Jan Tschichold codified the principles of modern typography in his 1928 book, New Typography. He later repudiated the philosophy he espoused in this book as fascistic, but it remained influential. Tschichold, Bauhaus typographers such as Herbert Bayer and László Moholy-Nagy and El Lissitzky greatly influenced graphic design. They pioneered production techniques and stylistic devices used throughout the twentieth century. The following years saw graphic design in the modern style gain widespread acceptance and application
The professional graphic design industry grew in parallel with consumerism. This raised concerns and criticisms, notably from within the graphic design community with the First Things First manifesto.
First launched by Ken Garland in 1964, it was re-published as the First Things First 2000 manifesto in 1999 in the magazine Emigre stating "We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication – a mindshift away from product marketing and toward the exploration and production of a new kind of meaning.
The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design."
Applications
Graphic design can have many applications, from road signs to technical schematics and reference manuals. It is often used in branding products and elements of company identity such as logos, colors, packaging, labelling and text.
From scientific journals to news reporting, the presentation of opinions and facts is often improved with graphics and thoughtful compositions of visual information – known as information design.
With the advent of the web, information designers with experience in interactive tools are increasingly used to illustrate the background to news stories. Information design can include Data and information visualization, which involves using programs to interpret and form data into a visually compelling presentation, and can be tied in with information graphics.
Skills:
A graphic design project may involve the creative presentation of existing text, ornament, and images.
The "process school" is concerned with communication; it highlights the channels and media through which messages are transmitted and by which senders and receivers encode and decode these messages. The semiotic school treats a message as a construction of signs which through interaction with receivers, produces meaning; communication as an agent.
Typography
Main article: Typography
Typography includes type design, modifying type glyphs and arranging type. Type glyphs (characters) are created and modified using illustration techniques. Type arrangement is:
Typography is performed by typesetters, compositors, typographers, graphic artists, art directors, and clerical workers. Until the digital age, typography was a specialized occupation.
Certain fonts communicate or resemble stereotypical notions. For example, the 1942 Report is a font which types text akin to a typewriter or a vintage report
Page layout
Further information: Grid (graphic design)
Page layout deals with the arrangement of elements (content) on a page, such as image placement, text layout and style. Page design has always been a consideration in printed material and more recently extended to displays such as web pages.
Elements typically consist of:
Grids:
Pictured below: the role of grid systems in graphic design:
Graphic design is a profession, academic discipline and applied art whose activity consists in projecting visual communications intended to transmit specific messages to social groups, with specific objectives.
Graphic design is an interdisciplinary branch of design and of the fine arts. Its practice involves creativity, innovation and lateral thinking using manual or digital tools, and it typically combines text and graphics to communicate visually.
The role of the graphic designer in the communication process is that of the encoder or interpreter of the message. They work on the interpretation, ordering, and presentation of visual messages.
Usually, graphic design uses the aesthetics of typography and the compositional arrangement of the text, ornamentation, and imagery to convey ideas, feelings, and attitudes beyond what language alone expresses.
Design work often begins with a client's demand, a demand that is ultimately expressed linguistically, whether orally or in writing; graphic design then transforms that linguistic message into a graphic manifestation.
Graphic design applies to any visual communication system across many areas of knowledge: it can be used in advertising strategies, for example, but also in aviation or space exploration.
In some countries graphic design is associated only with the production of sketches and drawings; this is a misconception, since such work is just a small part of the enormous range of applications of visual communication.
With origins in Antiquity and the Middle Ages, graphic design as an applied art was initially linked to the rise of printing in Europe in the 15th century and the growth of consumer culture during the Industrial Revolution.
From there it emerged as a distinct profession in the West, closely associated with advertising in the 19th century, and its evolution allowed its consolidation in the 20th century.
Given the rapid and massive growth in information exchange today, the demand for experienced designers is greater than ever, particularly because of the development of new technologies and the need to pay attention to human factors beyond the competence of the engineers who develop them.
Terminology:
William Addison Dwiggins is often credited with first using the term "graphic design" in a 1922 article, although it appears in a 4 July 1908 issue (volume 9, number 27) of Organized Labor, a publication of the Labor Unions of San Francisco, in an article about technical education for printers:
- An Enterprising Trades Union
- … The admittedly high standard of intelligence which prevails among printers is an assurance that with the elemental principles of design at their finger ends many of them will grow in knowledge and develop into specialists in graphic design and decorating.
A decade later, the 1917–1918 course catalog of the California School of Arts & Crafts advertised a course titled Graphic Design and Lettering, which replaced one called Advanced Design and Lettering. Both classes were taught by Frederick Meyer.
History
Main article: History of graphic design
In both its lengthy history and in the relatively recent explosion of visual communication in the 20th and 21st centuries, the distinction between advertising, art, graphic design and fine art has disappeared. They share many elements, theories, principles, practices, languages and sometimes the same benefactor or client.
In advertising, the ultimate objective is the sale of goods and services. In graphic design, "the essence is to give order to information, form to ideas, expression, and feeling to artifacts that document the human experience."
The definition of the graphic designer profession is relatively recent concerning its preparation, activity, and objectives. Although there is no consensus on an exact date when graphic design emerged, some date it back to the Interwar period. Others understand that it began to be identified as such by the late 19th century.
It can be argued that graphic communications with specific purposes have their origins in Paleolithic cave paintings and the birth of written language in the third millennium BCE.
However, the differences in working methods, auxiliary sciences, and required training are such that it is not possible to clearly identify the current graphic designer with prehistoric man, the 15th-century xylographer, or the lithographer of 1890.
The diversity of opinions stems from some considering any graphic manifestation as a product of graphic design, while others only recognize those that arise as a result of the application of an industrial production model—visual manifestations that have been "projected" to address various needs: productive, symbolic, ergonomic, contextual, among others.
Nevertheless, the evolution of graphic design as a practice and profession has been closely linked to technological innovations, social needs, and the visual imagination of professionals.
Graphic design has been practiced in various forms throughout history; in fact, good examples of graphic design date back to manuscripts from ancient China, Egypt, and Greece.
As printing and book production developed in the 15th century, advances in graphic design continued over the subsequent centuries, with compositors or typographers often designing pages as they set the type.
By the late 19th century, graphic design emerged as a distinct profession in the West, partly due to the process of labor specialization that occurred there and partly due to the new technologies and business possibilities brought about by the Industrial Revolution.
New production methods led to the separation of the design of a communication medium (such as a poster) from its actual production. Increasingly, throughout the 19th and early 20th centuries, advertising agencies, book publishers, and magazines hired art directors who organized all visual elements of communication and integrated them into a harmonious whole, creating an expression appropriate to the content. In 1922, typographer William A. Dwiggins coined the term graphic design to identify the emerging field.
Throughout the 20th century, the technology available to designers continued to advance rapidly, as did the artistic and commercial possibilities of design. The profession expanded greatly, and graphic designers created, among other things:
- magazine pages,
- book covers,
- posters,
- CD covers,
- postage stamps,
- packaging,
- brands,
- signs,
- advertisements,
- kinetic titles for TV programs and movies,
- and websites.
By the early 21st century, graphic design had become a global profession as advanced technology and industry spread worldwide.
Historical background:
Main article: History of printing
In China, during the Tang dynasty (618–907) wood blocks were cut to print on textiles and later to reproduce Buddhist texts. A Buddhist scripture printed in 868 is the earliest known printed book.
Beginning in the 11th century in China, longer scrolls and books were produced using movable type printing, making books widely available during the Song dynasty (960–1279).
In the mid-15th century in Mainz, Germany, Johannes Gutenberg developed a way to reproduce printed pages at a faster pace using movable type cast from a new metal alloy, creating a revolution in the dissemination of information.
Nineteenth century:
In 1849, Henry Cole became one of the major forces in design education in Great Britain, informing the government of the importance of design in his Journal of Design and Manufactures. He organized the Great Exhibition as a celebration of modern industrial technology and Victorian design.
From 1891 to 1896, William Morris' Kelmscott Press was a leader in graphic design associated with the Arts and Crafts movement, creating hand-made books in medieval and Renaissance era style, in addition to wallpaper and textile designs. Morris' work, along with the rest of the Private Press movement, directly influenced Art Nouveau.
Will H. Bradley became one of the notable graphic designers of the late nineteenth century by creating art pieces in various Art Nouveau styles. Bradley created a number of designs as promotions for a literary magazine titled The Chap-Book.
Twentieth century
In 1917, Frederick H. Meyer, director and instructor at the California School of Arts and Crafts, taught a class entitled "Graphic Design and Lettering".
Raffe's Graphic Design, published in 1927, was the first book to use "Graphic Design" in its title.
In 1936, author and graphic designer Leon Friend published his book Graphic Design, considered the first piece of literature to cover the topic extensively.
The signage in the London Underground is a classic design example of the modern era. Although he lacked artistic training, Frank Pick led the Underground Group design and publicity movement.
The first Underground station signs, introduced in 1908, featured a solid red disk with a blue bar across the center carrying the station name in white sans-serif letters.
In 1916, Pick commissioned Edward Johnston to design a new typeface for the Underground. Johnston redesigned the Underground sign and logo to include his typeface on the blue bar in the center of a red circle.
In the 1920s, Soviet constructivism applied 'intellectual production' in different spheres of production. The movement saw individualistic art as useless in revolutionary Russia and thus moved towards creating objects for utilitarian purposes. They designed:
- buildings,
- film and theater sets,
- posters,
- fabrics,
- clothing,
- furniture,
- logos,
- menus,
- etc.
Jan Tschichold codified the principles of modern typography in his 1928 book, New Typography. He later repudiated the philosophy he espoused in this book as fascistic, but it remained influential. Tschichold, Bauhaus typographers such as Herbert Bayer and László Moholy-Nagy, and El Lissitzky greatly influenced graphic design. They pioneered production techniques and stylistic devices used throughout the twentieth century. The following years saw graphic design in the modern style gain widespread acceptance and application.
The professional graphic design industry grew in parallel with consumerism. This raised concerns and criticisms, notably from within the graphic design community with the First Things First manifesto.
First launched by Ken Garland in 1964, it was re-published as the First Things First 2000 manifesto in 1999 in the magazine Emigre, stating: "We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication – a mindshift away from product marketing and toward the exploration and production of a new kind of meaning.
The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design."
Applications
Graphic design can have many applications, from road signs to technical schematics and reference manuals. It is often used in branding products and elements of company identity such as logos, colors, packaging, labelling and text.
From scientific journals to news reporting, the presentation of opinions and facts is often improved with graphics and thoughtful compositions of visual information – known as information design.
With the advent of the web, information designers with experience in interactive tools are increasingly used to illustrate the background to news stories. Information design can include data and information visualization, which involves using programs to interpret and form data into a visually compelling presentation, and can be tied in with information graphics.
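As a concrete illustration of the kind of programmatic visualization described above, here is a minimal sketch, assuming Python with the matplotlib plotting library; the dataset, category names, and file name are invented placeholders, not figures from this article.

```python
# Minimal data-visualization sketch (hypothetical data, for illustration only).
import matplotlib.pyplot as plt

# Invented example data: share of a design budget by activity (placeholder values).
activities = ["Branding", "Web", "Print", "Packaging"]
share = [38, 27, 20, 15]  # percentages

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(activities, share, color="#4C72B0")   # horizontal bars read easily
ax.set_xlabel("Share of budget (%)")
ax.set_title("Example design-budget breakdown")
ax.invert_yaxis()                             # largest category on top
fig.tight_layout()
fig.savefig("budget_breakdown.png", dpi=150)  # export for placement in a layout
```

The same approach scales to real datasets: swapping in measured values turns the chart into an information graphic that can be placed within a larger layout.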
Skills:
A graphic design project may involve the creative presentation of existing text, ornament, and images.
The "process school" is concerned with communication; it highlights the channels and media through which messages are transmitted and by which senders and receivers encode and decode these messages. The semiotic school treats a message as a construction of signs which through interaction with receivers, produces meaning; communication as an agent.
Typography
Main article: Typography
Typography includes type design, modifying type glyphs and arranging type. Type glyphs (characters) are created and modified using illustration techniques. Type arrangement (see the sketch after the following list) is:
- the selection of typefaces,
- point size,
- tracking (the space between all characters used),
- kerning (the space between two specific characters)
- and leading (line spacing).
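These arrangement parameters map directly onto properties in digital type formats. Below is a minimal sketch, assuming Python writing a small SVG file by hand; the typeface name and the numeric values for size, tracking and leading are illustrative assumptions only.

```python
# Sketch: expressing type arrangement choices as SVG text attributes (example values).
lines = ["Type arrangement demo", "Tracking, leading and size"]

typeface = "Georgia, serif"   # typeface selection (assumed font)
point_size = 18               # point size
tracking = 1.5                # letter-spacing applied across all characters
leading = 28                  # baseline-to-baseline distance (line spacing)
# Kerning of one specific character pair could be approximated with a
# <tspan dx="..."> offset inside the text element (not shown here).

svg_lines = []
for i, text in enumerate(lines):
    y = 40 + i * leading      # leading controls the vertical rhythm
    svg_lines.append(
        f'<text x="20" y="{y}" font-family="{typeface}" '
        f'font-size="{point_size}pt" letter-spacing="{tracking}">{text}</text>'
    )

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="420" height="120">\n'
    + "\n".join(svg_lines)
    + "\n</svg>"
)

with open("type_arrangement.svg", "w", encoding="utf-8") as f:
    f.write(svg)
```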
Typography is performed by typesetters, compositors, typographers, graphic artists, art directors, and clerical workers. Until the digital age, typography was a specialized occupation.
Certain fonts communicate or resemble stereotypical notions. For example, 1942 Report is a font that renders text as if it had been typed on a typewriter, like a vintage report.
Page layout
Further information: Grid (graphic design)
Page layout deals with the arrangement of elements (content) on a page, such as image placement, text layout and style. Page design has always been a consideration in printed material and more recently extended to displays such as web pages.
Elements typically consist of:
- type (text),
- images (pictures),
- and, with print media, occasionally place-holder graphics such as a dieline for elements that are not printed with ink (for example die/laser cutting, foil stamping or blind embossing).
Grids:
Pictured below: the role of grid systems in graphic design:
A grid serves as a method of arranging both space and information, allowing the reader to easily comprehend the overall project. Furthermore, a grid functions as a container for information and a means of establishing and maintaining order.
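To make the grid's role as an ordering device concrete, here is a minimal sketch, assuming Python, that computes column positions for a simple page grid from a page width, margins and gutters; all dimensions are arbitrary example values.

```python
# Sketch: computing a simple column grid (all dimensions are example values, in points).
page_width = 595      # A4 width in points (~210 mm)
margin = 50           # left and right page margins
gutter = 12           # space between adjacent columns
columns = 3           # number of columns in the grid

content_width = page_width - 2 * margin
column_width = (content_width - (columns - 1) * gutter) / columns

# x-coordinate of the left edge of each column
column_x = [margin + i * (column_width + gutter) for i in range(columns)]

for i, x in enumerate(column_x, start=1):
    print(f"column {i}: x = {x:.1f} pt, width = {column_width:.1f} pt")
```

Once the column edges are known, text and images can be aligned to them, which is what gives a gridded layout its visual order.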
Despite grids being utilized for centuries, many graphic designers associate them with Swiss design. The desire for order in the 1940s resulted in a highly systematic approach to visualizing information.
However, grids were later regarded as tedious and uninteresting, earning the label of "designersaur." Today, grids are once again considered crucial tools for professionals, whether they are novices or veterans.
Tools:
In the mid-1980s desktop publishing and graphic art software applications introduced computer image manipulation and creation capabilities that had previously been manually executed. Computers enabled designers to instantly see the effects of layout or typographic changes, and to simulate the effects of traditional media.
Traditional tools such as pencils can be useful even when computers are used for finalization; a designer or art director may sketch numerous concepts as part of the creative process. Styluses can be used with tablet computers to capture hand drawings digitally.
Computers and software:
Designers disagree about whether computers enhance the creative process. Some argue that computers allow them to explore multiple ideas quickly and in more detail than can be achieved by hand-rendering or paste-up.
Other designers find that the limitless choices of digital design can lead to paralysis or to endless iterations with no clear outcome.
Most designers use a hybrid process that combines traditional and computer-based technologies. First, hand-rendered layouts are used to get approval to execute an idea, then the polished visual product is produced on a computer.
Graphic designers are expected to be proficient in software programs for image-making, typography and layout.
Nearly all of the popular and "industry standard" software programs used by graphic designers since the early 1990s are products of Adobe Inc.
Adobe Photoshop is a raster-based program for photo editing, while Adobe Illustrator is a vector-based program for drawing; the two are often used in the final stage (the raster/vector difference is sketched below).
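The raster/vector distinction can be made concrete in code. The following is a minimal sketch, assuming Python with the Pillow imaging library (an assumption, not a tool named in this article), that produces the same circle twice: once rasterized into a fixed pixel grid, and once as a scalable vector (SVG) description. File names and dimensions are arbitrary example values.

```python
# Sketch: the same shape as raster pixels vs. a vector description (illustrative only).
from PIL import Image, ImageDraw  # Pillow: raster image creation and editing

# Raster version: the circle is rasterized into a fixed 100x100 pixel grid;
# enlarging this image later will reveal pixelation.
img = Image.new("RGB", (100, 100), "white")
draw = ImageDraw.Draw(img)
draw.ellipse((10, 10, 90, 90), outline="black", width=3)
img.save("circle_raster.png")

# Vector version: the circle is stored as geometry, so it can be scaled
# to any size without loss of quality.
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    '<circle cx="50" cy="50" r="40" fill="none" stroke="black" stroke-width="3"/>'
    '</svg>'
)
with open("circle_vector.svg", "w", encoding="utf-8") as f:
    f.write(svg)
```

Enlarging the PNG beyond its original pixel dimensions exposes pixelation, while the SVG can be rendered at any size without loss of quality.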
CorelDraw, a vector graphics editing software developed and marketed by Corel Corporation, is also used worldwide.
Designers often use pre-designed raster images and vector graphics from online design databases in their work. Raster images may be edited in Adobe Photoshop, vector logos and illustrations in Adobe Illustrator and CorelDraw, and the final product assembled in one of the major page layout programs.
Many free and open-source programs are also used by both professionals and casual graphic designers. Inkscape uses Scalable Vector Graphics (SVG) as its primary file format and allows importing and exporting other formats.
Other open-source programs used include GIMP for photo-editing and image manipulation, Krita for digital painting, and Scribus for page layout.
Related design fields:
Interface design
Main article: User interface design
Since the advent of personal computers, many graphic designers have become involved in interface design, in an environment commonly referred to as a graphical user interface (GUI). This has included web design and software design when end-user interactivity is a design consideration of the layout or interface.
Combining visual communication skills with an understanding of user interaction and online branding, graphic designers often work with software developers and web developers to create the look and feel of a web site or software application. An important aspect of interface design is icon design.
User experience design
Main article: User experience design
User experience (UX) design is the study, analysis, and development of products that provide meaningful and relevant experiences to users. This involves designing the entire process of acquiring and integrating the product, including aspects of branding, design, usability, and function.
UX design involves creating the interface and interactions for a website or application and is considered both an act and an art. This profession requires a combination of skills, including visual design, social psychology, development, project management, and, most importantly, empathy towards end users.
Experiential graphic design:
Experiential graphic design is the application of communication skills to the built environment. This area of graphic design requires practitioners to understand physical installations that have to be manufactured and withstand the same environmental conditions as buildings. As such, it is a cross-disciplinary collaborative process involving designers, fabricators, city planners, architects, manufacturers and construction teams.
Experiential graphic designers try to solve problems that people encounter while interacting with buildings and space (also called environmental graphic design). Examples of practice areas for environmental graphic designers are:
- wayfinding,
- placemaking,
- branded environments,
- exhibitions and museum displays,
- public installations
- and digital environments.
Occupations
Main article: Graphic design occupations
Graphic design career paths cover all parts of the creative spectrum and often overlap. Workers perform specialized tasks in areas such as design services, publishing, advertising and public relations.
As of 2023, median pay was $50,710 per year. The main job titles within the industry are often country specific. They can include:
- graphic designer,
- art director,
- creative director,
- animator
- and entry level production artist.
Depending on the industry served, the responsibilities may have different titles such as "DTP associate" or "Graphic Artist". The responsibilities may involve specialized skills such as illustration, photography, animation, visual effects or interactive design.
Employment in the design of online projects was expected to increase by 35% by 2026, while employment in traditional media, such as newspaper and book design, was expected to decline by 22%.
Graphic designers will be expected to constantly learn new techniques, programs, and methods.
Graphic designers can work within companies devoted specifically to the industry, such as design consultancies or branding agencies; others may work within publishing, marketing or other communications companies. Especially since the introduction of personal computers, many graphic designers work as in-house designers in non-design-oriented organizations.
Graphic designers may also work freelance, setting their own terms, prices, and ideas.
A graphic designer typically reports to the:
- art director,
- creative director
- or senior media creative.
As a designer becomes more senior, they spend less time designing and more time leading and directing other designers on broader creative activities, such as brand development and corporate identity development. They are often expected to interact more directly with clients, for example taking and interpreting briefs.
Crowdsourcing in graphic design
Main article: Crowdsourcing creative work
Jeff Howe of Wired Magazine first used the term "crowdsourcing" in his 2006 article, "The Rise of Crowdsourcing." Crowdsourcing spans such creative domains as:
- graphic design,
- architecture,
- apparel design,
- writing,
- illustration,
- and others.
Tasks may be assigned to individuals or a group and may be categorized as convergent or divergent. An example of a divergent task is generating alternative designs for a poster. An example of a convergent task is selecting one poster design.
Companies, startups, small businesses and entrepreneurs have all benefited from design crowdsourcing, since it helps them source graphic design work at a fraction of their previous budgets. Obtaining a logo design through crowdsourcing is one of the most common uses.
Major companies that operate in the design crowdsourcing space are generally referred to as design contest sites.
Role of graphic design:
Graphic design is essential for advertising, branding, and marketing, influencing how people act. Good graphic design builds strong, recognizable brands, communicates messages clearly, and shapes how consumers see and react to things.
One way that graphic design influences consumer behavior is through the use of visual elements, such as color, typography, and imagery. Studies have shown that certain colors can evoke specific emotions and behaviors in consumers, and that typography can influence how information is perceived and remembered.
For example, serif fonts are often associated with tradition and elegance, while sans-serif fonts are seen as modern and minimalistic. These factors can all impact the way consumers perceive a brand and its messaging.
Another way that graphic design impacts consumer behavior is through its ability to communicate complex information in a clear and accessible way. For example, infographics and data visualizations can help to distill complex information into a format that is easy to understand and engaging for consumers. This can help to build trust and credibility with consumers, and encourage them to take action.
Ethical considerations in graphic design:
Ethics are an important consideration in graphic design, particularly when it comes to accurately representing information and avoiding harmful stereotypes. Graphic designers have a responsibility to ensure that their work is truthful, accurate, and free from any misleading or deceptive elements. This requires a commitment to honesty, integrity, and transparency in all aspects of the design process.
One of the key ethical considerations in graphic design is the responsibility to accurately represent information. This means ensuring that any claims or statements made in advertising or marketing materials are true and supported by evidence.
For example, a company should not use misleading statistics to promote its product or service, or make false claims about its benefits. Graphic designers must take care to accurately represent information in all visual elements, such as graphs, charts, and images, and avoid distorting or misrepresenting data.
Another important ethical consideration in graphic design is the need to avoid harmful stereotypes. This means avoiding any images or messaging that perpetuate negative or harmful stereotypes based on race, gender, religion, or other characteristics. Graphic designers should strive to create designs that are inclusive and respectful of all individuals and communities, and avoid reinforcing negative attitudes or biases.
Future of graphic design:
The future of graphic design is likely to be heavily influenced by emerging technologies and social trends. Advancements in areas such as artificial intelligence, virtual and augmented reality, and automation are likely to transform the way that graphic designers work and create designs.
Social trends, such as a greater focus on sustainability and inclusivity, are also likely to impact the future of graphic design.
One area where emerging technologies are likely to have a significant impact on graphic design is in the automation of certain tasks. Machine learning algorithms, for example, can analyze large datasets and create designs based on patterns and trends, freeing up designers to focus on more complex and creative tasks.
Virtual and augmented reality technologies may also allow designers to create immersive and interactive experiences for users, blurring the lines between the digital and physical worlds.
Social trends are also likely to shape the future of graphic design. As consumers become more conscious of environmental issues, for example, there may be a greater demand for designs that prioritize sustainability and minimize waste.
Similarly, there is likely to be a growing focus on inclusivity and diversity in design, with designers seeking to create designs that are accessible and representative of a wide range of individuals and communities.
See also:
Related areas
- Concept art
- Copywriting
- Digital illustration
- Illustration
- Instructional design
- Landscape architecture
- Marketing communications
- Motion graphic design
- New media
- Technical illustration
- Technical writing
- User Experience Design
- User Interface Design
- Visual communication
- Communication design
- Visual culture
Related topics
- Aesthetics
- Color theory
- Design principles and elements
- European Design Award
- "First Things First 2000"
- Infographic
- List of graphic design institutions
- List of notable graphic designers
- Logotype
- Material culture
- Style guide
- Value
- Visualization (computer graphics)
- International Typographic Style
- Swiss Style (design)
- Media related to Graphic design at Wikimedia Commons
- The Universal Arts of Graphic Design – Documentary produced by Off Book
- Graphic Designers, entry in the Occupational Outlook Handbook of the Bureau of Labor Statistics of the United States Department of Labor