Copyright © 2015 Bert N. Langford (Images may be subject to copyright. Please send feedback)
Welcome to Our Generation USA!
Computer Advancements
covers the many forms of computer technology, whether hardware, software, applications, or computing standards.
(For Medical Breakthroughs, Click Here)
(For Smartphones, Video and Online Games, Click Here)
(For Innovations and their Innovators, Click Here)
(For Computer Security, Click Here)
(For the Internet, Click Here)
[Your Webhost is one of the earliest magazine professionals to use computers (then called "microcomputers"), starting in 1982: While consulting for a publisher, I purchased a Tandy Radio Shack TRS-80 computer. This was before PCs and Windows, so the operating system was very unstable, resulting in lost files, etc.
Once PCs became available, we moved up to Windows and installed our own (primitive) Local Area Network (LAN). Meanwhile, the leading "magazine about magazines," Folio, got wind of this and began a decade-long relationship in which I wrote articles for Folio as well as spoke at seminars.]
Computer Technology, including a Timeline of Computing Developments
YouTube video of "Information Age: The computer that changed our world"
YouTube Video about the Top 10 Most Powerful Supercomputers in the world 2021
Click here for a Timeline of Computing
A computer is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming.
Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks.
Computers are used as control systems for a wide variety of industrial and consumer devices. This includes simple special purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design, and also general purpose devices like personal computers and mobile devices such as smartphones.
Early computers were only conceived as calculating devices. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms.
More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers have been increasing dramatically ever since then.
Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information.
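As a rough illustration of the ideas above, the short Python sketch below treats a program as a list of simple arithmetic and jump instructions and shows how a control unit can change the order of operations in response to stored data. The instruction names and the example program are invented for illustration; this is a toy model, not how any real processor is implemented.

```python
# A toy "stored program": a list of instructions run by a tiny control loop.
# JUMP_IF_POS changes the order of execution in response to stored data,
# which is what the text above calls a sequencing and control unit.
def run(program, memory):
    pc = 0                                   # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":                      # memory[dst] = memory[a] + memory[b]
            dst, a, b = args
            memory[dst] = memory[a] + memory[b]
        elif op == "SUB":                    # memory[dst] = memory[a] - memory[b]
            dst, a, b = args
            memory[dst] = memory[a] - memory[b]
        elif op == "JUMP_IF_POS":            # branch if memory[cell] > 0
            cell, target = args
            if memory[cell] > 0:
                pc = target
                continue
        pc += 1
    return memory

# Repeatedly subtract "step" from "x" until the result is no longer positive.
print(run([("SUB", "x", "x", "step"),        # 0
           ("JUMP_IF_POS", "x", 0)],         # 1: loop back while x > 0
          {"x": 10, "step": 3}))             # -> {'x': -2, 'step': 3}
```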
Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.
Click on any of the following blue hyperlinks for more about Computers:
- Etymology
- History
- Types
- Hardware
- Software
- Firmware
- Networking and the Internet
- Unconventional computers
- Unconventional computing
- Future
- Professions and organizations
- See also:
- Information technology portal
- Glossary of computers
- Computability theory
- Computer insecurity
- Computer security
- Glossary of computer hardware terms
- History of computer science
- List of computer term etymologies
- List of fictional computers
- List of pioneers in computer science
- Pulse computation
- TOP500 (list of most powerful computers)
- Media related to Computers at Wikimedia Commons
- Wikiversity has a quiz on this article
- Warhol & The Computer
Computer Operating Systems, including a List of Operating Systems
YouTube Video about the History of Computer Operating Systems
Pictured below: What is an Operating System, and what tasks does the Operating System perform?
Click here for a list of Operating Systems.
A computer operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers.
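As a minimal illustration of that intermediary role, the Python sketch below reads a file through the operating system rather than by touching the disk hardware directly; os.open, os.read, and os.close are thin wrappers over the kernel's system calls. The file name is invented, and the snippet assumes a Unix-like system with such a file present.

```python
import os

# The program does not drive the disk controller itself; it asks the OS to do so.
# os.open / os.read / os.close are thin wrappers over the kernel's system calls.
fd = os.open("example.txt", os.O_RDONLY)   # system call: ask the OS to open a file
data = os.read(fd, 1024)                   # system call: the OS copies bytes from the disk or cache
os.close(fd)                               # system call: release the OS-managed resource

print(f"Read {len(data)} bytes via the operating system")
```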
The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%. macOS by Apple Inc. is in second place (13.23%), and the varieties of Linux are collectively in third place (1.57%).
In the mobile sector (smartphones and tablets combined), Google's Android accounted for up to 70% of usage in 2017. According to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent of the market and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and an annual decline in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications.
Click on any of the following blue hyperlinks for more about Operating Systems:
- Types of operating systems
- History
- Examples
- Components
- Real-time operating systems
- Operating system development as a hobby
- Diversity of operating systems and portability
- Market share
- See also:
- Antivirus software
- Comparison of operating systems
- Crash (computing)
- Hypervisor
- Interruptible operating system
- List of important publications in operating systems
- List of pioneers in computer science
- Live CD
- Glossary of operating systems terms
- Microcontroller
- Mobile device
- Mobile operating system
- Network operating system
- Object-oriented operating system
- Operating System Projects
- System Commander
- System image
- Timeline of operating systems
- Usage share of operating systems
The Digital Revolution
YouTube video: illustrating how much the Digital Revolution has Changed Our World
Pictured: The Digital Revolution in Just 10 Years!
The Digital Revolution, also known as the Third Industrial Revolution, is the shift from mechanical and analogue electronic technology to digital electronics, which began anywhere from the late 1950s to the late 1970s with the adoption and proliferation of digital computers and digital record keeping, and which continues to the present day.
Implicitly, the term also refers to the sweeping changes brought about by digital computing and communication technology during (and after) the latter half of the 20th century.
Analogous to the Agricultural Revolution and Industrial Revolution, the Digital Revolution marked the beginning of the Information Age.
Central to this revolution is the mass production and widespread use of digital logic circuits and their derived technologies, including the computer, the digital cellular phone, and the Internet.
Unlike older machines, digital computers can be reprogrammed to fit any purpose.
For expansion, click on any of the following hyperlinks:
- 1 Trends of technological revolutions
- 2 Rise in digital technology use, 1990–2010
- 3 Brief history
- 4 Timeline
- 5 Converted technologies
- 6 Technological basis
- 7 Socio-economic impact
- 8 Concerns
- See also:
Computer Science
YouTube Video: Map of Computer Science
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. It is the scientific and practical approach to computation and its applications and the systematic study of the feasibility, structure, expression, and mechanization of the methodical procedures (or algorithms) that underlie the acquisition, representation, processing, storage, communication of, and access to, information.
An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems. See glossary of computer science.
Computer Science fields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory (which explores the fundamental properties of computational and intractable problems), are highly abstract, while fields such as computer graphics emphasize real-world visual applications.
Other fields still focus on challenges in implementing computation. For example, programming language theory considers various approaches to the description of computation, while the study of computer programming itself investigates various aspects of the use of programming language and complex systems. Human–computer interaction considers the challenges in making computers and computations useful, usable, and universally accessible to humans.
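To make the idea of computational complexity a little more concrete, the sketch below contrasts two ways of searching a sorted list: a linear scan whose running time grows in proportion to n, and binary search, which grows only with log n. The example data and function names are invented for illustration.

```python
from bisect import bisect_left

def linear_search(sorted_items, target):
    """O(n): may inspect every element."""
    for i, item in enumerate(sorted_items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halves the search space at each step."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(0, 1_000_000, 2))        # the even numbers below one million, already sorted
assert linear_search(data, 777_778) == binary_search(data, 777_778)
```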
Click on any of the following blue hyperlinks for more about Computer Science:
- History
- Etymology
- Philosophy
- Areas of computer science
- The great insights of computer science
- Academia
- Education
- See also:
- Main article: Outline of computer science
- Main article: Glossary of computer science
- Association for Computing Machinery
- Computer Science Teachers Association
- Informatics and Engineering informatics
- Information technology
- List of academic computer science departments
- List of computer scientists
- List of publications in computer science
- List of pioneers in computer science
- List of unsolved problems in computer science
- Outline of software engineering
- Technology transfer in computer science
- Turing Award
- Computer science – Wikipedia book
- Scholarly Societies in Computer Science
- What is Computer Science?
- Best Papers Awards in Computer Science since 1996
- Photographs of computer scientists by Bertrand Meyer
- EECS.berkeley.edu
- Bibliography and academic search engines
- CiteSeerx (article): search engine, digital library and repository for scientific and academic papers with a focus on computer and information science.
- DBLP Computer Science Bibliography (article): computer science bibliography website hosted at Universität Trier, in Germany.
- The Collection of Computer Science Bibliographies (article)
- Professional organizations
- Miscellaneous:
- Computer Science—Stack Exchange: a community-run question-and-answer site for computer science
- What is computer science
- Is computer science science?
- Computer Science (Software) Must be Considered as an Independent Discipline.
High Tech, including Silicon Valley
YouTube Video: Inside Samsung's new Silicon Valley headquarters by C/Net
YouTube Video: Are Smart Homes the Future of High-Tech Living? by ABC News
Pictured: What tech campuses in Silicon Valley look like from above by CNBC
High technology, or "High Tech" is technology that is at the cutting edge: the most advanced technology available.
As of the onset of the 21st century, products considered high tech are often those that incorporate advanced computer electronics. However, there is no specific class of technology that is high tech—the definition shifts and evolves over time—so products hyped as high-tech in the past may now be considered to be everyday or outdated technology.
The opposite of high tech is low technology, referring to simple, often traditional or mechanical technology; for example, a calculator is a low-tech calculating device.
Click on any of the following blue hyperlinks for more about High Tech:
- Origin of the term
- Economy
- High-tech society
- See also:
- Low technology
- Intermediate technology - sometimes used to mean technology between low and high technology
- Industrial design
- List of emerging technologies
Silicon Valley is a nickname for the southern portion of the Bay Area, in the northern part of the U.S. state of California. The "valley" in its name refers to the Santa Clara Valley in Santa Clara County, which includes the city of San Jose and surrounding cities and towns, where the region has been traditionally centered. The region has expanded to include the southern half of the Peninsula in San Mateo County, and southern portions of the East Bay in Alameda County.
The word "silicon" originally referred to the large number of silicon chip innovators and manufacturers in the region, but the area is now the home to many of the world's largest high-tech corporations, including the headquarters of 39 businesses in the Fortune 1000, and thousands of startup companies.
Silicon Valley also accounts for one-third of all of the venture capital investment in the United States, which has helped it to become a leading hub and startup ecosystem for high-tech innovation and scientific development. It was in the Valley that the silicon-based integrated circuit, the microprocessor, and the microcomputer, among other key technologies, were developed. As of 2013, the region employed about a quarter of a million information technology workers.
As more high-tech companies were established across San Jose and the Santa Clara Valley, and then north towards the Bay Area's two other major cities, San Francisco and Oakland, the term "Silicon Valley" has come to have two definitions: a geographic one, referring to Santa Clara County, and a metonymical one, referring to all high-tech businesses in the Bay Area or even in the United States. The term is now generally used as a synecdoche for the American high-technology economic sector. The name also became a global synonym for leading high-tech research and enterprises, and thus inspired similarly named locations, as well as research parks and technology centers with a comparable structure, all around the world.
Click on any of the following blue hyperlinks for more about Silicon Valley:
- Origin of the term
- History (before 1970s)
- History (1971 and later)
- Economy
- Demographics
- Municipalities
- Universities, colleges, and trade schools
- Art galleries and museums
- Media outlets
- Cultural references
- See also:
- BioValley
- List of attractions in Silicon Valley
- List of research parks around the world
- List of technology centers around the world
- Mega-Site, a type of land development by private developers, universities, or governments to promote business clusters
- Silicon Hills
- Silicon Wadi
- STEM fields
- Tech Valley
- Santa Clara County: California's Historic Silicon Valley—A National Park Service website
- Silicon Valley—An American Experience documentary broadcast in 2013
- Silicon Valley Cultures Project at the Wayback Machine (archived December 20, 2007) from San Jose State University
- Silicon Valley Historical Association
- The Birth of Silicon Valley
Information Technology (IT), including a List of the largest IT Companies based on Revenues
YouTube Video: What is I.T.? Information Technology
Pictured: Logos of the Three Largest IT Companies as (L-R): Samsung, Apple, and Amazon
Click here for a List of the Largest IT Companies based on Revenues.
Information technology (IT) is the application of computers to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise.
IT is considered a subset of information and communications technology (ICT). In 2012, Zuppo proposed an ICT hierarchy where each hierarchy level "contain some degree of commonality in that they are related to technologies that facilitate the transfer of information and various types of electronically mediated communications." Business/IT was one level of the ICT hierarchy.
The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones.
Several industries are associated with information technology, including:
- computer hardware,
- software,
- electronics,
- semiconductors,
- internet,
- telecom equipment,
- engineering,
- healthcare,
- e-commerce,
- and computer services.
Humans have been storing, retrieving, manipulating and communicating information since the Sumerians in Mesopotamia developed writing in about 3000 BC, but the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)."
Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.
Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940), and electronic (1940–present), with IT as a service sometimes regarded as a further phase. This article focuses on the most recent period (electronic), which began in about 1940.
Click on any of the following for amplification:
Global Positioning System (GPS) and Ten Ways Your Smartphone Knows Where You Are: BUT, does GPS Work Well Enough? (New York Times 1/23/2021)
YouTube Video: The Truth About GPS: How it works (Courtesy of the U.S. Air Force Space Command)
Pictured: Left: Artist's conception of GPS Block II-F satellite in Earth orbit; Right: Ten ways your smartphone knows where you are
America Has a GPS Problem (NY Times Opinion 1/23/2021)
By Kate Murphy
Kate Murphy, a frequent contributor to The New York Times, is a commercial pilot and author of “You’re Not Listening: What You’re Missing and Why It Matters.”
Time was when nobody knew, or even cared, exactly what time it was. The movement of the sun, phases of the moon and changing seasons were sufficient indicators. But since the Industrial Revolution, we’ve become increasingly dependent on knowing the time, and with increasing accuracy. Not only does the time tell us when to sleep, wake, eat, work and play; it tells automated systems when to execute financial transactions, bounce data between cellular towers and throttle power on the electrical grid.
Coordinated Universal Time, or U.T.C., the global reference for timekeeping, is beamed down to us from extremely precise atomic clocks aboard Global Positioning System (GPS) satellites. The time it takes for GPS signals to reach receivers is also used to calculate location for air, land and sea navigation.
Owned and operated by the U.S. government, GPS is likely the least recognized, and least appreciated, part of our critical infrastructure. Indeed, most of our critical infrastructure would cease to function without it.
The problem is that GPS signals are incredibly weak, due to the distance they have to travel from space, making them subject to interference and vulnerable to jamming and what is known as spoofing, in which another signal is passed off as the original. And the satellites themselves could easily be taken out by hurtling space junk or the sun coughing up a fireball.
As intentional and unintentional GPS disruptions are on the rise, experts warn that our overreliance on the technology is courting disaster, but they are divided on what to do about it.
“If we don’t get good backups on line, then GPS is just a soft rib of ours, and we could be punched here very quickly,” said Todd Humphreys, an associate professor of aerospace engineering at the University of Texas in Austin. If GPS was knocked out, he said, you’d notice. Think widespread power outages, financial markets seizing up and the transportation system grinding to a halt. Grocers would be unable to stock their shelves, and Amazon would go dark. Emergency responders wouldn’t be able to find you, and forget about using your cellphone.
Mr. Humphreys got the attention of the U.S. Department of Defense and the Federal Aviation Administration about this issue back in 2008 when he published a paper showing he could spoof GPS receivers. At the time, he said he thought the threat came mainly from hackers with something to prove: “I didn’t even imagine that the level of interference that we’ve been seeing recently would be attributable to state actors.”
More than 10,000 incidents of GPS interference have been linked to China and Russia in the past five years. Ship captains have reported GPS errors showing them 20-120 miles inland when they were actually sailing off the coast of Russia in the Black Sea.
Also well documented are ships suddenly disappearing from navigation screens while maneuvering in the Port of Shanghai. After GPS disruptions at Tel Aviv’s Ben Gurion Airport in 2019, Israeli officials pointed to Syria, where Russia has been involved in the nation’s long-running civil war. And last summer, the United States Space Command accused Russia of testing antisatellite weaponry.
But it’s not just nation-states messing with GPS. Spoofing and jamming devices have gotten so inexpensive and easy to use that delivery drivers use them so their dispatchers won’t know they’re taking long lunch breaks or having trysts at Motel 6. Teenagers use them to foil their parents’ tracking apps and to cheat at Pokémon Go. More nefariously, drug cartels and human traffickers have spoofed border control drones. Dodgy freight forwarders may use GPS jammers or spoofers to cloak or change the time stamps on arriving cargo.
These disruptions not only affect their targets; they can also affect anyone using GPS in the vicinity.
“You might not think you’re a target, but you don’t have to be,” said Guy Buesnel, a position, navigation and timing specialist with the British network and cybersecurity firm Spirent. “We’re seeing widespread collateral or incidental effects.” In 2013 a New Jersey truck driver interfered with Newark Liberty International Airport’s satellite-based tracking system when he plugged a GPS jamming device into his vehicle’s cigarette lighter to hide his location from his employer.
The risk posed by our overdependency on GPS has been raised repeatedly at least since 2000, when its signals were fully opened to civilian use. Launched in 1978, GPS was initially reserved for military purposes, but after the signals became freely available, the commercial sector quickly realized their utility, leading to widespread adoption and innovation.
Nowadays, most people carry a GPS receiver everywhere they go — embedded in a mobile phone, tablet, watch or fitness tracker.
An emergency backup for GPS was mandated by the 2018 National Timing and Resilience Security Act. The legislation said a reliable alternate system needed to be operational within two years, but that hasn’t happened yet.
Part of the reason for the holdup, aside from a pandemic, is disagreement between government agencies and industry groups on what is the best technology to use, who should be responsible for it, which GPS capabilities must be backed up and with what degree of precision.
Of course, business interests that rely on GPS want a backup that’s just as good as the original, just as accessible and also free. Meanwhile, many government officials tend to think it shouldn’t be all their responsibility, particularly when the budget to manage and maintain GPS hit $1.7 billion in 2020.
“We’re becoming more nuanced in our approach,” said James Platt, the chief of strategic defense initiatives for the Cybersecurity and Infrastructure Security Agency, a division of the Department of Homeland Security. “We recognize some things are going to need to be backed up, but we’re also realizing that maybe some systems don’t need GPS to operate” and are designed around GPS only because it’s “easy and cheap.”
The 2018 National Defense Authorization Act included funding for the Departments of Defense, Homeland Security and Transportation to jointly conduct demonstrations of various alternatives to GPS, which were concluded last March.
Eleven potential systems were tested, including eLoran, a low-frequency, high-power timing and navigation system transmitted from terrestrial towers at Coast Guard facilities throughout the United States.
“China, Russia, Iran, South Korea and Saudi Arabia all have eLoran systems because they don’t want to be as vulnerable as we are to disruptions of signals from space,” said Dana Goward, the president of the Resilient Navigation and Timing Foundation, a nonprofit that advocates for the implementation of an eLoran backup for GPS.
Also under consideration by federal authorities are timing systems delivered via fiber optic network and satellite systems in a lower orbit than GPS, which therefore have a stronger signal, making them harder to hack. A report on the technologies was submitted to Congress last week.
Prior to the report’s submission, Karen Van Dyke, the director of the Office of Positioning, Navigation and Timing and Spectrum Management at the Department of Transportation, predicted that the recommendation would probably not be a one-size-fits-all approach but would embrace “multiple and diverse technologies” to spread out the risk.
Indicators are that the government is likely to develop standards for GPS backup systems and require their use in critical sectors, but not feel obliged to wholly fund or build such systems for public use.
Last February, Donald Trump signed an executive order titled Strengthening National Resilience Through Responsible Use of Positioning, Navigation and Timing Services that essentially put GPS users on notice that vital systems needed to be designed to cope with the increasing likelihood of outages or corrupted data and that they must have their own contingency plans should they occur.
“They think the critical infrastructure folks should figure out commercial services to support themselves in terms of timing and navigation,” said Mr. Goward. “I don’t know what they think first responders, ordinary citizens and small businesses are supposed to do.”
The fear is that debate and deliberation will continue, when time is running out.
[End of OpEd Piece]
___________________________________________________________________________
Ten Ways Your Smartphone Knows Where You Are (PC World, April 6, 2012):
Also refer to the illustration above right:
"One of the most important capabilities that smartphones now have is knowing where they are. More than desktops, laptops, personal navigation devices or even tablets, which are harder to take with you, a smartphone can combine its location with many other pieces of data to make new services available.
"There's a gamification aspect, there's a social aspect, and there's a utilitarian aspect," said analyst Avi Greengart of Current Analysis. Greengart believes cellphone location is in its second stage, moving beyond basic mapping and directions to social and other applications.
The third stage may bring uses we haven't even foreseen.
Like other digital technologies, these new capabilities come with worries as well as benefits. Consumers are particularly concerned about privacy when it comes to location because knowing where you are has implications for physical safety from stalking or arrest, said Seth Schoen, senior staff technologist at the Electronic Frontier Foundation. Yet most people have embraced location-based services without thinking about dangers such as service providers handing over location data in lawsuits or hackers stealing it from app vendors.
"This transition has been so quick that people haven't exactly thought through the implications on a large scale," Schoen said. "Most people aren't even very clear on which location technologies are active and which are passive." Many app-provider practices are buried in long terms of service. Risk increases with the number of apps that you authorize to collect location data, according to Schoen, so consumers have at least one element of control.
There are at least 10 different systems in use or being developed that a phone could use to identify its location. In most cases, several are used in combination, with one stepping in where another becomes less effective:
#1 GPS: Global Positioning System:
GPS was developed by the U.S. Department of Defense and was first included in cellphones in the late 1990s. It's still the best-known way to find your location outdoors. GPS uses a constellation of satellites that send location and timing data from space directly to your phone.
If the phone can pick up signals from three satellites, it can show where you are on a flat map, and with four, it can also show your elevation. Other governments have developed their own systems similar to GPS, but rather than conflicting with it, they can actually make outdoor location easier. Russia's GLONASS is already live and China's Compass is in trials.
Europe's Galileo and Japan's Quasi-Zenith Satellite System are also on the way. Phone chip makers are developing processors that can use multiple satellite constellations to get a location fix faster.
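The position fix itself comes from intersecting the ranges implied by each satellite's signal travel time. The sketch below shows only the geometric idea, in two dimensions, with exact ranges and made-up transmitter coordinates; a real GPS receiver solves a similar system in three dimensions and also solves for its own clock error, which is one reason a fourth satellite is needed in practice.

```python
def trilaterate_2d(beacons):
    """beacons: three (x, y, range) tuples in consistent units."""
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = beacons
    # Subtracting the circle equations pairwise gives two linear equations in (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1                  # Cramer's rule for the 2x2 system
    return ((c1 * b2 - c2 * b1) / det,
            (a1 * c2 - a2 * c1) / det)

# Hypothetical transmitters at known positions; ranges measured to the point (3, 4).
print(trilaterate_2d([(0, 0, 5.0), (10, 0, 8.062), (0, 10, 6.708)]))   # approx. (3.0, 4.0)
```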
#2 Assisted GPS:
This works well once your phone finds three or four satellites, but that may take a long time, or not happen at all if you're indoors or in an "urban canyon" of buildings that reflect satellite signals. Assisted GPS describes a collection of tools that help to solve that problem.
One reason for the wait is that when it first finds the satellites, the phone needs to download information about where they will be for the next four hours. The phone needs that information to keep tracking the satellites. As soon as the information reaches the phone, full GPS service starts.
Carriers can now send that data over a cellular or Wi-Fi network, which is a lot faster than a satellite link. This may cut GPS startup time from 45 seconds to 15 seconds or less, though it's still unpredictable, said Guylain Roy-MacHabee, CEO of location technology company RX Networks.
#3 Synthetic GPS:
The form of assisted GPS described above still requires an available data network and the time to transmit the satellite information. Synthetic GPS uses computing power to forecast satellites' locations days or weeks in advance.
This function began in data centers but increasingly can be carried out on phones themselves, according to Roy-MacHabee of RX, which specializes in this type of technology. With such a cache of satellite data on board, a phone often can identify its location in two seconds or less, he said.
#4 Cell ID:
However, all the technologies that speed up GPS still require the phone to find three satellites. Carriers already know how to locate phones without GPS, and they knew it before phones got the feature. Carriers figure out which cell a customer is using, and how far they are from the neighboring cells, with a technology called Cell ID.
By knowing which sector of which base station a given phone is using, and using a database of base-station identification numbers and locations, the carriers can associate the phone's location with that of the cell tower. This system tends to be more precise in urban areas with many small cells than in rural areas, where cells may cover an area several kilometers in diameter.
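In its simplest form, Cell ID location amounts to a database lookup keyed on the serving cell's identifiers, roughly as sketched below. The identifiers, coordinates, and radii are invented for illustration.

```python
# Hypothetical lookup table: (mobile country code, network code, area, cell id) -> tower position.
CELL_DATABASE = {
    (310, 410, 5021, 90210): {"lat": 37.3318, "lon": -121.8863, "radius_m": 800},
    (310, 410, 5021, 90211): {"lat": 37.3402, "lon": -121.8950, "radius_m": 1200},
}

def locate_by_cell(mcc, mnc, lac, cid):
    """Return the tower's recorded position, or None if the cell is unknown."""
    return CELL_DATABASE.get((mcc, mnc, lac, cid))

print(locate_by_cell(310, 410, 5021, 90210))
```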
#5 Wi-Fi:
Wi-fi can do much the same thing as Cell ID, but with greater precision because Wi-Fi access points cover a smaller area. There are actually two ways Wi-Fi can be used to determine location.
The most common, called RSSI (received signal strength indication), takes the signals your phone detects from nearby access points and refers to a database of Wi-Fi networks. The database says where each uniquely identified access point is located. Using signal strength to determine distance, RSSI determines where you are (down to tens of meters) in relation to those known access points.
The other form of Wi-Fi location, wireless fingerprinting, uses profiles of given places that are based on the pattern of Wi-Fi signals found there. This technique is best for places that you or other cellphone users visit frequently. The fingerprint may be created and stored the first time you go there, or a service provider may send someone out to stand in certain spots in a building and record a fingerprint for each one.
Fingerprinting can identify your location to within just a few meters, said Charlie Abraham, vice president of engineering at Broadcom's GPS division, which makes chipsets that can use a variety of location mechanisms.
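One common way to turn RSSI readings into a position estimate is a weighted centroid of the known access-point locations, weighting stronger (closer) access points more heavily. The sketch below assumes invented access-point coordinates and a simple log-distance path-loss model to convert RSSI to distance; production systems are considerably more elaborate.

```python
# Known access points (from a database) and the RSSI in dBm the phone measured for each.
# Coordinates are invented; RSSI-to-distance uses a simple log-distance path-loss model.
ACCESS_POINTS = {
    "aa:bb:cc:00:00:01": {"x": 0.0,  "y": 0.0,  "rssi": -45},
    "aa:bb:cc:00:00:02": {"x": 20.0, "y": 0.0,  "rssi": -60},
    "aa:bb:cc:00:00:03": {"x": 0.0,  "y": 20.0, "rssi": -70},
}

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40, path_loss_exponent=2.5):
    """Rough distance in metres from a log-distance path-loss model."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def weighted_centroid(aps):
    weights = {mac: 1.0 / rssi_to_distance(ap["rssi"]) for mac, ap in aps.items()}
    total = sum(weights.values())
    x = sum(aps[mac]["x"] * w for mac, w in weights.items()) / total
    y = sum(aps[mac]["y"] * w for mac, w in weights.items()) / total
    return x, y

print(weighted_centroid(ACCESS_POINTS))   # biased toward the strongest access point
```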
#6 Inertial Sensors:
If you go into a place where no wireless system works, inertial sensors can keep track of your location based on other inputs. Most smartphones now come with three inertial sensors: a compass (or magnetometer) to determine direction, an accelerometer to report how fast your phone is moving in that direction, and a gyroscope to sense turning motions.
Together, these sensors can determine your location with no outside inputs, but only for a limited time. They'll work for minutes, but not tens of minutes, Broadcom's Abraham said. The classic use case is driving into a tunnel: If the phone knows your location from the usual sources before you enter, it can then determine where you've gone from the speed and direction you're moving.
More commonly, these tools are used in conjunction with other location systems, sometimes compensating for them in areas where they are weak, Abraham said.
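A crude dead-reckoning loop along these lines might look like the sketch below, which assumes a stream of (heading, speed, time-step) samples already derived from the compass, accelerometer, and gyroscope. Real implementations must also manage sensor drift, which is why this approach only holds up for a limited time.

```python
import math

def dead_reckon(start_x, start_y, samples):
    """samples: iterable of (heading_degrees, speed_m_per_s, dt_seconds).

    Heading is measured clockwise from north, as a compass reports it.
    """
    x, y = start_x, start_y
    for heading_deg, speed, dt in samples:
        heading = math.radians(heading_deg)
        x += speed * dt * math.sin(heading)   # east component
        y += speed * dt * math.cos(heading)   # north component
    return x, y

# Walk north for 10 s, then east for 5 s, at 1.5 m/s (invented sample stream).
samples = [(0.0, 1.5, 1.0)] * 10 + [(90.0, 1.5, 1.0)] * 5
print(dead_reckon(0.0, 0.0, samples))         # approx. (7.5 east, 15.0 north)
```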
#7 Barometer:
Outdoor navigation on a sidewalk or street typically happens on one level, either going straight or making right or left turns. But indoors, it makes a difference what floor of the building you're on. GPS could read this, except that it's usually hard to get good GPS coverage indoors or even in urban areas, where the satellite signals bounce off tall buildings.
One way to determine elevation is a barometer, which uses the principle that air gets thinner the farther up you go. Some smartphones already have chips that can detect barometric pressure, but this technique isn't usually suited for use by itself, RX's Roy-MacHabee said.
To use it, the phone needs to pull down local weather data for a baseline figure on barometric pressure, and conditions inside a building such as heating or air-conditioning flows can affect the sensor's accuracy, he said.
A barometer works best with mobile devices that have been carefully calibrated for a specific building, so it might work in your own office but not in a public library, Roy-MacHabee said. Barometers are best used in combination with other tools, including GPS, Wi-Fi and short-range systems that register that you've gone past a particular spot.
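The pressure-to-altitude conversion mentioned here is typically some variant of the standard barometric formula, sketched below. The sea-level pressure argument is the "baseline figure" pulled from local weather data; the example pressures are invented, and indoor airflow still limits accuracy in practice.

```python
def pressure_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Altitude estimate (metres) from the international barometric formula."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

# One storey is roughly 3 m, i.e. about 0.36 hPa of pressure difference near sea level.
ground_floor = pressure_altitude_m(1013.25)          # 0.0 m
third_floor = pressure_altitude_m(1012.2)            # approx. 8.7 m higher
print(round(third_floor - ground_floor, 1))
```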
#8 Ultrasonic:
Sometimes just detecting whether someone has entered a certain area says something about what they're doing. This can be done with short-range wireless systems, such as RFID (radio-frequency identification) with a badge. NFC (near-field communication) is starting to appear in phones and could be used for checkpoints, but manufacturers' main intention for NFC is payments.
However, shopper loyalty company Shopkick is already using a short-range system to verify that consumers have walked into a store. Instead of using a radio, Shopkick broadcasts ultrasonic tones just inside the doors of a shop.
If the customer has the Shopkick app running when they walk through the door, the phone will pick up the tone through its microphone and the app will tell Shopkick that they've entered.
The shopper can earn points, redeemable for gift cards and other rewards, just for walking into the store, and those show up immediately. Shopkick developed the ultrasonic system partly because the tones can't penetrate walls or windows, which would let people collect points just for walking by, CTO Aaron Emigh said. They travel about 150 feet (46 meters) inside the store.
Every location of every store has a unique set of tones, which are at too high a frequency for humans to hear. Dogs can hear them, but tests showed they don't mind, Emigh said.
#9 Bluetooth Beacons:
Very precise location can be achieved in a specific area, such as inside a retail store, using beacons that send out signals via Bluetooth. The beacons, smaller than a cellphone, are placed every few meters and can communicate with any mobile device equipped with Bluetooth 4.0, the newest version of the standard.
Using a technique similar to Wi-Fi fingerprinting, the venue owner can use signals from this dense network of transmitters to identify locations within the space, Broadcom's Abraham said. Nokia, which is participating in a live in-store trial of Bluetooth beacons, says the system can determine location to within 10 centimeters. With location sensing that specific, a store could tell when you were close to a specific product on a shelf and offer a promotion, according to Nokia.
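Whether over Wi-Fi or Bluetooth beacons, fingerprinting can be reduced to a nearest-neighbour match between the signal strengths the phone currently sees and a survey database, roughly as in the sketch below. The beacon IDs, survey points, and RSSI values are all invented for illustration.

```python
# Survey database: for each surveyed spot, the RSSI (dBm) seen from each beacon.
FINGERPRINTS = {
    "aisle 3, shelf A": {"beacon-01": -50, "beacon-02": -72, "beacon-03": -80},
    "aisle 3, shelf B": {"beacon-01": -63, "beacon-02": -55, "beacon-03": -77},
    "checkout":         {"beacon-01": -82, "beacon-02": -70, "beacon-03": -48},
}

def match_fingerprint(observed, database=FINGERPRINTS, missing_dbm=-100):
    """Return the surveyed spot whose fingerprint is closest (squared Euclidean) to what we see."""
    def distance(recorded):
        beacons = set(observed) | set(recorded)
        return sum((observed.get(b, missing_dbm) - recorded.get(b, missing_dbm)) ** 2
                   for b in beacons)
    return min(database, key=lambda spot: distance(database[spot]))

print(match_fingerprint({"beacon-01": -60, "beacon-02": -57, "beacon-03": -79}))
# -> "aisle 3, shelf B"
```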
#10 Terrestrial Transmitters:
Australian startup Locata is trying to overcome GPS' limitations by bringing it down to Earth. The company makes location transmitters that use the same principle as GPS but are mounted on buildings and cell towers.
Because they are stationary and provide a much stronger signal to receivers than satellites do from space, Locata's radios can pinpoint a user's location almost instantly to as close as 2 inches, according to Locata CEO Nunzio Gambale.
Locata networks are also more reliable than GPS, he said. The company's receivers currently cost about $2500 and are drawing interest from transportation, defense and public safety customers, but within a few years the technology could be an inexpensive add-on to phones, according to Gambale.
Then, service providers will be its biggest customers, he said. Another company in this field, NextNav, is building a network using licensed spectrum that it says can cover 93 percent of the U.S. population. NextNav's transmitters will be deployed in a ring around each city and take advantage of the long range of its 900MHz spectrum, said Chris Gates, vice president of strategy and development.
[End of Article]
___________________________________________________________________________
The Global Positioning System (GPS) is a space-based navigation system that provides location and time information in all weather conditions, anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.
The system provides critical capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains it, and makes it freely accessible to anyone with a GPS receiver.
The U.S. began the GPS project in 1973 to overcome the limitations of previous navigation systems, integrating ideas from several predecessors, including a number of classified engineering design studies from the 1960s.
The U.S. Department of Defense (DoD) developed the system, which originally used 24 satellites. It became fully operational in 1995. Roger L. Easton, Ivan A. Getting and Bradford Parkinson are credited with inventing it.
Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block IIIA satellites and Next Generation Operational Control System (OCX). Announcements from Vice President Al Gore and the White House in 1998 initiated these changes. In 2000, the U.S. Congress authorized the modernization effort, GPS III.
In addition to GPS, other systems are in use or under development. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from incomplete coverage of the globe until the mid-2000s.
There are also the planned European Union Galileo positioning system, China's BeiDou Navigation Satellite System, the Japanese Quasi-Zenith Satellite System, and India's Indian Regional Navigation Satellite System.
Click on any of the following blue hyperlinks for further amplification:
By Kate Murphy
Kate Murphy, a frequent contributor to The New York Times, is a commercial pilot and author of “You’re Not Listening: What You’re Missing and Why It Matters.”
Time was when nobody knew, or even cared, exactly what time it was. The movement of the sun, phases of the moon and changing seasons were sufficient indicators. But since the Industrial Revolution, we’ve become increasingly dependent on knowing the time, and with increasing accuracy. Not only does the time tell us when to sleep, wake, eat, work and play; it tells automated systems when to execute financial transactions, bounce data between cellular towers and throttle power on the electrical grid.
Coordinated Universal Time, or U.T.C., the global reference for timekeeping, is beamed down to us from extremely precise atomic clocks aboard Global Positioning System (GPS) satellites. The time it takes for GPS signals to reach receivers is also used to calculate location for air, land and sea navigation.
Owned and operated by the U.S. government, GPS is likely the least recognized, and least appreciated, part of our critical infrastructure. Indeed, most of our critical infrastructure would cease to function without it.
The problem is that GPS signals are incredibly weak, due to the distance they have to travel from space, making them subject to interference and vulnerable to jamming and what is known as spoofing, in which another signal is passed off as the original. And the satellites themselves could easily be taken out by hurtling space junk or the sun coughing up a fireball.
As intentional and unintentional GPS disruptions are on the rise, experts warn that our overreliance on the technology is courting disaster, but they are divided on what to do about it.
“If we don’t get good backups on line, then GPS is just a soft rib of ours, and we could be punched here very quickly,” said Todd Humphreys, an associate professor of aerospace engineering at the University of Texas in Austin. If GPS was knocked out, he said, you’d notice. Think widespread power outages, financial markets seizing up and the transportation system grinding to a halt. Grocers would be unable to stock their shelves, and Amazon would go dark. Emergency responders wouldn’t be able to find you, and forget about using your cellphone.
Mr. Humphreys got the attention of the U.S. Department of Defense and the Federal Aviation Administration about this issue back in 2008 when he published a paper showing he could spoof GPS receivers. At the time, he said he thought the threat came mainly from hackers with something to prove: “I didn’t even imagine that the level of interference that we’ve been seeing recently would be attributable to state actors.”
More than 10,000 incidents of GPS interference have been linked to China and Russia in the past five years. Ship captains have reported GPS errors showing them 20-120 miles inland when they were actually sailing off the coast of Russia in the Black Sea.
Also well documented are ships suddenly disappearing from navigation screens while maneuvering in the Port of Shanghai. After GPS disruptions at Tel Aviv’s Ben Gurion Airport in 2019, Israeli officials pointed to Syria, where Russia has been involved in the nation’s long-running civil war. And last summer, the United States Space Command accused Russia of testing antisatellite weaponry.
But it’s not just nation-states messing with GPS. Spoofing and jamming devices have gotten so inexpensive and easy to use that delivery drivers use them so their dispatchers won’t know they’re taking long lunch breaks or having trysts at Motel 6. Teenagers use them to foil their parents’ tracking apps and to cheat at Pokémon Go. More nefariously, drug cartels and human traffickers have spoofed border control drones. Dodgy freight forwarders may use GPS jammers or spoofers to cloak or change the time stamps on arriving cargo.
These disruptions not only affect their targets; they can also affect anyone using GPS in the vicinity.
“You might not think you’re a target, but you don’t have to be,” said Guy Buesnel, a position, navigation and timing specialist with the British network and cybersecurity firm Spirent. “We’re seeing widespread collateral or incidental effects.” In 2013 a New Jersey truck driver interfered with Newark Liberty International Airport’s satellite-based tracking system when he plugged a GPS jamming device into his vehicle’s cigarette lighter to hide his location from his employer.
The risk posed by our overdependency on GPS has been raised repeatedly at least since 2000, when its signals were fully opened to civilian use. Launched in 1978, GPS was initially reserved for military purposes, but after the signals became freely available, the commercial sector quickly realized their utility, leading to widespread adoption and innovation.
Nowadays, most people carry a GPS receiver everywhere they go — embedded in a mobile phone, tablet, watch or fitness tracker.
An emergency backup for GPS was mandated by the 2018 National Timing and Resilience Security Act. The legislation said a reliable alternate system needed to be operational within two years, but that hasn’t happened yet.
Part of the reason for the holdup, aside from a pandemic, is disagreement between government agencies and industry groups on what is the best technology to use, who should be responsible for it, which GPS capabilities must be backed up and with what degree of precision.
Of course, business interests that rely on GPS want a backup that’s just as good as the original, just as accessible and also free. Meanwhile, many government officials tend to think it shouldn’t be all their responsibility, particularly when the budget to manage and maintain GPS hit $1.7 billion in 2020.
“We’re becoming more nuanced in our approach,” said James Platt, the chief of strategic defense initiatives for the Cybersecurity and Infrastructure Security Agency, a division of the Department of Homeland Security. “We recognize some things are going to need to be backed up, but we’re also realizing that maybe some systems don’t need GPS to operate” and are designed around GPS only because it’s “easy and cheap.”
The 2018 National Defense Authorization Act included funding for the Departments of Defense, Homeland Security and Transportation to jointly conduct demonstrations of various alternatives to GPS, which were concluded last March.
Eleven potential systems were tested, including eLoran, a low-frequency, high-power timing and navigation system transmitted from terrestrial towers at Coast Guard facilities throughout the United States.
“China, Russia, Iran, South Korea and Saudi Arabia all have eLoran systems because they don’t want to be as vulnerable as we are to disruptions of signals from space,” said Dana Goward, the president of the Resilient Navigation and Timing Foundation, a nonprofit that advocates for the implementation of an eLoran backup for GPS.
Also under consideration by federal authorities are timing systems delivered via fiber optic network and satellite systems in a lower orbit than GPS, which therefore have a stronger signal, making them harder to hack. A report on the technologies was submitted to Congress last week.
Prior to the report’s submission, Karen Van Dyke, the director of the Office of Positioning, Navigation and Timing and Spectrum Management at the Department of Transportation, predicted that the recommendation would probably not be a one-size-fits-all approach but would embrace “multiple and diverse technologies” to spread out the risk.
Indications are that the government is likely to develop standards for GPS backup systems and require their use in critical sectors, but will not feel obliged to wholly fund or build such systems for public use.
Last February, Donald Trump signed an executive order titled Strengthening National Resilience Through Responsible Use of Positioning, Navigation and Timing Services that essentially put GPS users on notice that vital systems needed to be designed to cope with the increasing likelihood of outages or corrupted data and that they must have their own contingency plans should they occur.
“They think the critical infrastructure folks should figure out commercial services to support themselves in terms of timing and navigation,” said Mr. Goward. “I don’t know what they think first responders, ordinary citizens and small businesses are supposed to do.”
The fear is that debate and deliberation will continue, when time is running out.
[End of OpEd Piece]
___________________________________________________________________________
Ten Ways Your Smartphone Knows Where You Are, by PCWorld, April 6, 2012:
Also refer to the illustration above right:
"One of the most important capabilities that smartphones now have is knowing where they are. More than desktops, laptops, personal navigation devices or even tablets, which are harder to take with you, a smartphone can combine its location with many other pieces of data to make new services available.
"There's a gamification aspect, there's a social aspect, and there's a utilitarian aspect," said analyst Avi Greengart of Current Analysis. Greengart believes cellphone location is in its second stage, moving beyond basic mapping and directions to social and other applications.
The third stage may bring uses we haven't even foreseen.
Like other digital technologies, these new capabilities come with worries as well as benefits. Consumers are particularly concerned about privacy when it comes to location because knowing where you are has implications for physical safety from stalking or arrest, said Seth Schoen, senior staff technologist at the Electronic Frontier Foundation. Yet most people have embraced location-based services without thinking about dangers such as service providers handing over location data in lawsuits or hackers stealing it from app vendors.
"This transition has been so quick that people haven't exactly thought through the implications on a large scale," Schoen said. "Most people aren't even very clear on which location technologies are active and which are passive." Many app-provider practices are buried in long terms of service. Risk increases with the number of apps that you authorize to collect location data, according to Schoen, so consumers have at least one element of control.
There are at least 10 different systems in use or being developed that a phone could use to identify its location. In most cases, several are used in combination, with one stepping in where another becomes less effective:
#1 GPS: Global Positioning System:
GPS was developed by the U.S. Department of Defense and was first included in cellphones in the late 1990s. It's still the best-known way to find your location outdoors. GPS uses a constellation of satellites that send location and timing data from space directly to your phone.
If the phone can pick up signals from three satellites, it can show where you are on a flat map, and with four, it can also show your elevation. Other governments have developed their own systems similar to GPS, but rather than conflicting with it, they can actually make outdoor location easier. Russia's GLONASS is already live and China's Compass is in trials.
Europe's Galileo and Japan's Quasi-Zenith Satellite System are also on the way. Phone chip makers are developing processors that can use multiple satellite constellations to get a location fix faster.
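As a rough illustration of the idea, the following Python sketch solves a receiver position from four satellite positions and measured ranges using simple iterative least squares. The coordinates and ranges are made-up placeholders, and a real receiver also solves for its own clock error, which is one reason a fourth satellite is needed in practice.

# Minimal sketch: estimating a receiver position from satellite ranges
# (simplified trilateration; real GPS also solves for receiver clock bias).
import numpy as np

# Hypothetical satellite positions (km) and measured ranges (km).
sats = np.array([
    [15600.0,  7540.0, 20140.0],
    [18760.0,  2750.0, 18610.0],
    [17610.0, 14630.0, 13480.0],
    [19170.0,   610.0, 18390.0],
])
ranges = np.array([21114.0, 20900.0, 21500.0, 21200.0])

x = np.zeros(3)                      # initial guess at Earth's center
for _ in range(10):                  # Gauss-Newton iterations
    diffs = x - sats                 # vectors from each satellite to the guess
    dists = np.linalg.norm(diffs, axis=1)
    residuals = dists - ranges       # how far off each predicted range is
    jacobian = diffs / dists[:, None]
    # Least-squares step toward the position that best fits all ranges.
    step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
    x -= step

print("Estimated receiver position (km):", x)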
#2 Assisted GPS:
This works well once your phone finds three or four satellites, but that may take a long time, or not happen at all if you're indoors or in an "urban canyon" of buildings that reflect satellite signals. Assisted GPS describes a collection of tools that help to solve that problem.
One reason for the wait is that when it first finds the satellites, the phone needs to download information about where they will be for the next four hours. The phone needs that information to keep tracking the satellites. As soon as the information reaches the phone, full GPS service starts.
Carriers can now send that data over a cellular or Wi-Fi network, which is a lot faster than a satellite link. This may cut GPS startup time from 45 seconds to 15 seconds or less, though it's still unpredictable, said Guylain Roy-MacHabee, CEO of location technology company RX Networks.
#3 Synthetic GPS:
The form of assisted GPS described above still requires an available data network and the time to transmit the satellite information. Synthetic GPS uses computing power to forecast satellites' locations days or weeks in advance.
This function began in data centers but increasingly can be carried out on phones themselves, according to Roy-MacHabee of RX, which specializes in this type of technology. With such a cache of satellite data on board, a phone often can identify its location in two seconds or less, he said.
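A minimal sketch of the caching idea, with made-up numbers: the phone keeps a small table of predicted satellite positions generated in advance and interpolates it locally instead of waiting for a download.

# Minimal sketch of "synthetic GPS": the phone carries a locally cached table
# of predicted satellite positions so it can start computing a fix right away.
# The timestamps and positions below are hypothetical placeholders.
import bisect

cached_orbit = [
    # (unix_time, (x, y, z) in km), predicted days in advance
    (1_700_000_000, (15600.0, 7540.0, 20140.0)),
    (1_700_000_900, (15820.0, 7310.0, 20050.0)),
    (1_700_001_800, (16030.0, 7080.0, 19950.0)),
]

def predicted_position(t):
    """Linearly interpolate the cached prediction at time t."""
    times = [entry[0] for entry in cached_orbit]
    i = bisect.bisect_left(times, t)
    if i == 0:
        return cached_orbit[0][1]
    if i == len(cached_orbit):
        return cached_orbit[-1][1]
    (t0, p0), (t1, p1) = cached_orbit[i - 1], cached_orbit[i]
    w = (t - t0) / (t1 - t0)
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

print(predicted_position(1_700_000_450))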
#4 Cell ID:
However, all the technologies that speed up GPS still require the phone to find three satellites. Carriers already know how to locate phones without GPS, and they knew it before phones got the feature. Carriers figure out which cell a customer is using, and how far they are from the neighboring cells, with a technology called Cell ID.
By knowing which sector of which base station a given phone is using, and using a database of base-station identification numbers and locations, the carriers can associate the phone's location with that of the cell tower. This system tends to be more precise in urban areas with many small cells than in rural areas, where cells may cover an area several kilometers in diameter.
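A minimal sketch of a Cell ID lookup, assuming a hypothetical tower database keyed by the standard cell identifiers (country code, network code, location area, cell ID):

# Minimal sketch of Cell ID positioning: map the identifiers of the serving
# cell to a known tower location in a database. The tower data is hypothetical.
cell_database = {
    # (mcc, mnc, lac, cell_id): (latitude, longitude, approx_radius_km)
    (310, 410, 6100, 22331): (40.7420, -73.9890, 0.5),
    (310, 410, 6100, 22332): (40.7505, -73.9934, 0.5),
}

def locate_by_cell(mcc, mnc, lac, cell_id):
    """Return the tower's coordinates and coverage radius, or None if unknown."""
    return cell_database.get((mcc, mnc, lac, cell_id))

print(locate_by_cell(310, 410, 6100, 22331))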
#5 Wi-Fi:
Wi-Fi can do much the same thing as Cell ID, but with greater precision because Wi-Fi access points cover a smaller area. There are actually two ways Wi-Fi can be used to determine location.
The most common, called RSSI (received signal strength indication), takes the signals your phone detects from nearby access points and refers to a database of Wi-Fi networks. The database says where each uniquely identified access point is located. Using signal strength to determine distance, RSSI determines where you are (down to tens of meters) in relation to those known access points.
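A minimal sketch of the RSSI approach, with hypothetical access-point locations and readings: signal strength is converted to a rough distance with a log-distance path-loss model, and the known access-point positions are then averaged, with closer ones weighted more heavily.

# Minimal sketch of RSSI-based Wi-Fi positioning. The access-point survey
# data and scan results below are hypothetical.
access_points = {
    # BSSID: (x_m, y_m) surveyed location in a local coordinate frame
    "aa:bb:cc:00:00:01": (0.0, 0.0),
    "aa:bb:cc:00:00:02": (30.0, 0.0),
    "aa:bb:cc:00:00:03": (0.0, 25.0),
}
scan = {  # measured RSSI in dBm
    "aa:bb:cc:00:00:01": -48,
    "aa:bb:cc:00:00:02": -71,
    "aa:bb:cc:00:00:03": -60,
}

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40, path_loss_exp=2.5):
    """Log-distance model: every 10*n dB of extra loss is one decade of distance."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

weights, wx, wy = 0.0, 0.0, 0.0
for bssid, rssi in scan.items():
    x, y = access_points[bssid]
    w = 1.0 / rssi_to_distance(rssi)   # closer access points count more
    weights += w
    wx += w * x
    wy += w * y

print("Estimated position (m):", (wx / weights, wy / weights))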
The other form of Wi-Fi location, wireless fingerprinting, uses profiles of given places that are based on the pattern of Wi-Fi signals found there. This technique is best for places that you or other cellphone users visit frequently. The fingerprint may be created and stored the first time you go there, or a service provider may send someone out to stand in certain spots in a building and record a fingerprint for each one.
Fingerprinting can identify your location to within just a few meters, said Charlie Abraham, vice president of engineering at Broadcom's GPS division, which makes chipsets that can use a variety of location mechanisms.
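A minimal sketch of fingerprint matching, using a hypothetical survey database: the current scan is compared against stored fingerprints, and the closest one in signal space wins.

# Minimal sketch of Wi-Fi fingerprinting: compare the current RSSI pattern
# against previously recorded fingerprints and pick the closest match.
import math

fingerprints = {  # hypothetical survey data: location -> {AP: RSSI in dBm}
    "lobby":        {"ap1": -45, "ap2": -70, "ap3": -62},
    "meeting_room": {"ap1": -68, "ap2": -50, "ap3": -55},
    "cafeteria":    {"ap1": -75, "ap2": -60, "ap3": -44},
}

def fingerprint_distance(observed, recorded):
    """Euclidean distance in signal space over the APs both scans share."""
    shared = observed.keys() & recorded.keys()
    return math.sqrt(sum((observed[a] - recorded[a]) ** 2 for a in shared))

observed = {"ap1": -47, "ap2": -69, "ap3": -64}
best = min(fingerprints, key=lambda spot: fingerprint_distance(observed, fingerprints[spot]))
print("Most likely location:", best)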
#6 Inertial Sensors:
If you go into a place where no wireless system works, inertial sensors can keep track of your location based on other inputs. Most smartphones now come with three inertial sensors: a compass (or magnetometer) to determine direction, an accelerometer to report how fast your phone is moving in that direction, and a gyroscope to sense turning motions.
Together, these sensors can determine your location with no outside inputs, but only for a limited time. They'll work for minutes, but not tens of minutes, Broadcom's Abraham said. The classic use case is driving into a tunnel: If the phone knows your location from the usual sources before you enter, it can then determine where you've gone from the speed and direction you're moving.
More commonly, these tools are used in conjunction with other location systems, sometimes compensating for them in areas where they are weak, Abraham said.
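A minimal dead-reckoning sketch with made-up sensor samples: speed and heading are integrated over time to move the last known position forward, which is roughly what happens in the tunnel example above.

# Minimal sketch of dead reckoning with inertial sensors. The samples are
# hypothetical; real phones fuse accelerometer, gyroscope and compass data,
# and the estimate drifts badly after a few minutes, as noted above.
import math

x, y = 0.0, 0.0            # last known position offset (m), e.g. from GPS
samples = [
    # (duration_s, speed_m_per_s, heading_deg clockwise from north)
    (1.0, 12.0, 90.0),
    (1.0, 12.5, 92.0),
    (1.0, 13.0, 95.0),
]

for dt, speed, heading in samples:
    rad = math.radians(heading)
    x += speed * dt * math.sin(rad)   # east component
    y += speed * dt * math.cos(rad)   # north component

print(f"Dead-reckoned offset: {x:.1f} m east, {y:.1f} m north")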
#7 Barometer:
Outdoor navigation on a sidewalk or street typically happens on one level, either going straight or making right or left turns. But indoors, it makes a difference what floor of the building you're on. GPS could determine this, except that it's usually hard to get good GPS coverage indoors, or even in urban areas, where the satellite signals bounce off tall buildings.
One way to determine elevation is a barometer, which uses the principle that air gets thinner the farther up you go. Some smartphones already have chips that can detect barometric pressure, but this technique isn't usually suited for use by itself, RX's Roy-MacHabee said.
To use it, the phone needs to pull down local weather data for a baseline figure on barometric pressure, and conditions inside a building such as heating or air-conditioning flows can affect the sensor's accuracy, he said.
A barometer works best with mobile devices that have been carefully calibrated for a specific building, so it might work in your own office but not in a public library, Roy-MacHabee said. Barometers are best used in combination with other tools, including GPS, Wi-Fi and short-range systems that register that you've gone past a particular spot.
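A minimal sketch of the pressure-to-altitude conversion using the standard barometric formula; the sea-level reference pressure would have to come from the local weather data mentioned above, and the readings here are hypothetical.

# Minimal sketch of barometric altitude: the international barometric formula
# converts a pressure reading into height above the reference pressure level.
def pressure_to_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    # Reference pressure should come from local weather data, not the default.
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

ground = pressure_to_altitude_m(1008.0)    # hypothetical lobby reading
upstairs = pressure_to_altitude_m(1007.5)  # hypothetical upper-floor reading
print(f"Height difference: {upstairs - ground:.1f} m")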
#8 Ultrasonic:
Sometimes just detecting whether someone has entered a certain area says something about what they're doing. This can be done with short-range wireless systems, such as RFID (radio-frequency identification) with a badge. NFC (near-field communication) is starting to appear in phones and could be used for checkpoints, but manufacturers' main intention for NFC is payments.
However, shopper loyalty company Shopkick is already using a short-range system to verify that consumers have walked into a store. Instead of using a radio, Shopkick broadcasts ultrasonic tones just inside the doors of a shop.
If the customer has the Shopkick app running when they walk through the door, the phone will pick up the tone through its microphone and the app will tell Shopkick that they've entered.
The shopper can earn points, redeemable for gift cards and other rewards, just for walking into the store, and those show up immediately. Shopkick developed the ultrasonic system partly because the tones can't penetrate walls or windows, which would let people collect points just for walking by, CTO Aaron Emigh said. They travel about 150 feet (46 meters) inside the store.
Every location of every store has a unique set of tones, which are at too high a frequency for humans to hear. Dogs can hear them, but tests showed they don't mind, Emigh said.
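A minimal sketch of how an app might detect such a tone, using a synthesized audio buffer in place of real microphone input; the 18.5 kHz frequency is a hypothetical placeholder, not Shopkick's actual scheme.

# Minimal sketch of ultrasonic tone detection: check whether a specific
# near-ultrasonic frequency is present in a chunk of audio samples.
import numpy as np

SAMPLE_RATE = 44100
TONE_HZ = 18500                       # hypothetical store beacon frequency

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
# Synthesized "microphone" input: a faint tone buried in noise.
mic = 0.05 * np.sin(2 * np.pi * TONE_HZ * t) + 0.02 * np.random.randn(SAMPLE_RATE)

spectrum = np.abs(np.fft.rfft(mic))
freqs = np.fft.rfftfreq(len(mic), 1 / SAMPLE_RATE)
bin_idx = np.argmin(np.abs(freqs - TONE_HZ))   # FFT bin nearest the tone

tone_power = spectrum[bin_idx]
noise_floor = np.median(spectrum)
print("Tone detected:", tone_power > 10 * noise_floor)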
#9 Bluetooth Beacons:
Very precise location can be achieved in a specific area, such as inside a retail store, using beacons that send out signals via Bluetooth. The beacons, smaller than a cellphone, are placed every few meters and can communicate with any mobile device equipped with Bluetooth 4.0, the newest version of the standard.
Using a technique similar to Wi-Fi fingerprinting, the venue owner can use signals from this dense network of transmitters to identify locations within the space, Broadcom's Abraham said. Nokia, which is participating in a live in-store trial of Bluetooth beacons, says the system can determine location to within 10 centimeters. With location sensing that specific, a store could tell when you were close to a specific product on a shelf and offer a promotion, according to Nokia.
#10 Terrestrial Transmitters:
Australian startup Locata is trying to overcome GPS' limitations by bringing it down to Earth. The company makes location transmitters that use the same principle as GPS but are mounted on buildings and cell towers.
Because they are stationary and provide a much stronger signal to receivers than satellites do from space, Locata's radios can pinpoint a user's location almost instantly to as close as 2 inches, according to Locata CEO Nunzio Gambale.
Locata networks are also more reliable than GPS, he said. The company's receivers currently cost about $2500 and are drawing interest from transportation, defense and public safety customers, but within a few years the technology could be an inexpensive add-on to phones, according to Gambale.
Then, service providers will be its biggest customers, he said. Another company in this field, NextNav, is building a network using licensed spectrum that it says can cover 93 percent of the U.S. population. NextNav's transmitters will be deployed in a ring around each city and take advantage of the long range of its 900MHz spectrum, said Chris Gates, vice president of strategy and development.
[End of Article]
___________________________________________________________________________
The Global Positioning System (GPS) is a space-based navigation system that provides location and time information in all weather conditions, anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.
The system provides critical capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains it, and makes it freely accessible to anyone with a GPS receiver.
The U.S. began the GPS project in 1973 to overcome the limitations of previous navigation systems, integrating ideas from several predecessors, including a number of classified engineering design studies from the 1960s.
The U.S. Department of Defense (DoD) developed the system, which originally used 24 satellites. It became fully operational in 1995. Roger L. Easton, Ivan A. Getting and Bradford Parkinson are credited with inventing it.
Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block IIIA satellites and Next Generation Operational Control System (OCX). Announcements from Vice President Al Gore and the White House in 1998 initiated these changes. In 2000, the U.S. Congress authorized the modernization effort, GPS III.
In addition to GPS, other systems are in use or under development. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from incomplete coverage of the globe until the mid-2000s.
There are also the planned European Union Galileo positioning system, China's BeiDou Navigation Satellite System, the Japanese Quasi-Zenith Satellite System, and India's Indian Regional Navigation Satellite System.
Click on any of the following blue hyperlinks for further amplification:
- History:
- Basic concept of GPS:
- Structure:
- Applications:
- Civilian including Restrictions on civilian use
- Military
- Communication:
- Navigation equations:
- Problem description
- Geometric interpretation:
- Solution methods:
- Error sources and analysis
- Accuracy enhancement and surveying:
- Regulatory spectrum issues concerning GPS receivers
- Other systems
- See also:
Intel Corporation (Intel)
YouTube Video: Inside the Intel Factory Making the Chips That 'Run the Internet' (reported by Bloomberg)
Pictured: Chip Showdown -- A Guide to the Best CPUs by PC World (June 21, 2010)
Intel Corporation (also known as Intel, stylized as intel) is an American multinational corporation and technology company headquartered in Santa Clara, California, in Silicon Valley. It is the world's second-largest and second-highest-valued semiconductor chip maker based on revenue, having been overtaken by Samsung, and is the inventor of the x86 series of microprocessors, the processors found in most personal computers (PCs).
Intel supplies processors for computer system manufacturers such as Apple, Lenovo, HP, and Dell. Intel also manufactures motherboard chipsets, network interface controllers and integrated circuits, flash memory, graphics chips, embedded processors and other devices related to communications and computing.
Intel Corporation was founded on July 18, 1968, by semiconductor pioneers Robert Noyce and Gordon Moore (of Moore's law fame), and widely associated with the executive leadership and vision of Andrew Grove.
The company's name was conceived as a portmanteau of the words integrated and electronics, with co-founder Noyce having been a key inventor of the integrated circuit (microchip). The fact that "intel" is a term for intelligence information also made the name appropriate.
Intel was an early developer of SRAM and DRAM memory chips, which represented the majority of its business until 1981. Although Intel created the world's first commercial microprocessor chip in 1971, it was not until the success of the personal computer (PC) that this became its primary business.
During the 1990s, Intel invested heavily in new microprocessor designs fostering the rapid growth of the computer industry. During this period Intel became the dominant supplier of microprocessors for PCs and was known for aggressive and anti-competitive tactics in defense of its market position, particularly against Advanced Micro Devices (AMD), as well as a struggle with Microsoft for control over the direction of the PC industry.
The Open Source Technology Center at Intel hosts PowerTOP and LatencyTOP, and supports other open-source projects such as Wayland, Intel Array Building Blocks, Threading Building Blocks (TBB), and Xen.
Click on any of the following blue hyperlinks for more about Intel:
- Current operations
- Corporate history
- Acquisition table (2010–2017)
- Product and market history
- Corporate affairs
- Litigation and regulatory issues
- See also:
- 5 nm: the quantum tunneling leakage wall
- ASCI Red
- Advanced Micro Devices
- Comparison of ATI Graphics Processing Units
- Comparison of Intel processors
- Comparison of Nvidia graphics processing units
- Cyrix
- Engineering sample (CPU)
- Graphics Processing Unit (GPU)
- Intel Driver Update Utility
- Intel Museum
- Intel Science Talent Search
- Intel Developer Zone (Intel DZ)
- Intel GMA (Graphics Media Accelerator)
- Intel HD and Iris Graphics
- List of Intel chipsets
- List of Intel CPU micro architectures
- List of Intel manufacturing sites
- List of Intel microprocessors
- List of Intel graphics processing units
- List of Semiconductor Fabrication Plants
- Wintel
- Official website
- Business data for Intel Corp.: Google Finance
- Intel related biographical articles on Wikipedia:
Advanced Micro Devices (AMD)
YouTube Video: AMD's Vision for the Future of Technology
Pictured: The Second Generation AMD Embedded BGA platform with the ASB2 socket features the AMD Turion II Neo or AMD Athlon II Neo processors.....
Advanced Micro Devices, Inc. (AMD) is an American multinational semiconductor company based in Sunnyvale, California, United States, that develops computer processors and related technologies for business and consumer markets.
While it initially manufactured its own processors, the company became fabless after GlobalFoundries was spun off in 2009, outsourcing its manufacturing. AMD's main products include microprocessors, motherboard chipsets, embedded processors and graphics processors for servers, workstations and personal computers, and embedded systems applications.
AMD is the second-largest supplier and only significant rival to Intel in the market for x86-based microprocessors. Since acquiring ATI in 2006, AMD and its competitor Nvidia have dominated the discrete Graphics Processing Unit (GPU) market.
Click on any of the following blue hyperlinks for more about Advanced Micro Devices (AMD):
- Company history
- Products
- Technologies
- Production and fabrication
- Corporate affairs
- See also:
- Bill Gaede
- List of AMD microprocessors
- List of AMD accelerated processing unit microprocessors
- List of AMD graphics processing units
- List of AMD chipsets
- List of ATI chipsets
- 3DNow!
- Cool'n'Quiet
- PowerNow!
- Official website
- AMD Developer Central
- How AMD Processors Work at HowStuffWorks
- Current Institutional Investor
Samsung Electronics
YouTube Video by Samsung Electronics: Samsung Galaxy S9 and S9+: Official Introduction
Pictured: Samsung Honored for Outstanding Design and Engineering with 36 CES 2018 Innovation Awards
Samsung Electronics Co., Ltd. is a South Korean multinational electronics company headquartered in Suwon, South Korea.
Through an extremely complicated ownership structure with some circular ownership, it is the flagship company of the Samsung Group, accounting for 70% of the group's revenue in 2012. Samsung Electronics has assembly plants and sales networks in 80 countries and employs around 308,745 people. It is the world's second-largest information technology company by revenue. As of October 2017, Samsung Electronics' market cap stood at US$372.0 billion.
Samsung has long been a major manufacturer of electronic components such as lithium-ion batteries, semiconductors, chips, flash memory and hard drive devices for clients such as Apple, Sony, HTC and Nokia.
Samsung is the world's largest manufacturer of mobile phones and smartphones fueled by the popularity of its Samsung Galaxy line of devices. The company is also a major vendor of tablet computers, particularly its Android-powered Samsung Galaxy Tab collection, and is generally regarded as pioneering the phablet market through the Samsung Galaxy Note family of devices.
Samsung has been the world's largest television manufacturer since 2006, and the world's largest manufacturer of mobile phones since 2011. It is also the world's largest memory chips manufacturer. In July 2017, Samsung Electronics overtook Intel as the largest semiconductor chip maker in the world.
Samsung, like many other South Korean family-run chaebols, has been criticized for low dividend payouts and other governance practices that favor controlling shareholders at the expense of ordinary investors.
In 2012, Kwon Oh-hyun was appointed the company's CEO but announced in October 2017 that he would resign in March 2018, citing an "unprecedented crisis".
Click on any of the following blue hyperlinks for more about Samsung Electronics:
- History
- Logo
- Operations
- Products
- Management and board of directors
- Market share
- Major clients
- Design
- Environmental record
- Litigation and safety issues
- Sports clubs
- Slogans
- See also:
Apple Computer, Inc. ("Apple"), including the Macintosh Operating Systems
YouTube Video: Apple Unveils iPhone 8, iPhone X by the Associated Press Sept. 12, 2017
Pictured: L-R: Apple Series 3 Smartwatch; iPhone 8 Smartphone; Apple MacBook Pro with Touch Bar
Apple Inc. is an American multinational technology company headquartered in Cupertino, California that designs, develops, and sells consumer electronics, computer software, and online services.
The company's hardware products include the following:
- iPhone smartphone,
- the iPad tablet computer,
- the Mac personal computer,
- the iPod portable media player,
- the Apple Watch smartwatch,
- the Apple TV digital media player,
- and the HomePod smart speaker.
Apple's consumer software includes:
- the macOS and iOS operating systems,
- the iTunes media player,
- the Safari web browser,
- and the iLife and iWork creativity and productivity suites.
Its online services include:
- the iTunes Store,
- the iOS App Store and Mac App Store,
- Apple Music,
- and iCloud.
Apple was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne in April 1976 to develop and sell Wozniak's Apple I personal computer. It was incorporated as Apple Computer, Inc. in January 1977, and sales of its computers, including the Apple II, saw significant momentum and revenue growth for the company.
Within a few years, Jobs and Wozniak had hired a staff of computer designers and had a production line. Apple went public in 1980 to instant financial success. Over the next few years, Apple shipped new computers featuring innovative graphical user interfaces, and Apple's marketing commercials for its products received widespread critical acclaim.
However, the high price tag of its products and limited software titles caused problems, as did power struggles between executives at the company. Jobs resigned from Apple and created his own company.
As the market for personal computers expanded, Apple's computers saw diminishing sales due to lower-priced products from competitors, in particular those offered with the Microsoft Windows operating system. More executive shake-ups followed at Apple until then-CEO Gil Amelio decided in 1997 to buy Jobs' company in order to bring him back.
Jobs regained the position of CEO and began a process of rebuilding Apple's status, which included opening Apple's own retail stores in 2001, making numerous acquisitions of software companies to create a portfolio of software titles, and changing some of the hardware technology used in its computers.
It again saw success and returned to profitability. In January 2007, Jobs announced that Apple Computer, Inc. would be renamed Apple Inc. to reflect its shifted focus toward consumer electronics and announced the iPhone, which saw critical acclaim and significant financial success.
In August 2011, Jobs resigned as CEO due to health complications, and Tim Cook became the new CEO. Two months later, Jobs died, marking the end of an era for the company.
Apple is the world's largest information technology company by revenue and the world's second-largest mobile phone manufacturer after Samsung.
In February 2015, Apple became the first U.S. company to be valued at over US$700 billion.
The company employs 123,000 full-time employees as of September 2017 and maintains 498 retail stores in 22 countries as of July 2017. It operates the iTunes Store, which is the world's largest music retailer. As of January 2016, more than one billion Apple products are actively in use worldwide.
Apple's worldwide annual revenue totaled $229 billion for the 2017 fiscal year. The company enjoys a high level of brand loyalty and has been repeatedly ranked as the world's most valuable brand. However, it receives significant criticism regarding the labor practices of its contractors and its environmental and business practices, including the origins of source materials.
Click on any of the following blue hyperlinks for more about Apple, Inc.:
___________________________________________________________________________
Macintosh Operating Systems
The family of Macintosh operating systems developed by Apple Inc. includes the graphical user interface-based operating systems it has designed for use with its Macintosh series of personal computers since 1984, as well as the related system software it once created for compatible third-party systems.
In 1984, Apple debuted the operating system that is now known as the "Classic" Mac OS with its release of the original Macintosh System Software.
The system, re-branded "Mac OS" in 1996, was preinstalled on every Macintosh until 2002 and offered on Macintosh clones for a short time in the 1990s. Noted for its ease of use, it was also criticized for its lack of modern technologies compared to its competitors.
The current Mac operating system is macOS, originally named "Mac OS X" until 2012 and then "OS X" until 2016. Developed between 1997 and 2001 after Apple's purchase of NeXT, Mac OS X brought an entirely new architecture based on NeXTSTEP, a Unix system, that eliminated many of the technical challenges that the classic Mac OS faced.
The current macOS is preinstalled with every Mac and is updated annually. It is the basis of Apple's current system software for its other devices – iOS, watchOS, tvOS, and audioOS.
Prior to the introduction of Mac OS X, Apple experimented with several other concepts, releasing products designed to bring the Macintosh interface or applications to Unix-like systems, or vice versa, including A/UX, MAE, and MkLinux. Apple's efforts to expand upon and develop a replacement for its classic Mac OS in the 1990s led to a few cancelled projects, code-named Star Trek, Taligent, and Copland.
Although they have different architectures, the Macintosh operating systems share a common set of GUI principles, including a menu bar across the top of the screen; the Finder shell, featuring a desktop metaphor that represents files and applications using icons and relates concepts like directories and file deletion to real-world objects like folders and a trash can; and overlapping windows for multitasking.
Click on any of the following blue hyperlinks for more about Macintosh Operating Systems:
Sony Corporation
YouTube Video of the Future of Sony by CNNMoney
Sony Corporation (often referred to simply as Sony) is a Japanese multinational conglomerate corporation headquartered in Kōnan, Minato, Tokyo.
Sony's diversified business includes:
- consumer and professional electronics,
- gaming,
- entertainment,
- and financial services.
The company is one of the leading manufacturers of electronic products for the consumer and professional markets. Sony was ranked 105th on the 2017 list of Fortune Global 500.
Sony Corporation is the electronics business unit and the parent company of the Sony Group, which is engaged in business through its four operating components:
- electronics (AV, IT & communication products, semiconductors, video games, network services and medical business),
- motion pictures (movies and TV shows),
- music (record labels and music publishing),
- and financial services (banking and insurance).
These make Sony one of the most comprehensive entertainment companies in the world. The group consists of:
- Sony Corporation,
- Sony Pictures,
- Sony Mobile,
- Sony Interactive Entertainment,
- Sony Music,
- Sony Financial Holdings,
- and others.
Sony is among the semiconductor sales leaders and as of 2016, the fifth-largest television manufacturer in the world after Samsung Electronics, LG Electronics, TCL and Hisense.
The company's current slogan is BE MOVED. Their former slogans were The One and Only (1980–1982), It's a Sony (1982–2002), like.no.other (2005–2009), and make.believe (2009–2014).
Sony has a weak tie to the Sumitomo Mitsui Financial Group (SMFG) keiretsu, the successor to the Mitsui keiretsu.
Click on any of the following blue hyperlinks for more about Sony Corporation:
Enterprise Software
YouTube Video: Digital Enterprise Software Suite – The Siemens answer to Industry 4.0 requirements
Pictured: Enterprise Software Integration is the service offered by the Team at 11 Media when our clients require cost-efficient IT solution integration between two or more software applications or services utilized across their business.
Enterprise software, also known as enterprise application software (EAS), is computer software used to satisfy the needs of an organization rather than individual users. Such organizations include businesses, schools, interest-based user groups, clubs, charities, and governments. Enterprise software is an integral part of a (computer-based) information system.
Services provided by enterprise software are typically business-oriented tools such as the following:
- online shopping and online payment processing,
- interactive product catalog,
- automated billing systems,
- security,
- enterprise content management,
- IT service management,
- customer relationship management,
- enterprise resource planning,
- business intelligence,
- project management,
- collaboration,
- human resource management,
- manufacturing,
- occupational health and safety,
- enterprise application integration,
- and enterprise forms automation.
As enterprises have similar departments and systems in common, enterprise software is often available as a suite of customizable programs. Generally, the complexity of these tools requires specialist capabilities and specific knowledge.
Enterprise Software describes a collection of computer programs with common business applications, tools for modeling how the entire organization works, and development tools for building applications unique to the organization. The software is intended to solve an enterprise-wide problem, rather than a departmental problem. Enterprise level software aims to improve the enterprise's productivity and efficiency by providing business logic support functionality.
According to Martin Fowler, "Enterprise applications are about the display, manipulation, and storage of large amounts of often complex data and the support or automation of business processes with that data."
Although there is no single, widely accepted list of enterprise software characteristics, they generally include performance, scalability, and robustness. Furthermore, enterprise software typically has interfaces to other enterprise software (for example LDAP to directory services) and is centrally managed (a single admin page, for example).
Enterprise application software performs business functions such as order processing, procurement, production scheduling, customer information management, energy management, and accounting. It is typically hosted on servers and provides simultaneous services to a large number of users, typically over a computer network. This is in contrast to a single-user application that is executed on a user's personal computer and serves only one user at a time.
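As one example of the kind of integration mentioned above (LDAP to directory services), the following sketch queries a corporate directory with the third-party Python ldap3 package; the server address, credentials, and directory layout are placeholders, not a real deployment.

# Minimal sketch of an enterprise application querying a corporate directory
# over LDAP, using the third-party ldap3 package.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, user="cn=app,dc=example,dc=com",
                  password="app-password", auto_bind=True)

# Look up every person entry in the directory and pull two attributes.
conn.search(search_base="dc=example,dc=com",
            search_filter="(objectClass=person)",
            attributes=["cn", "mail"])

for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()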
Types by Business Function:
Enterprise software can be categorized by business function. Each type of enterprise application can be considered a "system" due to the integration with a firm's business processes.
Categories of enterprise software may overlap because of this systemic interpretation. For example, IBM's business intelligence platform (Cognos) integrates with a predictive analytics platform (SPSS) and can obtain records from its database packages (InfoSphere, DB2). Blurred lines between package functions make delimitation difficult, and in many ways larger software companies define these somewhat arbitrary categories.
Nevertheless, certain industry-standard product categories have emerged, and these are shown below:
- Accounting software
- Billing Management
- Business intelligence
- Business process management
- Content management system (CMS)
- Customer relationship management (CRM)
- Database
- Master data management (MDM)
- Enterprise resource planning (ERP)
- Enterprise asset management (EAM)
- Supply chain management (SCM)
- Backup software
See also:
- Business informatics
- Business software
- Enterprise architecture
- Enterprise forms automation
- Identity management
- Identity management system
- Information technology management
- Integrated business planning
- Management information system
- Operational risk management
- Retail software
- Strategic information system
Research and Development including a List of United States R&D Agencies
YouTube Video Pharmaceutical research and development: Framing the issues by Brookings Institution
Pictured: Product Stages and Players for Wikimedia Product Development/Product Development Process/Draft
Research and development (R&D) is a general term for activities connected with corporate or governmental innovation. R&D is a component of innovation and is situated at the front end of the innovation life cycle; innovation builds on R&D and includes commercialization phases.
The activities that are classified as R&D differ from company to company, but there are two primary models, with an R&D department being either staffed by engineers and tasked with directly developing new products, or staffed with industrial scientists and tasked with applied research in scientific or technological fields which may facilitate future product development.
In either case, R&D differs from the vast majority of corporate activities in that it is not often intended to yield immediate profit, and generally carries greater risk and an uncertain return on investment.
Click on any of the following blue hyperlinks for further amplification:
___________________________________________________________________________
A List of United States R&D Agencies:
This is a list of United States federal agencies that are primarily devoted to research and development, including their notable subdivisions. These agencies are responsible for carrying out the science policy of the United States:
- Independent agencies
- Department of Agriculture
- Department of Commerce
- Department of Defense
- Department of Education
- Department of Energy
- Department of Health and Human Services
- Department of Homeland Security
- Department of the Interior
- Department of Justice
- Department of Transportation
- Veterans Affairs
- Multi-agency initiatives
- Judicial branch
- Legislative branch
Computer Programming Languages, including a List of Programming Languages by Type
YouTube Video about C++ Programming Video Tutorials for Beginners
Click here for a List of Programming Languages by Type.
A programming language is a formal language that specifies a set of instructions that can be used to produce various kinds of output. Programming languages generally consist of instructions for a computer. Programming languages can be used to create programs that implement specific algorithms.
The earliest known programmable machine that preceded the invention of the digital computer was the automatic flute player described in the 9th century by the brothers Musa in Baghdad, during the Islamic Golden Age. From the early 1800s, "programs" were used to direct the behavior of machines such as Jacquard looms and player pianos.
Thousands of different programming languages have been created, mainly in the computer field, and many more still are being created every year. Many programming languages require computation to be specified in an imperative form (i.e., as a sequence of operations to perform) while other languages use other forms of program specification such as the declarative form (i.e. the desired result is specified, not how to achieve it).
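A small Python sketch of the imperative/declarative distinction described above: the first version spells out each operation to perform, while the second states the desired result and leaves the mechanics to the language.

# Summing the even numbers in a list, two ways.
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative: an explicit sequence of operations to perform.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n

# Declarative-leaning: describe the result (the sum of the even values).
total_declarative = sum(n for n in numbers if n % 2 == 0)

assert total == total_declarative
print(total)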
The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard) while other languages (such as Perl) have a dominant implementation that is treated as a reference.
Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common.
Definitions:
A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. Traits often considered important for what constitutes a programming language include its function and target, the abstractions it provides, and its expressive power, each discussed below.
Function and target:
A computer programming language is a language used to write computer programs, which involves a computer performing some kind of computation or algorithm and possibly control external devices such as printers, disk drives, robots, and so on.
For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language.
In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.
Abstractions:
Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle; this principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.
Expressive power:
The theory of computation classifies languages by the computations they are capable of expressing. All Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete, yet often called programming languages.
Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages.
Programming languages may, however, share the syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset.
The term computer language is sometimes used interchangeably with programming language. However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages. In this vein, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.
Another usage regards programming languages as theoretical constructs for programming abstract machines, and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.
John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.
Click on any of the following hyperlinks for more about Programming Languages:
A programming language is a formal language that specifies a set of instructions that can be used to produce various kinds of output. Programming languages generally consist of instructions for a computer. Programming languages can be used to create programs that implement specific algorithms.
The earliest known programmable machine that preceded the invention of the digital computer was the automatic flute player described in the 9th century by the brothers Musa in Baghdad, during the Islamic Golden Age. From the early 1800s, "programs" were used to direct the behavior of machines such as Jacquard looms and player pianos.
Thousands of different programming languages have been created, mainly in the computing field, and more are being created every year. Many programming languages require computation to be specified in an imperative form (i.e., as a sequence of operations to perform), while other languages use other forms of program specification, such as the declarative form (i.e., the desired result is specified, not how to achieve it).
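As a rough illustration of the difference, the short Python sketch below (Python is used here only as a convenient example language) computes the same result in an imperative style, as an explicit sequence of steps, and then in a more declarative style that states the desired result; the list of numbers is an arbitrary example.

```python
# Illustrative only: the same computation expressed two ways.
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative form: spell out the sequence of operations to perform.
total_of_evens = 0
for n in numbers:
    if n % 2 == 0:
        total_of_evens += n

# Declarative form: describe the desired result, not the step-by-step procedure.
declarative_total = sum(n for n in numbers if n % 2 == 0)

assert total_of_evens == declarative_total
print(total_of_evens)  # 12
```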
The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard) while other languages (such as Perl) have a dominant implementation that is treated as a reference.
Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common.
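A small, hypothetical Python example can make the syntax/semantics split concrete: the expression below is perfectly well formed according to the language's syntax, but its meaning is rejected by the language's semantics when the program runs.

```python
# Syntactically valid: the parser accepts this program without complaint.
# Semantically invalid: Python's semantics do not define adding an integer
# to a string, so evaluating the expression raises a TypeError at run time.
try:
    result = 1 + "one"
except TypeError as error:
    print("Well-formed syntax, but the semantics reject it:", error)
```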
Definitions:
A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. Traits often considered important for what constitutes a programming language include:
Function and target:
A computer programming language is a language used to write computer programs, which involve a computer performing some kind of computation or algorithm and possibly controlling external devices such as printers, disk drives, and robots.
For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language.
In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.
Abstractions:
Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle; this principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.
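The toy Python sketch below suggests what such abstractions look like in practice; the Stack class and drain function are names invented for this illustration. The class is a data abstraction (callers never touch the underlying list), and the function is a control-flow abstraction (the loop is hidden behind a name).

```python
class Stack:
    """Data abstraction: callers push and pop values without depending on
    the fact that a Python list is the hidden representation."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()   # raises IndexError if the stack is empty

    def is_empty(self):
        return not self._items


def drain(stack):
    """Control-flow abstraction: a named operation that hides the loop."""
    while not stack.is_empty():
        yield stack.pop()


s = Stack()
for value in (1, 2, 3):
    s.push(value)
print(list(drain(s)))              # [3, 2, 1]
```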
Expressive power:
The theory of computation classifies languages by the computations they are capable of expressing. All Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete, yet often called programming languages.
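One practical consequence of this equivalence is that any Turing-complete language can simulate another by interpreting it. The sketch below is a minimal, illustrative interpreter, written in Python, for a tiny Brainfuck-style tape language (itself Turing complete when given an unbounded tape); the fixed tape size and the sample program are assumptions made for brevity.

```python
def run(program, tape_size=30000):
    """Interpret a tiny tape-based language (Brainfuck-style):
    > < move the data pointer, + - change the current cell,
    . prints the current cell as a character, [ ] loop while the cell is nonzero."""
    tape, ptr, pc, out = [0] * tape_size, 0, 0, []

    # Pre-compute matching bracket positions for the loop instructions.
    stack, match = [], {}
    for i, ch in enumerate(program):
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            match[i], match[j] = j, i

    while pc < len(program):
        ch = program[pc]
        if ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".":
            out.append(chr(tape[ptr]))
        elif ch == "[" and tape[ptr] == 0:
            pc = match[pc]          # skip the loop body
        elif ch == "]" and tape[ptr] != 0:
            pc = match[pc]          # jump back to the loop start
        pc += 1
    return "".join(out)

# A program in the tiny language that prints an exclamation mark (ASCII 33).
print(run("+" * 33 + "."))   # !
```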
Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages.
Programming languages may, however, share their syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing-complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing-complete subset.
The term computer language is sometimes used interchangeably with programming language. However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages. In this vein, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.
Another usage regards programming languages as theoretical constructs for programming abstract machines, and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.
John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.
Click on any of the following hyperlinks for more about Programming Languages:
- History
- Elements
- Design and implementation
- Proprietary languages
- Usage
- Taxonomies
- See also:
- Comparison of programming languages (basic instructions)
- Comparison of programming languages
- Computer programming
- Computer science and Outline of computer science
- Educational programming language
- Invariant based programming
- Lists of programming languages
- List of programming language researchers
- Programming languages used in most popular websites
- Literate programming
- Dialect (computing)
- Programming language theory
- Pseudocode
- Scientific programming language
- Software engineering and List of software engineering topics
Information Security
YouTube Video: What is Enterprise Information Security Management? (CA Technologies)
Pictured below:
TOP: Best practices for an information security assessment
BOTTOM: The California Community Colleges Information Security Center covers Vulnerability Assessment, Scanning, Server Monitoring, Information Security Mailing List, Security Awareness Training, SSL Certificates, Central Log Analysis, and Vulnerability Management Service
Information security, sometimes shortened to InfoSec, is the practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction of information. It is a general term that can be used regardless of the form the data may take (e.g., electronic, physical).
Information security's primary focus is the balanced protection of the confidentiality, integrity and availability of data (also known as the CIA triad) while maintaining a focus on efficient policy implementation, all without hampering organization productivity. This is largely achieved through a multi-step risk management process that identifies assets, threat sources, vulnerabilities, potential impacts, and possible controls, followed by assessment of the effectiveness of the risk management plan.
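As a minimal sketch of the idea, and not of any particular methodology, the Python snippet below scores a toy risk register; the assets, threats and the 1–5 likelihood/impact scales are all hypothetical.

```python
# Illustrative only: a toy risk register with made-up entries.
risk_register = [
    # (asset,             threat,          likelihood, impact)  -- 1-5 scales
    ("customer database", "SQL injection",  4,          5),
    ("payroll server",    "ransomware",     3,          5),
    ("public website",    "defacement",     3,          2),
]

def score(likelihood, impact):
    """A common simplification: risk = likelihood x impact."""
    return likelihood * impact

# Rank risks so the most severe ones are considered first for controls.
for asset, threat, likelihood, impact in sorted(
        risk_register, key=lambda r: score(r[2], r[3]), reverse=True):
    print(f"{asset:17} {threat:15} risk score = {score(likelihood, impact)}")
```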
To standardize this discipline, academics and professionals collaborate and seek to set basic guidance, policies, and industry standards on the following:
- password,
- antivirus software,
- firewall,
- encryption software,
- legal liability
- and user/administrator training standards.
This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, and transferred. However, the implementation of any standards and guidance within an entity may have limited effect if a culture of continual improvement isn't adopted.
Click on any of the following blue hyperlinks for more about Information Security:
- Overview
- Threats including Responses to threats
- History
- Definitions
- Basic principles
- Risk management
- Process
- Business continuity
- Laws and regulations
- Information security culture
- Sources of standards
- See also:
- Computer security portal
- Backup
- Data breach
- Data-centric security
- Enterprise information security architecture
- Identity-based security
- Information infrastructure
- Information security audit
- Information security indicators
- Information security standards
- Information technology security audit
- IT risk
- ITIL security management
- Kill chain
- List of Computer Security Certifications
- Mobile security
- Network Security Services
- Privacy engineering
- Privacy software
- Privacy-enhancing technologies
- Security bug
- Security information management
- Security level management
- Security of Information Act
- Security service (telecommunication)
- Single sign-on
- Verification and validation
- DoD IA Policy Chart on the DoD Information Assurance Technology Analysis Center web site.
- patterns & practices Security Engineering Explained
- Open Security Architecture- Controls and patterns to secure IT systems
- IWS – Information Security Chapter
- Ross Anderson's book "Security Engineering"
Nanotechnology
YouTube Video about Nanotechnology
Pictured below: Nanotechnology in (TOP) Aerospace; (BOTTOM) Electronics
Nanotechnology ("nanotech") is manipulation of matter on an atomic, molecular, and supramolecular scale.
The earliest, widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology.
A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter which occur below the given size threshold.
It is therefore common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to the broad range of research and applications whose common trait is size. Because of the variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research.
Through 2012, the United States had invested $3.7 billion through its National Nanotechnology Initiative, the European Union had invested $1.2 billion, and Japan had invested $750 million.
Nanotechnology as defined by size is naturally very broad, including fields of science as diverse as the following:
- surface science,
- organic chemistry,
- molecular biology,
- semiconductor physics,
- energy storage,
- microfabrication,
- molecular engineering,
- etc.
The associated research and applications are equally diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.
Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in:
- nanomedicine,
- nanoelectronics,
- biomaterials,
- energy production,
- and consumer products.
On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.
Click on any of the following blue hyperlinks for more about Nanotechnology:
- Origins
- Fundamental concepts
- Current research
- Tools and techniques
- Applications
- Implications
- Regulation
- See also:
- Main article: Outline of nanotechnology
- Bionanoscience
- Carbon nanotube
- Electrostatic deflection (molecular physics/nanotechnology)
- Energy applications of nanotechnology
- Ion implantation-induced nanoparticle formation
- Gold nanobeacon
- Gold nanoparticle
- List of emerging technologies
- List of nanotechnology organizations
- List of software for nanostructures modeling
- Magnetic nanochains
- Materiomics
- Nano-thermite
- Molecular design software
- Molecular mechanics
- Nanobiotechnology
- Nanoelectromechanical relay
- Nanoengineering
- Nanofluidics
- NanoHUB
- Nanometrology
- Nanoparticle
- Nanoscale networks
- Nanotechnology education
- Nanotechnology in fiction
- Nanotechnology in water treatment
- Nanoweapons
- National Nanotechnology Initiative
- Self-assembly of nanoparticles
- Top-down and bottom-up
- Translational research
- Wet nanotechnology
Personal Computers, including History of Personal Computers and List of PC Manufacturers by Market Share
YouTube Video about The History of IBM: the Personal Computer to Watson by WatchMojo
A personal computer (PC) is a multi-purpose computer whose size, capabilities, and price make it feasible for individual use.
PCs are intended to be operated directly by an end user, rather than by a computer expert or technician. Computer time-sharing models that were typically used with larger, more expensive minicomputer and mainframe systems, to enable them to be used by many people at the same time, are not used with PCs.
Early computer owners in the 1960s, invariably institutional or corporate, had to write their own programs to do any useful work with the machines.
In the 2010s, personal computer users have access to a wide range of commercial software, free software ("freeware") and free and open-source software, which are provided in ready-to-run form.
Software for personal computers is typically developed and distributed independently from the hardware or OS manufacturers. Many personal computer users no longer need to write their own programs to make any use of a personal computer, although end-user programming is still feasible.
This contrasts with mobile systems, where software is often only available through a manufacturer-supported channel, and end-user program development may be discouraged by lack of support by the manufacturer.
Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Windows.
Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry. These include Apple's macOS and free and open-source Unix-like operating systems such as Linux. Advanced Micro Devices (AMD) provides the main alternative to Intel's processors.
Click on any of the following blue hyperlinks for more about Personal Computers: ___________________________________________________________________________
The history of the personal computer as a mass-market consumer electronic device began with the microcomputer revolution of the 1980s.
The 1981 launch of the IBM Personal Computer popularized the terms "personal computer" and "PC". A personal computer is one intended for interactive individual use, as opposed to a mainframe computer where the end user's requests are filtered through operating staff, or a time-sharing system in which one large processor is shared by many individuals.
After the development of the microprocessor, individual personal computers were low enough in cost that they eventually became affordable consumer goods. Early personal computers – generally called microcomputers – were often sold in electronic kit form and in limited numbers, and were of interest mostly to hobbyists and technicians.
Click on any of the following blue hyperlinks for more about the History of the Personal Computer:
- Etymology
- Overview
- The beginnings of the personal computer industry
- 1977 and the emergence of the "Trinity"
- Home computers
- The IBM PC
- IBM PC clones
- Apple Lisa and Macintosh
- PC clones dominate
- 1990s and 2000s
- 2010s
- Market size
- See also:
- Timeline of electrical and electronic engineering
- Computer museum and Personal Computer Museum
- Expensive Desk Calculator
- MIT Computer Science and Artificial Intelligence Laboratory
- Educ-8 a 1974 pre-microprocessor "micro-computer"
- Mark-8, a 1974 microprocessor-based microcomputer
- Programma 101, a 1965 programmable calculator with some attributes of a personal computer
- SCELBI, another 1974 microcomputer
- Simon (computer), a 1949 demonstration of computing principles
- A history of the personal computer: the people and the technology (PDF)
- BlinkenLights Archaeological Institute – Personal Computer Milestones
- Personal Computer Museum – A publicly viewable museum in Brantford, Ontario, Canada
- Old Computers Museum – Displaying over 100 historic machines.
- Chronology of Personal Computers – a chronology of computers from 1947 on
- "Total share: 30 years of personal computer market share figures"
- Obsolete Technology – Old Computers
Market share of personal computer vendors:
The annual worldwide market share of personal computer vendors includes desktop computers, laptop computers and netbooks, but excludes mobile devices, such as tablet computers that do not fall under the category of 2-in-1 PCs.
Click on any of the following blue hyperlinks for more about the Market Share of Personal Computer Vendors:
Timeline of Computer Technology
YouTube Video: How a Central Processing Unit (CPU) Works
YouTube Video: Inspiring the Next Generation of Computer Scientists
Pictured below: The growth in CPU transistor counts, from a single transistor in 1950 to roughly 8 billion in the 2010s – a factor of about 8 billion!
Timeline of computing presents events in the history of computing organized by year and grouped into six topic areas: predictions and concepts, first use and inventions, hardware systems and processors, operating systems, programming languages, and new application areas.
Detailed computing timelines:
- See also:
- Graphical Timeline
- History of compiler construction
- History of computing hardware – up to third generation (1960s)
- History of computing hardware (1960s–present) – third generation and later
- History of the graphical user interface
- History of the Internet
- List of pioneers in computer science
- Visual History of Computing
Computer Networks including Local Area Networks (LAN) and Wide Area Networks (WAN), as well as the Client-Server Model that makes such Networking Possible
YouTube Video: Introduction to Enterprise Wide Area Networks (WANs)
Pictured Below: A Basic Enterprise LAN Network Architecture – Block Diagram and Components
A computer network, or data network, is a digital telecommunications network which allows nodes to share resources.
In computer networks, computing devices exchange data with each other using connections between nodes (data links). These data links are established over cable media such as wires or optical cables, or wireless media such as Wi-Fi.
Network computer devices that originate, route and terminate the data are called network nodes. Nodes can include hosts such as personal computers, phones, servers as well as networking hardware.
Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other.
In most cases, application-specific communications protocols are layered (i.e. carried as payload) over other more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably.
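A small Python sketch can show what "layered" means in practice: an application-layer HTTP request is handed to a TCP connection, which treats it as opaque payload bytes. This assumes outbound network access is available; example.org and port 80 are used only as placeholders.

```python
# Illustrative only: an HTTP/1.1 request carried as the payload of a TCP
# connection, i.e. one protocol layered over another.
import socket

request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")                       # the application-layer message

with socket.create_connection(("example.org", 80)) as tcp:
    tcp.sendall(request)                # TCP just sees opaque payload bytes
    reply = b""
    while chunk := tcp.recv(4096):      # TCP delivers the bytes back in order
        reply += chunk

print(reply.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"
```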
Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications as well as many others.
Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the network's size, topology, traffic control mechanism and organizational intent. The best-known computer network is the Internet.
Click on any of the following blue hyperlinks for more about Computer Networking:
- History
- Properties
- Network packet
- Network topology
- Communication protocols
- Geographic scale
- Organizational scope
- Routing
- Network service
- Network performance
- Security
- Views of networks
- See also:
Local Area Network (LAN) is a computer network that interconnects computers within a limited area such as a residence, school, laboratory, university campus or office building. By contrast, a wide area network (WAN: see next topic) not only covers a larger geographic distance, but also generally involves leased telecommunication circuits.
Ethernet and Wi-Fi are the two most common technologies in use for local area networks. Historical technologies include ARCNET, Token ring, and AppleTalk.
Click on any of the following blue hyperlinks for more about Local Area Networks (LAN): ___________________________________________________________________________
A Wide Area Network (WAN) is a telecommunications network or computer network that extends over a large geographical distance/place. Wide area networks are often established with leased telecommunication circuits.
Business, education and government entities use wide area networks to relay data to staff, students, clients, buyers, and suppliers from various locations across the world. In essence, this mode of telecommunication allows a business to effectively carry out its daily function regardless of location. The Internet may be considered a WAN.
Related terms for other types of networks are personal area networks (PANs), local area networks (LANs), campus area networks (CANs), and metropolitan area networks (MANs), which are usually limited to a room, building, campus or specific metropolitan area, respectively.
Click on any of the following blue hyperlinks for more about Wide Area Networks (WAN):
- Design options
- Connection technology
- See also:
- Cell switching
- Internet area network (IAN)
- Label switching
- Low Power Wide Area Network (LPWAN)
- Wide area application services
- Wide area file services
- Wireless WAN
- Cisco - Introduction to WAN Technologies
- "What is WAN (wide area network)? - Definition from WhatIs.com". SearchEnterpriseWAN. Retrieved 2017-04-21.
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.
Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs which share their resources with clients.
A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
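The minimal Python sketch below runs a toy server and client on one machine to show the pattern: the server waits and shares a trivial service (upper-casing text), and the client initiates the session and requests it. The port number and the threading setup are arbitrary choices made for this illustration.

```python
# Illustrative only: a minimal client-server exchange on one machine.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007          # arbitrary loopback address and port
ready = threading.Event()

def serve_once():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)                    # the server awaits incoming requests
        ready.set()                      # signal that the server is listening
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)       # read the client's request
            conn.sendall(data.upper())   # return the service's result

server = threading.Thread(target=serve_once)
server.start()
ready.wait()                             # avoid connecting before the server is up

with socket.create_connection((HOST, PORT)) as client:   # the client initiates
    client.sendall(b"hello, server")
    print(client.recv(1024).decode())    # HELLO, SERVER

server.join()
```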
Click on any of the following blue hyperlinks for more about The Client-Server Model:
- Design options
- Connection technology
- See also:
Cloud Computing
YouTube Video of the Top 10 Advantages of Cloud Computing
YouTube Video: How Cloud Computing Works
Pictured: IT Benefits to Businesses that employ Cloud Computing Companies
What Is Cloud Computing? by Eric Griffith (from the May 3, 2016 issue of PC Magazine)*
* -- PC Magazine
"What cloud computing is not about is your hard drive. When you store data on or run programs from the hard drive, that's called local storage and computing. Everything you need is physically close to you, which means accessing your data is fast and easy, for that one computer, or others on the local network. Working off your hard drive is how the computer industry functioned for decades; some would argue it's still superior to cloud computing, for reasons I'll explain shortly.
The cloud is also not about having a dedicated network attached storage (NAS) hardware or server in residence. Storing data on a home or office network does not count as utilizing the cloud. (However, some NAS will let you remotely access things over the Internet, and there's at least one brand from Western Digital named "My Cloud," just to keep things confusing.)
For it to be considered "cloud computing," you need to access your data or your programs over the Internet, or at the very least, have that data synced with other information over the Web. In a big business, you may know all there is to know about what's on the other side of the connection; as an individual user, you may never have any idea what kind of massive data processing is happening on the other end. The end result is the same: with an online connection, cloud computing can be done anywhere, anytime.
Consumer vs. Business
Let's be clear here. We're talking about cloud computing as it impacts individual consumers—those of us who sit back at home or in small-to-medium offices and use the Internet on a regular basis.
There is an entirely different "cloud" when it comes to business. Some businesses choose to implement Software-as-a-Service (SaaS), where the business subscribes to an application it accesses over the Internet. (Think Salesforce.com.)
There's also Platform-as-a-Service (PaaS), where a business can create its own custom applications for use by all in the company. And don't forget the mighty Infrastructure-as-a-Service (IaaS), where players like Amazon, Microsoft, Google, and Rackspace provide a backbone that can be "rented out" by other companies. (For example, Netflix provides services to you because it's a customer of the cloud services at Amazon.)
Of course, cloud computing is big business: The market generated $100 billion a year in 2012, which could be $127 billion by 2017 and $500 billion by 2020.
For the rest of the article by PC Magazine, click here.
___________________________________________________________________________
Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand.
It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services) which can be rapidly provisioned and released with minimal management effort.
Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers that may be located far from the user, ranging in distance from across a city to across the world.
Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility such as the electricity grid.
Advocates claim that cloud computing allows companies to avoid up-front infrastructure costs (e.g., purchasing servers). As well, it enables organizations to focus on their core businesses instead of spending time and money on computer infrastructure.
Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables information technology (IT) teams to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a "pay as you go" model, which can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.
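The arithmetic behind "pay as you go" is straightforward, as the small Python sketch below shows; every rate in it is a made-up placeholder rather than any provider's actual price list.

```python
# Illustrative only: all prices below are hypothetical placeholders.
HOURS_IN_MONTH = 730                      # roughly 24 * 365 / 12

vm_hourly_rate = 0.10                     # $ per virtual-machine hour (hypothetical)
storage_rate_per_gb = 0.02                # $ per GB-month of storage (hypothetical)
egress_rate_per_gb = 0.09                 # $ per GB transferred out (hypothetical)

vms, storage_gb, egress_gb = 3, 500, 200  # this month's usage

monthly_cost = (
    vms * HOURS_IN_MONTH * vm_hourly_rate
    + storage_gb * storage_rate_per_gb
    + egress_gb * egress_rate_per_gb
)
print(f"Estimated monthly bill: ${monthly_cost:,.2f}")
# 3 * 730 * 0.10 = 219.00, plus 500 * 0.02 = 10.00, plus 200 * 0.09 = 18.00 -> $247.00
```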
In 2009, the availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing led to a growth in cloud computing.
Companies can scale up as computing needs increase and then scale down again as demands decrease. In 2013, it was reported that cloud computing had become a highly demanded service or utility due to its high computing power, low cost of services, high performance, scalability, accessibility and availability. Some cloud vendors have experienced growth rates of 50% per year, but because the field is still in its infancy, it has pitfalls that need to be addressed to make cloud computing services more reliable and user-friendly.
Click on any of the following blue hyperlinks for more about Cloud Computing:
- History
- Similar concepts
- Characteristics
- Service models
- Cloud clients
- Deployment models
- Architecture including Cloud engineering
- Security and privacy
- Limitations and disadvantages
- Emerging trends
- See also:
Record shares of Americans now own smartphones, have home broadband (January 12, 2017)*
And: 10 facts about smartphones as the iPhone turns 10 (June 28, 2017)*
* -- as reported by PEW Research Center
YouTube Video: Why are Smartphones so Addictive?
Record Shares of Americans Now Own Smartphones, Have Home Broadband.
Nearly nine-in-ten Americans today are online, up from about half in the early 2000s. Pew Research Center has chronicled this trend and others through more than 15 years of surveys on internet and technology use. On Thursday, we released a new set of fact sheets that will be updated as we collect new data and can serve as a one-stop shop for anyone looking for information on key trends in digital technology.
To mark the occasion, here are four key trends illustrating the current technology landscape in the U.S.
- Roughly three-quarters of Americans (77%) now own a smartphone, with lower-income Americans and those ages 50 and older exhibiting a sharp uptick in ownership over the past year, according to a Pew Research Center survey conducted in November 2016. Smartphone adoption has more than doubled since the Center began surveying on this topic in 2011: That year, 35% of Americans reported that they owned a smartphone of some kind. Smartphones are nearly ubiquitous among younger adults, with 92% of 18- to 29-year-olds owning one. But growth in smartphone ownership over the past year has been especially pronounced among Americans 50 and older. Nearly three-quarters (74%) of Americans ages 50-64 are now smartphone owners (a 16-percentage-point increase compared with 2015), as are 42% of those 65 and older (up 12 points from 2015). There has also been a 12-point increase in smartphone ownership among households earning less than $30,000 per year: 64% of these lower-income Americans now own a smartphone.
- After a modest decline between 2013 and 2015, the share of Americans with broadband service at home increased by 6 percentage points in 2016. Between 2013 and 2015, the share of Americans with home broadband service decreased slightly – from 70% to 67%. But in the past year, broadband adoption rates have returned to an upward trajectory. As of November 2016, nearly three-quarters (73%) of Americans indicate that they have broadband service at home. But although broadband adoption has increased to its highest level since the Center began tracking this topic in early 2000, not all Americans have shared in these gains. For instance, those who have not graduated from high school are nearly three times less likely than college graduates to have home broadband service (34% vs. 91%). Broadband adoption also varies by factors such as age, household income, geographic location and racial and ethnic background. Even as broadband adoption has been on the rise, 12% of Americans say they are “smartphone dependent” when it comes to their online access – meaning they own a smartphone but lack traditional broadband service at home. The share of Americans who are smartphone dependent has increased 4 percentage points since 2013, and smartphone reliance is especially pronounced among young adults, nonwhites and those with relatively low household incomes.
- Nearly seven-in-ten Americans now use social media. When the Center started tracking social media adoption in 2005, just 5% of Americans said they used these platforms. Today, 69% of U.S. adults are social media users. Social media is especially popular among younger adults, as 86% of 18- to 29-year-olds are social media users. But a substantial majority of those ages 30-49 (80%) and 50-64 (64%) use social media as well. Only about one-third (34%) of Americans 65 and older currently use social media, but that figure has grown dramatically in recent years: As recently as 2010, only around one-in-ten Americans age 65 and older used social media.
- Half the public now owns a tablet computer. Though less widespread than smartphones, tablet computers have also become highly common in a very short period of time. When the Center first began tracking tablet ownership in 2010, just 3% of Americans owned a tablet of some kind. That figure has risen to 51% as of November 2016.
Ten facts about smartphones as the iPhone turns 10.
The iPhone turns 10 on June 29, and the moment warrants a look back at the broader story about the ways mobile devices have changed how people interact.
Here are 10 findings about these devices, based on Pew Research Center surveys:
- About three-quarters of U.S. adults (77%) say they own a smartphone, up from 35% in 2011, making the smartphone one of the most quickly adopted consumer technologies in recent history. Smartphone ownership is more common among those who are younger or more affluent. For example, 92% of 18- to 29-year-olds say they own a smartphone, compared with 42% of those who are ages 65 and older. Still, adoption rates have risen rapidly among older and lower-income Americans in recent years. From 2013 to 2016, the share of adults 65 and older who report owning a smartphone has risen 24 percentage points (from 18% to 42%). There has also been a 12-point increase in smartphone ownership among households earning less than $30,000 per year: 64% of these lower-income Americans now own a smartphone.
- Half of younger adults live in a household with three or more smartphones. More than nine-in-ten 18- to 29-year-olds (96%) say they live in a household with at least one smartphone, and 51% of young adults say their home contains three or more such devices. Still, many older adults also live in households with multiple smartphones. For example, 39% of 30- to 49-year-olds and 29% of 50- to 64-year-olds say their home contains three or more smartphones. This is far less common, however, among those 65 and older, with just 11% saying it applies to their household.
- Mobile devices aren’t just for calling or texting. Americans are using their phones for a variety of nontraditional phone activities, such as looking for a job, finding a date or reading a book. Some 28% of U.S. adults said in a 2015 Pew Research Center survey that they have used a smartphone as part of a job search. This is especially common among younger adults, with 53% of 18- to 29-year-olds reporting doing this. Other Pew Research Center data show that 9% of U.S. adults say they have used mobile dating apps, while the share of Americans who say they read an e-book using a cellphone within the past year increased from 5% in 2011 to 13% in 2016.
- The smartphone is becoming an important tool for shoppers. While around half of U.S. adults (51%) report making online purchases via their smartphone, many are also turning to their phones while in a physical store. In a 2015 Pew Research Center survey, 59% of U.S. adults say that they have used their cellphone to call or text someone while inside a store to discuss purchases they are thinking of making. Just under half (45%) have used their phones while inside a store to look up online reviews or to try and find a better price online for something they are thinking of purchasing. And a relatively small share of Americans (12%) have used their cellphones to physically pay for in-store purchases.
- Growing shares of Americans – especially those who are lower-income – rely on smartphones to access the internet. Overall, 12% of U.S. adults were “smartphone-only” internet users in 2016 – meaning they owned a smartphone but did not have broadband internet at home. This represents an increase from 8% in 2013. Reliance on smartphones to go online varies greatly by income. One-in-five adults whose annual household income falls below $30,000 are smartphone-only internet users, compared with only 4% of those living in households earning $100,000 or more.
- More than half of smartphone owners say they get news alerts on their phones, but few get these alerts frequently. Some 55% of smartphone owners say they ever get news alerts on their phones’ screens, according to a 2016 Pew Research Center survey. However, few users say they receive these types of alerts often, with just 13% of smartphone owners reporting doing this.
- While smartphones are becoming more integrated into our lives, many users aren’t taking the necessary steps to secure their devices. A 2016 Pew Research Center survey found that 28% of U.S. smartphone owners say they do not use a screen lock or other features to secure their phone. Although a majority of smartphone users say they have updated their phone’s apps or operating system, around four-in-ten say they only update when it’s convenient for them. But some smartphone users forgo updating their phones altogether: 14% say they never update their phone’s operating system, while 10% say they don’t update the apps on their phone.
- Smartphone ownership is climbing in developing nations, but the digital divide remains. Median smartphone adoption in developing nations rose to 37% in 2015, up from 21% in 2013, according to a Pew Research Center survey of 21 emerging and developing nations conducted in 2015. But advanced economies still have considerably higher rates of smartphone adoption, with the highest rates among surveyed countries found in South Korea, Sweden, Australia, the Netherlands and Spain. Around the globe – including in advanced economies – a digital divide in smartphone ownership still exists between the young and old, and between more educated and less educated people.
- Americans have different views about where it is and isn’t appropriate to use a cellphone. In a 2014 Pew Research Center survey, roughly three-quarters of adults said it was OK for people to use their phones while walking down the street, riding public transit or waiting in line, but far fewer found it acceptable to use cellphones during a meeting, at the movies or in church. Regardless of how they feel about the appropriateness of using a phone in social settings, an overwhelming majority of mobile phone owners (89%) say they did use their phones during their most recent social gathering.
- The smartphone is essential for many owners, but a slight majority says it’s not always needed. Some 46% of smartphone owners said their smartphone is something “they couldn’t live without,” compared with 54% who said in a 2014 Pew Research Center survey that their phone is “not always needed.” Perhaps surprisingly, smartphone owners who depend on their mobile device for internet access are not significantly more inclined than those who have multiple options for going online to say they couldn’t live without their phone (49% vs. 46%). In addition to being essential for many, smartphone owners are much more likely to have positive views of these devices. For instance, they are much more likely to say smartphones are more helpful than annoying, represent freedom rather than represent a leash, enable connecting rather than being distracting and are worth the cost rather than being a financial burden.
Note: The figures and map on global smartphone adoption were updated June 29, 2017, to include more recent data.
Computer Platforms
YouTube Video: Trusted Computing Platforms
Pictured: Screenshot of Windows 10 (July 2015 Release), showing the Action Center and Start Menu
A computing platform is, in the most general sense, whatever a pre-existing piece of computer software or code object is designed to run within, obeying its constraints, and making use of its facilities.
The term computing platform can refer to different abstraction levels, including a certain hardware architecture, an operating system (OS), and runtime libraries.
In total it can be said to be the stage on which computer programs can run.
Binary executables have to be compiled for a specific hardware platform, since different central processor units have different machine codes.
In addition, operating systems and runtime libraries allow re-use of code and provide abstraction layers which allow the same high-level source code to run on differently configured hardware. For example, there are many kinds of data storage device, and any individual computer can have a different configuration of storage devices; but the application is able to call a generic save or write function provided by the OS and runtime libraries, which then handle the details themselves.
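As a minimal sketch of this idea in Python (the file name below is purely hypothetical), the application issues a generic write call and lets the OS and runtime decide which storage device actually holds the bytes:

```python
from pathlib import Path

# The application only asks the runtime/OS to "save" some text; which physical
# device backs the filesystem (HDD, SSD, network share) is handled below this layer.
target = Path("example_output.txt")   # hypothetical file name
target.write_text("The application never needs to know which device stores this.\n")
print(target.read_text())
```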
A platform can be seen both as a constraint on the application development process – the application is written for such-and-such a platform – and an assistance to the development process, in that they provide low-level functionality ready-made.
Components:
Platforms may also include:
- Hardware alone, in the case of small embedded systems. Embedded systems can access hardware directly, without an OS (see bare metal (computing)).
- A browser in the case of web-based software. The browser itself runs on a hardware+OS platform, but this is not relevant to software running within the browser.
- An application, such as a spreadsheet or word processor, which hosts software written in an application-specific scripting language, such as an Excel macro. This can be extended to writing fully-fledged applications with the Microsoft Office suite as a platform.
- Software frameworks that provide ready-made functionality.
- Cloud computing and Platform as a Service. Extending the idea of a software framework, these allow application developers to build software out of components that are hosted not by the developer, but by the provider, with internet communication linking them together. The social networking sites Twitter and Facebook are also considered development platforms.
- A virtual machine (VM) such as the Java virtual machine or the .NET CLR. Applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM (see the bytecode sketch after this list).
- A virtualized version of a complete system, including virtualized hardware, OS, software and storage. These allow, for instance, a typical Windows program to run on what is physically a Mac.
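As a small illustration of the bytecode idea, Python's own virtual machine (one concrete VM alongside the examples above) can be inspected with nothing but the standard library:

```python
import dis

source = "result = 2 + 3"
code_obj = compile(source, "<example>", "exec")  # high-level source -> bytecode
dis.dis(code_obj)                                # inspect the VM instructions
namespace = {}
exec(code_obj, namespace)                        # the CPython VM executes the bytecode
print(namespace["result"])                       # -> 5
```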
Operating system examples: For more details on this topic, see List of operating systems.
- AmigaOS, AmigaOS 4
- FreeBSD, NetBSD, OpenBSD
- Linux
- Microsoft Windows
- OpenVMS
- OS X (Mac OS)
- OS/2
- Solaris
- Tru64 UNIX
- VM
- QNX
Mobile:
- Android
- Bada
- BlackBerry OS
- Firefox OS
- iOS
- Embedded Linux
- Palm OS
- Symbian
- Tizen
- WebOS
- Windows Mobile
- Windows Phone
Software frameworks: For more details on this topic, see Software framework.
- Adobe AIR
- Adobe Flash
- Adobe Shockwave
- Binary Runtime Environment for Wireless (BREW)
- Cocoa (API)
- Cocoa Touch
- Java platform
- Microsoft XNA
- Mono
- Mozilla Prism, XUL and XULRunner
- .NET Framework
- Silverlight
- Open Web Platform
- Oracle Database
- Qt
- SAP NetWeaver
- Smartface
- Vexi
- Windows Runtime
Hardware examples: For more details on this topic, see Lists of computers. Ordered roughly, from more common types to less common types:
- Commodity computing platforms
- Wintel, that is, Intel x86 or compatible personal computer hardware with Windows operating system
- Macintosh, custom Apple Computer hardware and Mac OS operating system, now migrated to x86
- ARM architecture used in mobile devices
- Gumstix or Raspberry Pi full function miniature computers with Linux
- x86 with Unix-like systems such as BSD variants
- CP/M computers based on the S-100 bus, perhaps the earliest microcomputer platform
- Video game consoles, any variety
- 3DO Interactive Multiplayer, that was licensed to manufacturers
- Apple Pippin, a Multimedia player platform for video game console development
- RISC processor based machines running Unix variants
- Midrange computers with their custom operating systems, such as IBM OS/400
- Mainframe computers with their custom operating systems, such as IBM z/OS
- Supercomputer architectures
Software examples:
1 Windows 7 20.04%
2 iOS 9 19.47%
3 Android Lollipop 14.05%
4 Android 4.0 12.59%
5 Windows 10 8.52%
6 Windows 8.1 5.02%
7 Mac OS X 4.19%
8 iOS 8 3.76%
9 Windows XP 2.36%
10 Linux 1.76%
First party software:
Software is considered first party if it originates from the platform vendor; software from other vendors is considered third party.
See also:
Computer-Generated Imagery (CGI)
YouTube Video: "Avatar Pushes Limits of Visual Effects"
Pictured: In the 2009 movie “Avatar”, James Cameron pioneered a specially designed camera built into a 6-inch boom that allowed the facial expressions of the actors to be captured and digitally recorded for the animators to use later (Courtesy of Wikipedia.org)
Computer-generated imagery (CGI for short) is the application of computer graphics to create or contribute to images in art, printed media, video games, films, television programs, shorts, commercials, videos, and simulators.
The visual scenes may be dynamic or static, and may be two-dimensional (2D), though the term "CGI" is most commonly used to refer to 3D computer graphics used for creating scenes or special effects in films and television.
Additionally, the use of 2D CGI is often mistakenly referred to as "traditional animation", most often in the case when dedicated animation software such as Adobe Flash or Toon Boom is not used or the CGI is hand drawn using a tablet and mouse.
The term 'CGI animation' refers to dynamic CGI rendered as a movie. The term virtual world refers to agent-based, interactive environments. Computer graphics software is used to make computer-generated imagery for films, etc.
Availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional-grade films, games, and fine art from their home computers. This has brought about an Internet subculture with its own set of global celebrities, clichés, and technical vocabulary. The evolution of CGI led to the emergence of virtual cinematography in the 1990s where runs of the simulated camera are not constrained by the laws of physics.
Static images and landscapes:
Not only do animated images form part of computer-generated imagery, natural looking landscapes (such as fractal landscapes) are also generated via computer algorithms. A simple way to generate fractal surfaces is to use an extension of the triangular mesh method, relying on the construction of some special case of a de Rham curve, e.g. midpoint displacement.
For instance, the algorithm may start with a large triangle, then recursively zoom in by dividing it into four smaller Sierpinski triangles, then interpolate the height of each point from its nearest neighbors.
The creation of a Brownian surface may be achieved not only by adding noise as new nodes are created, but by adding additional noise at multiple levels of the mesh. Thus a topographical map with varying levels of height can be created using relatively straightforward fractal algorithms. Some typical, easy-to-program fractals used in CGI are the plasma fractal and the more dramatic fault fractal.
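As a rough sketch of the midpoint-displacement idea (a one-dimensional analogue in Python; the parameters are arbitrary), each recursion step inserts a midpoint and nudges it by a shrinking random amount:

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5):
    """Recursively build a 1-D fractal height profile between two endpoint heights."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + random.uniform(-1, 1) * roughness  # displaced midpoint
    first = midpoint_displacement(left, mid, depth - 1, roughness / 2)
    second = midpoint_displacement(mid, right, depth - 1, roughness / 2)
    return first[:-1] + second   # avoid duplicating the shared midpoint

profile = midpoint_displacement(0.0, 0.0, depth=6)
print(len(profile), "height samples")  # 65 samples of a jagged, landscape-like profile
```

Applying the same displacement step to the vertices of a subdivided triangular mesh yields the fractal landscapes described above.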
A large number of specific techniques have been researched and developed to produce highly focused computer-generated effects — e.g. the use of specific models to represent the chemical weathering of stones to model erosion and produce an "aged appearance" for a given stone-based surface.
Architectural Scenes:
Modern architects use services from computer graphic firms to create 3-dimensional models for both customers and builders. These computer generated models can be more accurate than traditional drawings. Architectural animation (which provides animated movies of buildings, rather than interactive images) can also be used to see the possible relationship a building will have in relation to the environment and its surrounding buildings. The rendering of architectural spaces without the use of paper and pencil tools is now a widely accepted practice with a number of computer-assisted architectural design systems.
Architectural modelling tools allow an architect to visualize a space and perform "walk-throughs" in an interactive manner, thus providing "interactive environments" both at the urban and building levels.
Specific applications in architecture not only include the specification of building structures (such as walls and windows) and walk-throughs, but the effects of light and how sunlight will affect a specific design at different times of the day.
Architectural modelling tools have now become increasingly internet-based. However, the quality of internet-based systems still lags behind those of sophisticated inhouse modelling systems.
In some applications, computer-generated images are used to "reverse engineer" historical buildings. For instance, a computer-generated reconstruction of the monastery at Georgenthal in Germany was derived from the ruins of the monastery, yet provides the viewer with a "look and feel" of what the building would have looked like in its day.
Anatomical models
See also: Medical imaging, Visible Human Project, Google Body, and Living Human Project.
Computer generated models used in skeletal animation are not always anatomically correct. However, organizations such as the Scientific Computing and Imaging Institute have developed anatomically correct computer-based models. Computer generated anatomical models can be used both for instructional and operational purposes.
To date, a large body of artist produced medical images continue to be used by medical students, such as images by Frank Netter, e.g. Cardiac images. However, a number of online anatomical models are becoming available.
A single patient X-ray is not a computer-generated image, even if digitized. However, in applications which involve CT scans, a three-dimensional model is automatically produced from a large number of single-slice X-rays, producing a "computer-generated image".
Applications involving magnetic resonance imaging also bring together a number of "snapshots" (in this case via magnetic pulses) to produce a composite, internal image.
In modern medical applications, patient specific models are constructed in 'computer assisted surgery'. For instance, in total knee replacement, the construction of a detailed patient specific model can be used to carefully plan the surgery.
These three dimensional models are usually extracted from multiple CT scans of the appropriate parts of the patient's own anatomy. Such models can also be used for planning aortic valve implantations, one of the common procedures for treating heart disease. Given that the shape, diameter and position of the coronary openings can vary greatly from patient to patient, the extraction (from CT scans) of a model that closely resembles a patient's valve anatomy can be highly beneficial in planning the procedure.
Generating cloth and skin images:
Models of cloth generally fall into three groups:
- The geometric-mechanical structure at yarn crossing
- The mechanics of continuous elastic sheets
- The geometric macroscopic features of cloth.
To date, making the clothing of a digital character automatically fold in a natural way remains a challenge for many animators.
In addition to their use in film, advertising and other modes of public display, computer generated images of clothing are now routinely used by top fashion design firms.
The challenge in rendering human skin images involves three levels of realism:
- Photo realism in resembling real skin at the static level
- Physical realism in resembling its movements
- Function realism in resembling its response to actions.
The finest visible features, such as fine wrinkles and skin pores, are about 100 µm (0.1 mm) in size. Skin can be modelled as a 7-dimensional bidirectional texture function (BTF) or as a collection of bidirectional scattering distribution functions (BSDFs) over the target's surfaces.
Interactive simulation and visualization:
Interactive visualization is a general term that applies to the rendering of data that may vary dynamically, allowing a user to view the data from multiple perspectives. The application areas vary significantly, ranging from the visualization of flow patterns in fluid dynamics to specific computer-aided design applications. The data rendered may correspond to specific visual scenes that change as the user interacts with the system — e.g. simulators, such as flight simulators, make extensive use of CGI techniques for representing the world.
At the abstract level an interactive visualization process involves a "data pipeline" in which the raw data is managed and filtered to a form that makes it suitable for rendering. This is often called the "visualization data".
The visualization data is then mapped to a "visualization representation" that can be fed to a rendering system. This is usually called a "renderable representation". This representation is then rendered as a displayable image.
As the user interacts with the system (e.g. by using joystick controls to change their position within the virtual world) the raw data is fed through the pipeline to create a new rendered image, often making real-time computational efficiency a key consideration in such applications.
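A very small Python sketch of such a pipeline (the sample data, stage names, and text-grid "renderer" are all invented for illustration) might look like this:

```python
# raw data -> filtered "visualization data" -> "renderable representation" -> image
raw_data = [(-2.0, 4.1), (0.5, 0.2), (1.5, 2.3), (9.9, None)]  # made-up samples

def filter_stage(samples):
    """Drop unusable samples; this yields the 'visualization data'."""
    return [(x, y) for x, y in samples if y is not None]

def map_stage(viz_data, width=40, height=10):
    """Map the data to a 'renderable representation' (character-grid coordinates)."""
    xs = [x for x, _ in viz_data]
    ys = [y for _, y in viz_data]
    def scale(v, lo, hi, n):
        return 0 if hi == lo else int((v - lo) / (hi - lo) * (n - 1))
    return [(scale(x, min(xs), max(xs), width),
             scale(y, min(ys), max(ys), height)) for x, y in viz_data]

def render_stage(points, width=40, height=10):
    """Render the representation as a displayable (text) image."""
    grid = [[" "] * width for _ in range(height)]
    for col, row in points:
        grid[height - 1 - row][col] = "*"
    return "\n".join("".join(r) for r in grid)

print(render_stage(map_stage(filter_stage(raw_data))))
```

When the user interacts, only the raw data (or the viewpoint) changes; the same pipeline re-runs to produce the next image.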
Computer animation:
While computer generated images of landscapes may be static, the term computer animation only applies to dynamic images that resemble a movie. However, in general the term computer animation refers to dynamic images that do not allow user interaction, and the term virtual world is used for the interactive animated environments.
Computer animation is essentially a digital successor to the art of stop motion animation of 3D models and frame-by-frame animation of 2D illustrations.
Computer-generated animations are more controllable than other, more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and they allow the creation of images that would not be feasible using any other technology.
It can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props.
To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image which is similar to the previous image, but advanced slightly in the time domain (usually at a rate of 24 or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.
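As a trivial sketch of the timing arithmetic (Python; the frame rate and the print stand-in for actual drawing are illustrative only):

```python
import time

FPS = 24                    # a typical film frame rate mentioned above
frame_interval = 1.0 / FPS  # ~0.0417 s between successive images

def show(frame_number):
    # Stand-in for drawing the next, slightly advanced image to the screen.
    print(f"frame {frame_number} displayed")

for frame in range(5):      # a few frames are enough to show the pacing
    show(frame)
    time.sleep(frame_interval)
```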
Virtual worlds
A virtual world is a simulated environment which allows users to interact with animated characters, or to interact with other users through animated characters known as avatars. Virtual worlds are intended for their users to inhabit and interact in, and the term today has become largely synonymous with interactive 3D virtual environments, where the users take the form of avatars visible to others graphically.
These avatars are usually depicted as textual, two-dimensional, or three-dimensional graphical representations, although other forms are possible (auditory and touch sensations for example). Some, but not all, virtual worlds allow for multiple users.
In courtrooms:
Computer-generated imagery has been used in courtrooms, primarily since the early 2000s, to help judges or juries better visualize the sequence of events, evidence, or a hypothesis. However, some experts have argued that such exhibits can be prejudicial.
A 1997 study showed that people are poor intuitive physicists and are easily influenced by computer-generated images.
Thus it is important that jurors and other legal decision-makers be made aware that such exhibits are merely a representation of one potential sequence of events.
See Also:
- 3D modeling
- Anime Studio
- Animation database
- List of computer-animated films
- Blender (software) - DIY CGI
- Digital image
- Parallel rendering
- Photoshop is the industry standard commercial digital photo editing tool. Its FLOSS counterpart is GIMP.
- Poser - DIY CGI optimized for soft models
- Ray tracing (graphics)
- Real-time computer graphics
- Shader
- Virtual human
- Virtual Physiological Human
- Maya, 3ds Max, Cinema4D, E-on Vue, Poser, and Blender are popular software packages that allow 3D modeling and CGI creation - see also the list of 3D computer graphics software.
Computer Software, including a List of Software
YouTube Video: Computer Software Basics
Illustration below: Computer Software Development Hierarchy, showing:
TOP: the operating systems they run under;
CENTER: the programming languages they require;
BOTTOM: the industry sectors they serve.
Click here for a List of Software.
Computer software, or simply software, is a generic term that refers to a collection of data or computer instructions that tell the computer how to work, in contrast to the physical hardware from which the system is built and which actually performs the work.
In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own.
At the lowest level, executable code consists of machine language instructions specific to an individual processor—typically a central processing unit (CPU). A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state.
For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also (indirectly) cause something to appear on a display of the computer system—a state change which should be visible to the user.
The processor carries out the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction or is interrupted by the operating system. (Multi-core processors are now dominant; each core can run instructions in order, but by default a given application runs on only one core, although some software has been written to use many cores.)
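As a tiny sketch of "instructions executed in order unless told to jump" (a made-up instruction set in Python, not any real machine language):

```python
# A toy instruction stream: the "processor" below runs instructions in order,
# except when a conditional jump changes the program counter.
program = [
    ("SET", "a", 1),
    ("SET", "b", 0),
    ("ADD", "b", "a"),           # b = b + a
    ("JUMP_IF_LT", "b", 3, 2),   # if b < 3, jump back to the ADD at index 2
    ("PRINT", "b"),
]

registers = {}
pc = 0  # program counter
while pc < len(program):
    op, *args = program[pc]
    if op == "SET":
        registers[args[0]] = args[1]
        pc += 1
    elif op == "ADD":
        registers[args[0]] += registers[args[1]]
        pc += 1
    elif op == "JUMP_IF_LT":
        pc = args[2] if registers[args[0]] < args[1] else pc + 1
    elif op == "PRINT":
        print(registers[args[0]])  # prints 3
        pc += 1
```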
The majority of software is written in high-level programming languages that are easier and more efficient for programmers to use because they are closer than machine languages to natural languages. High-level languages are translated into machine language using a compiler or an interpreter or a combination of the two.
Software may also be written in a low-level assembly language, which has strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler.
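To illustrate the close correspondence between assembly mnemonics and machine instructions, here is a deliberately simplified, hypothetical "assembler" sketch (the mnemonics, opcodes, and addresses are all invented):

```python
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}  # invented numeric encoding

assembly = [
    ("LOAD", 10),   # load the value at address 10
    ("ADD", 11),    # add the value at address 11
    ("STORE", 12),  # store the result at address 12
]

# Assembling is essentially a one-to-one translation from mnemonics to opcodes.
machine_code = [(OPCODES[mnemonic], operand) for mnemonic, operand in assembly]
print(machine_code)  # [(1, 10), (2, 11), (3, 12)]
```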
Click on any of the following blue hyperlinks for more about Computer Software.
Computer Hardware including a List of Computer Hardware Companies
YouTube Video about Computer Basics: Hardware
YouTube Video: How a CPU is made
Pictured below: Hardware components
Click here for a List of Computer Hardware Companies.
Computer hardware is the collection of physical parts or components of a computer, such as the central processing unit, monitor, keyboard, computer data storage, graphics card, sound card and motherboard. By contrast, software is the set of instructions that can be stored and run by hardware.
Hardware is directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system.
Von Neumann architecture:
Main article: Von Neumann architecture
The template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann.
This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms.
The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the Von Neumann bottleneck and often limits the performance of the system.
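A minimal, hypothetical stored-program toy in Python can make the shared-memory idea concrete: instructions and data live in the same memory, so every instruction fetch and every data access goes over the same "bus".

```python
# One memory holds both instructions and data (the stored-program idea).
memory = [
    ("LOAD", 5),     # address 0: instruction
    ("ADD", 6),      # address 1: instruction
    ("STORE", 7),    # address 2: instruction
    ("HALT", None),  # address 3: instruction
    None,            # address 4: unused
    20,              # address 5: data
    22,              # address 6: data
    0,               # address 7: data (result)
]

pc, acc, bus_accesses = 0, 0, 0
while True:
    op, addr = memory[pc]; bus_accesses += 1   # instruction fetch uses the bus
    if op == "HALT":
        break
    if op == "LOAD":
        acc = memory[addr]; bus_accesses += 1  # data access also uses the bus
    elif op == "ADD":
        acc += memory[addr]; bus_accesses += 1
    elif op == "STORE":
        memory[addr] = acc; bus_accesses += 1
    pc += 1

print(memory[7], "computed with", bus_accesses, "memory accesses")  # 42, 7
```

Because fetches and data accesses cannot overlap in this model, memory traffic becomes the limiting factor — the bottleneck described above.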
Sales:
For the third consecutive year, U.S. business-to-business channel sales (sales through distributors and commercial resellers) increased, ending 2013 up nearly 6 percent at $61.7 billion. The growth was the fastest sales increase since the end of the recession. Sales growth accelerated in the second half of the year peaking in fourth quarter with a 6.9 percent increase over the fourth quarter of 2012.
Different Systems:
There are a number of different types of computer system in use today.
Personal computer
The personal computer, also known as the PC, is one of the most common types of computer due to its versatility and relatively low price. Laptops are generally very similar, although they may use lower-power or reduced-size components, which can result in lower performance.
Case:
Main article: Computer case
The computer case encloses most of the components of the system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives, and power supplies, and controls and directs the flow of cooling air over internal components.
The case is also part of the system to control electromagnetic interference radiated by the computer, and protects internal parts from electrostatic discharge. Large tower cases provide extra internal space for multiple disk drives or other peripherals and usually stand on the floor, while desktop cases provide less expansion room.
All-in-one style designs from Apple, namely the iMac, and similar types, include a video display built into the same case. Portable and laptop computers require cases that provide impact protection for the unit. A current development in laptop computers is a detachable keyboard, which allows the system to be configured as a touch-screen tablet. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding.
Power Supply:
Main article: Power supply unit (computer)
A power supply unit (PSU) converts alternating current (AC) electric power to low-voltage DC power for the internal components of the computer. Laptops are capable of running from a built-in battery, normally for a period of hours.
Motherboard:
Main article: Motherboard
The motherboard is the main component of a computer. It is a board with integrated circuitry that connects the other parts of the computer including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others) as well as any peripherals connected via the ports or the expansion slots.
Components directly attached to, or forming part of, the motherboard include:
- The CPU (central processing unit), which performs most of the calculations that enable a computer to function and is sometimes referred to as the brain of the computer. It is usually cooled by a heatsink and fan, or by a water-cooling system. Most newer CPUs include an on-die graphics processing unit (GPU). A CPU's clock speed governs how fast it executes instructions and is measured in GHz; typical values lie between 1 GHz and 5 GHz (see the rough throughput arithmetic after this list). Many modern computers allow the CPU to be overclocked, which enhances performance at the expense of greater thermal output and thus a need for improved cooling.
- The chipset, which includes the north bridge, mediates communication between the CPU and the other components of the system, including main memory.
- Random-access memory (RAM), which stores the code and data that are being actively accessed by the CPU. For example, when a web browser is opened on the computer it takes up memory; this is stored in the RAM until the web browser is closed. RAM usually comes on DIMMs in sizes such as 2 GB, 4 GB, and 8 GB, but can be much larger.
- Read-only memory (ROM), which stores the BIOS that runs when the computer is powered on or otherwise begins execution, a process known as Bootstrapping, or "booting" or "booting up". The BIOS (Basic Input Output System) includes boot firmware and power management firmware. Newer motherboards use Unified Extensible Firmware Interface (UEFI) instead of BIOS.
- Buses that connect the CPU to various internal components and to expansion cards for graphics and sound.
- The CMOS battery, which powers the memory for date and time in the BIOS chip. This battery is generally a watch battery.
- The video card (also known as the graphics card), which processes computer graphics. More powerful graphics cards are better suited to handle strenuous tasks, such as playing intensive video games.
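As promised above, a back-of-the-envelope throughput estimate in Python (the clock speed and instructions-per-cycle figures below are purely illustrative assumptions; real throughput depends heavily on the workload and microarchitecture):

```python
clock_ghz = 3.5               # hypothetical clock speed within the 1-5 GHz range above
instructions_per_cycle = 2.0  # assumed average IPC, purely illustrative

cycles_per_second = clock_ghz * 1e9
estimate = cycles_per_second * instructions_per_cycle
print(f"~{estimate:.2e} instructions per second (rough estimate only)")
```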
Expansion Cards:
Main article: Expansion card
An expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or backplane to add functionality to a computer system via the expansion bus. Expansion cards can be used to obtain or expand on features not offered by the motherboard.
Storage Devices:
Main article: Computer data storage
A storage device is any computing hardware and digital media that is used for storing, porting and extracting data files and objects. It can hold and store information both temporarily and permanently, and can be internal or external to a computer, server or any similar computing device. Data storage is a core function and fundamental component of computers.
Fixed Media:
Data is stored by a computer using a variety of media. Hard disk drives are found in virtually all older computers, due to their high capacity and low cost, but solid-state drives are faster and more power efficient, although currently more expensive than hard drives in terms of dollar per gigabyte, so are often found in personal computers built post-2007. Some systems may use a disk array controller for greater performance or reliability.
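The dollar-per-gigabyte comparison mentioned above is simple arithmetic; the prices and capacities in this Python sketch are made-up placeholders, since real prices change constantly:

```python
drives = {
    "hypothetical 2 TB hard disk drive": (60.0, 2000),    # (price in USD, capacity in GB)
    "hypothetical 1 TB solid-state drive": (90.0, 1000),
}

for name, (price, capacity_gb) in drives.items():
    print(f"{name}: ${price / capacity_gb:.3f} per GB")
```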
Removable Media:
To transfer data between computers, a USB flash drive or optical disc may be used. Their usefulness depends on being readable by other systems; the majority of machines have an optical disk drive, and virtually all have at least one USB port.
Input and Output Peripherals:
Main article: Peripheral
Input and output devices are typically housed externally to the main computer chassis. The following are either standard or very common to many computer systems.
As Input: Input devices allow the user to enter information into the system, or control its operation. Most personal computers have a mouse and keyboard, but laptop systems typically use a touchpad instead of a mouse. Other input devices include webcams, microphones, joysticks, and image scanners.
As Output: Output devices display information in a human readable form. Such devices could include printers, speakers, monitors or a Braille embosser.
Click on any of the following blue hyperlinks for more about Computer Hardware:
- Mainframe computer
- Departmental computing
- Supercomputer
- Hardware upgrade
- Recycling
- See also:
- Computer architecture
- Electronic hardware
- Glossary of computer hardware terms
- History of computing hardware
- List of computer hardware manufacturers
- Open-source computing hardware
- Media related to Computer hardware at Wikimedia Commons
- Learning materials related to Computer hardware at Wikiversity
IEEE Computer Society -- Official Web Site
YouTube Video: Welcome to the IEEE Computer Society*
(* -- The IEEE Computer Society is the world's leading organization of computing professionals. We foster technology innovation for the benefit of humanity.)
IEEE Computer Society (sometimes abbreviated Computer Society or CS) is a professional society of IEEE. Its purpose and scope is "to advance the theory, practice, and application of computer and information processing science and technology" and the "professional standing of its members." The CS is the largest of 39 technical societies organized under the IEEE Technical Activities Board.
The Computer Society sponsors workshops and conferences, publishes a variety of peer-reviewed literature, operates technical committees, and develops IEEE computing standards.
The society supports more than 200 chapters worldwide and participates in educational activities at all levels of the profession, including distance learning, accreditation of higher education programs in computer science, and professional certification in software engineering.
The IEEE Computer Society is also a member organization of the Federation of Enterprise Architecture Professional Organizations (a worldwide association of professional organizations which have come together to provide a forum to standardize, professionalize, and otherwise advance the discipline of Enterprise Architecture).
Click on any of the following blue hyperlinks for more about the IEEE Computer Society:
- History
- Main activities
- See also:
Adobe Systems
Pictured: A Sampling of Products that Adobe Systems offers to web developers
Adobe Inc. is an American multinational computer software company headquartered in San Jose, California. It has historically focused upon the creation of multimedia and creativity software products, with a more recent foray towards digital marketing software.
Adobe is best known for its deprecated Adobe Flash web software ecosystem, Photoshop image editing software, Acrobat Reader, the Portable Document Format (PDF), and Adobe Creative Suite, as well as its successor Adobe Creative Cloud.
Adobe was founded in December 1982 by John Warnock and Charles Geschke, who established the company after leaving Xerox PARC in order to develop and sell the PostScript page description language.
In 1985, Apple Computer licensed PostScript for use in its LaserWriter printers, which helped spark the desktop publishing revolution.
As of 2018, Adobe has about 19,000 employees worldwide, about 40% of whom work in San Jose. Adobe also has major development operations in the following:
- Newton, Massachusetts;
- New York City, New York;
- Minneapolis, Minnesota;
- Lehi, Utah;
- Seattle, Washington;
- and San Francisco, California in the United States.
Adobe also has major development operations in Noida and Bangalore in India.
Products:
Main article: List of Adobe software
Graphic design software:
- Adobe Photoshop,
- Adobe Pagemaker,
- Adobe Lightroom,
- Adobe InDesign,
- Adobe InCopy,
- Adobe ImageReady,
- Adobe Illustrator,
- Adobe Freehand,
- Adobe FrameMaker,
- Adobe Fireworks,
- Adobe Acrobat,
- Adobe XD
Web design programs:
- Adobe Muse,
- Adobe GoLive,
- Adobe Flash Builder,
- Adobe Flash,
- Adobe Edge,
- Adobe Dreamweaver,
- Adobe Contribute
Video editing, animation, and visual effects:
- Adobe Ultra,
- Adobe Spark Video,
- Adobe Premiere Pro,
- Adobe Premiere Elements,
- Adobe Prelude,
- Adobe Encore,
- Adobe Director,
- Adobe Animate,
- Adobe After Effects,
- Adobe Character Animator
Audio editing software:
eLearning software:
- Adobe Captivate Prime (LMS platform),
- Adobe Captivate,
- Adobe Presenter Video Express and Adobe Connect (also a web-conferencing platform)
Digital Marketing Management Software:
- Adobe Marketing Cloud,
- Adobe Experience Manager (AEM 6.2),
- XML Documentation add-on (for AEM),
- Mixamo
Server software:
Formats:
- Portable Document Format (PDF),
- PDF's predecessor PostScript,
- ActionScript,
- Shockwave Flash (SWF),
- Flash Video (FLV),
- and Filmstrip (.flm)
Web-hosted services:
- Adobe Color,
- Photoshop Express,
- Acrobat.com, and Adobe Spark
Adobe Stock:
A microstock agency that presently provides over 57 million high-resolution, royalty-free images and videos available to license (via subscription or credit purchase methods). On December 11, 2014, Adobe announced it was buying Fotolia for $800 million in cash, aiming to integrate the service into its Creative Cloud offering. The purchase was completed in January 2015. It is run as a stand-alone website.
Adobe Experience Platform:
In March 2019, Adobe released its Adobe Experience Platform, which consists of a family of content, development, and customer relationship management products, built on what it calls the “next generation” of its Sensei artificial intelligence and machine learning framework.
Click on any of the following blue hyperlinks for more about Adobe Systems:
- History
- Finances
- Reception
- Criticisms
- See also:
- Official website
- Business data for Adobe Inc.:
- "Adobe Logo History".
- "Adobe timeline" (PDF).
- The Real Lesson in the Adobe Announcement - Lenswork Daily: Podcast
- "Patents owned by Adobe Systems". US Patent & Trademark Office. Retrieved December 8, 2005.
- San Jose Semaphore on Adobe's building
- Adobe Inc. portal
- Adobe MAX
- Adobe Solutions Network
- Digital rights management (DRM)
- List of acquisitions by Adobe Systems
- List of Adobe software
- US v. ElcomSoft Sklyarov
Microsoft Corporation
- YouTube Video: Top 20 Windows 10 Tips and Tricks
- YouTube Video: Top 25 Excel 2016 Tips and Tricks
- YouTube Video: Mastering Microsoft Word
Microsoft Corporation (MS) is an American multinational technology company with headquarters in Redmond, Washington. It develops, manufactures, licenses, supports and sells computer software, consumer electronics, personal computers, and related services.
Microsoft's best known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers.
Microsoft's flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers. As of 2016, it is the world's largest software maker by revenue, and one of the world's most valuable companies.
The word "Microsoft" is a portmanteau of "microcomputer" and "software". Microsoft is ranked No. 30 in the 2018 Fortune 500 rankings of the largest United States corporations by total revenue.
Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Microsoft Windows.
The company's 1986 initial public offering (IPO), and subsequent rise in its share price, created three billionaires and an estimated 12,000 millionaires among Microsoft employees.
Since the 1990s, it has increasingly diversified from the operating system market and has made a number of corporate acquisitions, its largest being the acquisition of LinkedIn for $26.2 billion in December 2016, followed by its acquisition of Skype Technologies for $8.5 billion in May 2011.
As of 2015, Microsoft is market-dominant in the IBM PC-compatible operating system market and the office software suite market, although it has lost the majority of the overall operating system market to Android.
The company also produces a wide range of other consumer and enterprise software for desktops and servers, including Internet search (with Bing), the digital services market (through MSN), mixed reality (HoloLens), cloud computing (Azure) and software development (Visual Studio).
Steve Ballmer replaced Gates as CEO in 2000 and later envisioned a "devices and services" strategy. This began with the acquisition of Danger Inc. in 2008, continued with Microsoft's entry into the personal computer production market in June 2012 with the launch of the Microsoft Surface line of tablet computers, and later included the formation of Microsoft Mobile through the acquisition of Nokia's devices and services division.
Since Satya Nadella took over as CEO in 2014, the company has scaled back on hardware and has instead focused on cloud computing, a move that helped the company's shares reach its highest value since December 1999.
In 2018, Microsoft surpassed Apple as the most valuable publicly traded company in the world, reclaiming the position it had lost to Apple in 2010.
Click on any of the following blue hyperlinks for more about Microsoft:
- History
- 1972–1985: The founding of Microsoft
- 1985–1994: Windows and Office
- 1995–2007: Foray into the Web, Windows 95, Windows XP, and Xbox
- 2007–2011: Microsoft Azure, Windows Vista, Windows 7, and Microsoft Stores
- 2011–2014: Windows 8/8.1, Xbox One, Outlook.com, and Surface devices
- 2014–present: Windows 10, Microsoft Edge and HoloLens
- Corporate affairs
- Corporate identity
- See also:
- Official website
- Business data for Microsoft Corporation:
- Microsoft companies grouped at OpenCorporates
- List of mergers and acquisitions by Microsoft
- Microsoft engineering groups
- Microsoft Enterprise Agreement
Media Player Software
- YouTube Video: Top 5 Best FREE Video Players for Windows
- YouTube Video: Top 10 Best Media Player For PC Windows And MAC - 2018
A media player is a computer program for playing multimedia files such as audio, video, movies, and music.
Media players commonly display standard media control icons known from physical devices such as tape recorders and CD players: play, pause, fast-forward, rewind, and stop buttons. In addition, they generally have progress bars (or "playback bars") to locate the current position in the duration of the media file.
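The controls described above map naturally onto a small piece of playback state. Below is a minimal, hypothetical Python sketch (not the API of any real player) of the play/pause/stop controls and a playback-bar position:

from dataclasses import dataclass

@dataclass
class TransportState:
    duration_s: float        # total length of the loaded media file, in seconds
    position_s: float = 0.0  # current playback position
    playing: bool = False

    def play(self):
        self.playing = True

    def pause(self):
        self.playing = False

    def stop(self):
        self.playing = False
        self.position_s = 0.0

    def seek(self, seconds: float):
        # Clamp the requested position to the file's duration, as a playback bar would.
        self.position_s = max(0.0, min(seconds, self.duration_s))

    def progress(self) -> float:
        # Fraction of the media already played, used to draw the playback bar.
        return self.position_s / self.duration_s if self.duration_s else 0.0

state = TransportState(duration_s=180.0)  # a hypothetical three-minute file
state.play()
state.seek(45.0)
print(f"{state.progress():.0%} played")   # prints "25% played"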
Mainstream operating systems have at least one built-in media player. For example:
- Windows comes with Windows Media Player,
- macOS comes with QuickTime Player and iTunes,
- many Linux distributions also ship with a media player, and
- Android comes with Google Play Music as the default media player, along with many apps such as Poweramp and VLC Media Player.
Functionality Focus:
Different media players may have different goals and feature sets. Video players are a group of media players that have their features geared more towards playing digital video.
For example, Windows DVD Player plays DVD-Video discs and nothing else. Media Player Classic can play individual audio and video files, but many of its features, such as color correction, picture sharpening, zooming, a set of hotkeys, DVB support, and subtitle support, are only useful for video material such as films and cartoons. Audio players, on the other hand, specialize in digital audio.
For example, AIMP exclusively plays audio formats. MediaMonkey can play both audio and video formats, but many of its features, including its media library, lyric discovery, music visualization, online radio, audiobook indexing, and tag editing, are geared toward consumption of audio material, and watching video files with it can be awkward.
General-purpose media players also exist. For example, Windows Media Player has dedicated features for both audio and video material, although it cannot match the combined feature set of Media Player Classic and MediaMonkey.
3D Video Players:
3D video players are used to play 2D video in 3D format. A high-quality three-dimensional video presentation requires that each frame of a motion picture be embedded with information on the depth of objects present in the scene.
This process involves shooting the video with special equipment from two distinct perspectives or modelling and rendering each frame as a collection of objects composed of 3D vertices and textures, much like in any modern video game, to achieve special effects.
Tedious and costly, this method is only used in a small fraction of movies produced worldwide, while most movies remain in the form of traditional 2D images. It is, however, possible to give an otherwise two-dimensional picture the appearance of depth.
Using a technique known as anaglyph processing a "flat" picture can be transformed so as to give an illusion of depth when viewed through anaglyph glasses (usually red-cyan). An image viewed through anaglyph glasses appears to have both protruding and deeply embedded objects in it, at the expense of somewhat distorted colours.
The method itself is old, dating back to the mid-19th century, but it is only with recent advances in computer technology that it has become possible to apply this kind of transformation to a series of frames in a motion picture reasonably fast, or even in real time, i.e. as the video is being played back.
Several implementations exist in the form of 3D video players that render conventional 2D video in anaglyph 3D, as well as in the form of 3D video converters that transform video into stereoscopic anaglyph and transcode it for playback with regular software or hardware video players.
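As a rough illustration of the red-cyan anaglyph idea, the sketch below combines two views of a scene into a single anaglyph frame. It assumes the third-party NumPy and Pillow packages are installed and uses hypothetical file names; for a 2D-to-3D conversion, the "right" view is often just the same frame shifted a few pixels horizontally.

import numpy as np
from PIL import Image

# Hypothetical input files: two views of the same scene, identical dimensions.
left = np.asarray(Image.open("left.png").convert("RGB"), dtype=np.uint8)
right = np.asarray(Image.open("right.png").convert("RGB"), dtype=np.uint8)

# Red channel from the left-eye view; green and blue (cyan) from the right-eye view.
anaglyph = right.copy()
anaglyph[..., 0] = left[..., 0]

Image.fromarray(anaglyph).save("anaglyph.png")  # view through red-cyan glasses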
Home Theater PC:
Main article: Home theater PC
A home theater PC or media center computer is a convergence device that combines some or all the capabilities of a personal computer with a software application that supports video, photo, audio playback, and sometimes video recording functionality.
Although computers with some of these capabilities were available from the late 1980s, the "Home Theater PC" term first appeared in the mainstream press in 1996. Since 2007, other types of consumer electronics, including gaming systems and dedicated media devices, have crossed over to manage video and music content.
The term "media center" also refers to specialized computer programs designed to run on standard personal computers.
See also:
Open Source (Code) Model vs. Proprietary Software
Top: What Is the iPhone OS (iOS)?
Bottom: Some Excellent Features of Android
- YouTube Video: Android vs. iOS - Differences That Matter
- YouTube Video: 10 Reasons Android Phones are Better than iPhones (2018)
- YouTube Video: 10 Reasons why Apple iPhone is Better than Android (2018)
The open-source model is a decentralized software development model that encourages open collaboration. A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public.
The open-source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as in open-source appropriate technology, and open-source drug discovery.
Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint. Before the phrase open source became widely adopted, developers and producers used a variety of other terms. Open source took hold with the rise of the Internet. The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues.
Generally, open source refers to a computer program in which the source code is available to the general public for use or modification from its original design. Open-source code is meant to be a collaborative effort, where programmers improve upon the source code and share the changes within the community.
Code is released under the terms of a software license. Depending on the license terms, others may then download, modify, and publish their version (fork) back to the community.
Many large formal institutions have sprung up to support the development of the open-source movement, including the Apache Software Foundation, which supports community projects such as the open-source framework Apache Hadoop and the open-source HTTP server Apache HTTP.
Click on any of the following blue hyperlinks for more about Open Source Model:
- History
- Economics
- Open-source applications
- Society and culture
- See also:
- Lists
- Terms based on open source
- Other
- Open Sources: Voices from the Open Source Revolution (book)
- Business models for open-source software
- Collaborative intelligence
- Commons-based peer production
- Commercial open-source applications
- Community source
- Digital freedom
- Diseconomy of scale
- Embrace, extend and extinguish
- Free Beer
- Free software
- Gift economy
- Halloween Documents
- Linux
- Mass collaboration
- Network effect
- Open access (publishing)
- Open content
- Open data
- Open-design movement
- Open format
- Open implementation
- Open innovation
- OpenJDK
- Open research
- Open security
- OpenSolaris
- Open Source Ecology
- Open Source Lab (book)
- Comparison of open source and closed source
- Open system (computing)
- Open standard
- OpenDWG
- Openness
- Peer production
- Proprietary software
- Shared source
- Sharing economy
- Vendor lock-in
- Web literacy (Open Practices)
- Open Source Malaria
- Open Source Pharma
- Open Source Tuberculosis
- What is open source? (opensource.com)
- "An open-source shot in the arm?" The Economist, Jun 10th 2004
- Google-O'Reilly Open Source Awards
- UNU/IIST Open Source Software Certification
- Open Source Open World – Open Standards Throughout the Globe
- The Changelog, a podcast and blog that covers what's fresh and new in Open Source (essentially covering "the changelog" of open source projects)
- Can We Open Source Everything? The Future of the Open Philosophy. University of Cambridge.
Proprietary software, also known as "closed-source software", is non-free computer software for which the software's publisher or another person retains intellectual property rights—usually copyright of the source code, but sometimes patent rights.
Until the late 1960s computers—large and expensive mainframe computers, machines in specially air-conditioned computer rooms—were leased to customers rather than sold.
Service and all software available were usually supplied by manufacturers without separate charge until 1969. Computer vendors usually provided the source code for installed software to customers. Customers who developed software often made it available to others without charge.
Closed source means computer programs whose source code is not published. It is available to be edited only by the organization that developed it.
In 1969, IBM, which had antitrust lawsuits pending against it, led an industry change by starting to charge separately for mainframe software and services, by un-bundling hardware and software.
Bill Gates' "Open Letter to Hobbyists" in 1976 decried computer hobbyists' rampant copyright infringement of software, particularly Microsoft's Altair BASIC interpreter, and reminded his audience that their theft from programmers hindered his ability to produce quality software.
According to Brewster Kahle, the legal character of software also changed as a result of the U.S. Copyright Act of 1976.
Starting in February 1983 IBM adopted an "object-code-only" model for a growing list of their software and stopped shipping source code.
In 1983, binary software also became copyrightable in the United States through the Apple v. Franklin court decision; before that, only source code was copyrightable.
Additionally, the growing availability of millions of computers based on the same microprocessor architecture created, for the first time, a market large and unfragmented enough for binary-distributed software.
Click on any of the following blue hyperlinks for more about Proprietary Software:
- Legal basis
- Exclusive rights
- Interoperability with software and hardware
- Abandonment by owners
- Formerly open-source software
- Pricing and economics
- Examples
- See also:
Microsoft Windows Operating Systems
- YouTube Video: Operating System Basics
- YouTube Video: Microsoft Windows: Entire History in 3 Minutes
- YouTube Video: Mac vs. PC: The Windows 10 Edition
Microsoft Windows is a group of several graphical operating system families, all of which are developed, marketed, and sold by Microsoft. Each family caters to a certain sector of the computing industry.
Active Windows families include Windows NT and Windows Embedded; these may encompass subfamilies, e.g. Windows Embedded Compact (Windows CE) or Windows Server.
Defunct Windows families include Windows 9x, Windows Mobile and Windows Phone.
Microsoft introduced an operating environment named Windows on November 20, 1985, as a graphical operating system shell for MS-DOS in response to the growing interest in graphical user interfaces (GUIs). Microsoft Windows came to dominate the world's personal computer (PC) market with over 90% market share, overtaking Mac OS, which had been introduced in 1984.
Apple came to see Windows as an unfair encroachment on their innovation in GUI development as implemented on products such as the Lisa and Macintosh (eventually settled in court in Microsoft's favor in 1993).
On PCs, Windows is still the most popular operating system. However, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones.
In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. Still, figures for server use of Windows (which are comparable to competitors') show about a one-third market share, similar to that for end-user use.
As of October 2018, the most recent version of Windows for PCs, tablets, smartphones and embedded devices is Windows 10. The most recent version for server computers is Windows Server 2019. A specialized version of Windows runs on the Xbox One video game console.
Click on any of the following hyperlinks for more about Microsoft Windows:
- Genealogy
- Version history
- Version control system
- Timeline of releases
- Usage share and device sales
- Security
- Alternative implementations
- See also:
- Official website
- Microsoft Developer Network
- Windows Client Developer Resources
- Microsoft Windows History Timeline
- Pearson Education, InformIT – History of Microsoft Windows
- Microsoft Windows 7 for Government
- Architecture of Windows NT
- Wintel
- De facto standard
- Dominant design
- Azure Sphere, Microsoft's Linux-based operating system
- Windows Subsystem for Linux, a subsystem in Windows 10, not using the Linux kernel.
Microsoft Office Software Suite including Microsoft Outlook
- YouTube Video: MS Office Word Tutorial
- YouTube Video: MS Office Excel Tutorial
- YouTube Video: MS Office Powerpoint Tutorial
Microsoft Office (or simply Office) is a family of client software, server software, and services developed by Microsoft. It was first announced by Bill Gates on August 1, 1988, at COMDEX in Las Vegas.
Initially a marketing term for an office suite (bundled set of productivity applications), the first version of Office contained Microsoft Word, Microsoft Excel, and Microsoft PowerPoint.
Over the years, Office applications have grown substantially closer with shared features such as a common spell checker, OLE data integration and Visual Basic for Applications scripting language.
Microsoft also positions Office as a development platform for line-of-business software under the Office Business Applications brand. On July 10, 2012, Softpedia reported that Office is used by over a billion people worldwide.
Office is produced in several versions targeted towards different end-users and computing environments. The original, and most widely used version, is the desktop version, available for PCs running the Windows and macOS operating systems. Office Online is a version of the software that runs within a web browser, while Microsoft also maintains Office apps for Android and iOS.
Since Office 2013, Microsoft has promoted Office 365 as the primary means of obtaining Microsoft Office: it allows use of the software and other services on a subscription business model, and users receive free feature updates to the software for the lifetime of the subscription, including new features and cloud computing integration that are not necessarily included in the "on-premises" releases of Office sold under conventional license terms. In 2017, revenue from Office 365 overtook conventional license sales.
The current on-premises, desktop version of Office is Office 2019, released on September 24, 2018.
Click on any of the following blue hyperlinks for more about Microsoft Office:
- Components
- Office Mobile
- Common features
- File formats and metadata
- Extensibility
- Password protection
- Support policies
- Platforms
- Pricing model and editions
- Discontinued applications and features
- Criticism
- Tables of versions
- Version history
- See also:
Microsoft Outlook is a personal information manager from Microsoft, available as a part of the Microsoft Office suite (see above). Although often used mainly as an email application, it also includes a calendar, task manager, contact manager, note taking, journal, and web browsing.
Outlook can be used as a stand-alone application, or can work with Microsoft Exchange Server and Microsoft SharePoint Server for multiple users in an organization, such as shared mailboxes and calendars, Exchange public folders, SharePoint lists, and meeting schedules.
Microsoft has also released mobile applications for most mobile platforms, including iOS and Android. Developers can also create their own custom software that works with Outlook and Office components using Microsoft Visual Studio. In addition, Windows Phone devices can synchronize almost all Outlook data to Outlook Mobile.
Click on any of the following blue hyperlinks for more about Microsoft Outlook:
- Versions
- Internet standards compliance
- Security concerns
- Outlook add-ins
- Importing from other email clients
- See also:
- Official website
- Outlook Developer Portal
- Address book
- Calendar (Apple)—iCal
- Comparison of email clients
- Comparison of feed aggregators
- Comparison of office suites
- Evolution (software)
- Kontact
- List of applications with iCalendar support
- List of personal information managers
- Personal Storage Table (.pst file)
- Windows Contacts
Corel including its Software Suite
- YouTube Video: CorelDRAW - Full Tutorial for Beginners
- YouTube Video: Corel Painter Tutorial: Painting from a Photograph
- YouTube Video: Getting Started with VideoStudio
Corel Corporation (from the abbreviation "Cowpland Research Laboratory") is a Canadian software company headquartered in Ottawa, Ontario, specializing in graphics processing.
Corel is known for producing software titles such as CorelDRAW, and for acquiring PaintShop Pro, Painter, Video Studio and WordPerfect.
Products:
- Corel Chess - using a chess engine developed by Don Dailey and Larry Kaufman
- Corel Designer – Formerly Micrografx Designer, professional technical illustration software.
- Corel Digital Studio – a set of four applications: PaintShop Photo Express (a light version of Paint Shop Pro), VideoStudio Express (video-editing software), DVD Factory (DVD burning and converting software), WinDVD (DVD player software).
- CorelDRAW – A vector graphics editor.
- Corel Graphics Suite – Combination of CorelDRAW, PhotoPaint, and Capture.
- Corel Home Office – an office suite based on Ability Office 5 and also bundling Corel's WinZip software. It is incompatible with Corel's own WordPerfect file formats.
- Corel KnockOut – Professional image masking plug-in.
- Corel Paint It! Touch – Drawing and painting software created specifically for the Windows 8 touchscreen PCs.
- Corel Painter – a program that emulates natural media – paint, crayons, brushes, etc. (formerly Fractal Painter).
- Corel Photo Album – A sophisticated program for organizing digital photographs, inherited from Jasc Software.
- Corel Photo-Paint – A bitmap graphics program comparable to Adobe Photoshop, bundled with the CorelDRAW Graphics Suite.
- Corel SnapFire – A digital photo management suite, positioned to compete with Google's Picasa offering, later developed and marketed as Corel MediaOne.
- Corel Ventura – Desktop publishing software that had a large and loyal following for its DOS version when Corel acquired it in the early 1990s. It was briefly revived in 2002.
- Corel Linux OS (discontinued) – One of the first GUI-based distributions of Linux incorporating an automatic installation program in 1999.
- CorelCAD – 2D and 3D computer-aided drafting software.
Acquired Products:
- AfterShot Pro – Photo management software, based on Bibble after the acquisition of Bibble Labs in 2012.
- Avid Studio – A video and audio editor specializing in production technology. Avid Studio was renamed Pinnacle Studio in September 2012.
- Bryce – Software for creating 3D landscapes. Sold in 2004 to DAZ Productions.
- Click and Create – A game development tool created by Clickteam that was also sold as The Games Factory. Click and Create 2 was sold to IMSI who released it as Multimedia Fusion.
- MindJet MindManager – In August 2016, Corel purchased Mindjet as per announcement.
- PaintShop Pro – In October 2004, Corel purchased Jasc Software, developer of this budget-priced bitmap graphics editing program.
- Paradox – A relational database acquired from Borland and bundled with WordPerfect Office Professional Edition.
- Parallels – a range of virtualization products, sold as Parallels Desktop for Mac, Parallels Server for Mac, Parallels Workstation, and Parallels RAS.
- Quattro Pro – A spreadsheet program acquired from Borland and bundled with WordPerfect Office.
- VideoStudio – A digital video editing program originally developed by Ulead Systems. It has been branded Corel VideoStudio since Corel acquired Ulead, which became a working division of Corel.
- WinDVD – A video and music player software, acquired in 2006 from Corel's purchase of InterVideo.
- WinZip – A file archiver and compressor, acquired in 2006 from Corel's purchase of WinZip Computing.
- WordPerfect – A word processing program acquired from Novell, and originally produced by WordPerfect Corporation.
- XMetaL – An XML editor acquired in the takeover of SoftQuad in 2001 and then sold to Blast Radius in 2004.
- Gravit Designer – A cross-platform vector graphics editor acquired in the takeover of Gravit GmbH in 2018.
Click on any of the following blue hyperlinks for more about Corel:
- History
- Corel World Design Contest
- See also:
- Corel website
- CorelDRAW website
- Bridgeman Art Library v. Corel Corp
Computer Data Storage
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away.
Generally, the fast volatile technologies (which lose data when powered off) are referred to as "memory", while slower persistent technologies are referred to as "storage".
In the Von Neumann architecture, the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Functionality:
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data.
Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
Data organization and representation:
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0.
The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
Many standards exist for encoding (e.g., character encodings like ASCII, image encodings like JPEG, video encodings like MPEG-4).
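As a small, self-contained illustration of the points above, the Python snippet below encodes a line of text into bytes and bits under ASCII (one byte, i.e. eight bits, per character), which is the same accounting that puts the complete works of Shakespeare at roughly five megabytes, or about 40 million bits.

text = "To be, or not to be"
data = text.encode("ascii")                      # one byte (8 bits) per character
bits = "".join(f"{byte:08b}" for byte in data)

print(len(data), "bytes =", len(bits), "bits")   # 19 bytes = 152 bits
print(bits[:16])                                 # the bit patterns for 'T' and 'o'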
By adding bits to each encoded unit, redundancy allows the computer both to detect errors in coded data and to correct them based on mathematical algorithms. Errors generally occur with low probability, due to random bit-value flipping, to "physical bit fatigue" (loss of the physical bit's ability to maintain a distinguishable value of 0 or 1), or to errors in inter- or intra-computer communication.
A random bit flip (e.g., due to random radiation) is typically corrected upon detection. A malfunctioning physical bit, or group of bits (the specific defective bit is not always known, and how the group is defined depends on the storage device), is typically fenced out automatically, taken out of use by the device, and replaced with another functioning group, to which the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried.
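A minimal sketch of CRC-based error detection follows, using the CRC-32 routine in Python's standard zlib module; the data block and the bit flip are of course contrived.

import zlib

block = bytearray(b"payload written to storage")
stored_crc = zlib.crc32(block)      # checksum saved alongside the data

block[3] ^= 0b00000100              # simulate a single flipped bit ("bit fatigue")

if zlib.crc32(block) != stored_crc:
    print("CRC mismatch: block is corrupt, retry the read")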
Data compression methods allow, in many cases (such as a database), a string of bits to be represented by a shorter bit string ("compress") and the original string to be reconstructed ("decompress") when needed. This uses substantially less storage (often tens of percent less) for many types of data, at the cost of extra computation to compress and decompress when needed.
The trade-off between the storage cost saved and the cost of the related computation (and any delay in data availability) is analyzed before deciding whether to keep certain data compressed.
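A minimal sketch of that trade-off, using Python's standard zlib module: highly repetitive data shrinks dramatically, at the cost of extra CPU work on every compress and decompress.

import zlib

original = b"2019-03-01,sensor-7,OK;" * 2000          # repetitive, log-like data
compressed = zlib.compress(original, level=6)

print(len(original), "->", len(compressed), "bytes")  # a large reduction here
assert zlib.decompress(compressed) == original        # the round trip is lossless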
For security reasons certain types of data (e.g., credit-card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
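A minimal sketch of encryption at rest, assuming the third-party cryptography package is installed; only the ciphertext would ever be written to storage, with the key kept elsewhere.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must be stored separately from the data itself
vault = Fernet(key)

ciphertext = vault.encrypt(b"4111 1111 1111 1111")    # what actually lands on disk
assert vault.decrypt(ciphertext) == b"4111 1111 1111 1111"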
Hierarchy of storage:
Main article: Memory hierarchy
Generally, the lower a storage tier sits in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit.
In contemporary usage, "memory" is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage.
"Storage" consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).
Historically, memory has been called core memory, main memory, real storage or internal memory. Meanwhile, non-volatile storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
Primary storage:
Main article: Computer memory
Primary storage (also known as main memory, internal memory or prime memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory (RAM). It is small-sized, light, but quite expensive at the same time. (The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered).
Traditionally there are two more sub-layers of primary storage besides main large-capacity RAM:
- Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage.
- Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It was introduced solely to improve the performance of computers. The most actively used information in main memory is duplicated in the cache memory, which is faster but of much lesser capacity. On the other hand, main memory is much slower, but has a much greater storage capacity than processor registers. Multi-level hierarchical cache setups are also commonly used: the primary cache is the smallest and fastest and located inside the processor, while the secondary cache is somewhat larger and slower.
Main memory is directly or indirectly connected to the central processing unit via a memory bus, which is actually two buses: an address bus and a data bus. The CPU first sends a number called the memory address over the address bus to indicate the desired location of data, and then reads or writes the data in those memory cells over the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer.
Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it.
A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates to them are possible; however, updating them is slow, and the memory must be erased in large portions before it can be rewritten.
Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.
Secondary Storage:
Secondary storage (also known as external memory or auxiliary storage), differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage.
Secondary storage is non-volatile (retaining data when power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage.
Rotating optical storage devices, such as CD and DVD drives, have even longer access times.
Other examples of secondary storage technologies include flash memory (e.g., USB flash drives), floppy disks, magnetic tape, and optical discs.
Once the disk read/write head on an HDD reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks.
Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access.
Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory.
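The sketch below gives a rough feel for why block access matters: reading the same file in one-byte requests issues vastly more I/O calls than reading it in large blocks. The file name is hypothetical, and on a modern OS the page cache will hide much of the raw-device difference after the first pass.

import os, time

path = "sample.bin"                                # hypothetical test file
with open(path, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))           # 4 MiB of throwaway data

def read_with_buffer(buf_size):
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:       # unbuffered: each read() is a real request
        while f.read(buf_size):
            pass
    return time.perf_counter() - start

print("1-byte reads: %.3f s" % read_with_buffer(1))
print("1-MiB reads:  %.3f s" % read_with_buffer(1024 * 1024))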
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.
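The sketch below is a toy model of the "least recently used" paging idea described above; it only illustrates the eviction policy, not how any real operating system implements paging.

from collections import OrderedDict

# RAM holds at most `capacity` pages; touching a page makes it most-recent,
# and loading a new page when RAM is full pushes the least-recent page to swap.
capacity = 3
ram = OrderedDict()   # page number -> page contents
swap = {}

def touch(page, contents=None):
    if page in ram:
        ram.move_to_end(page)                     # mark as most recently used
        return ram[page]
    contents = swap.pop(page, contents)           # "page in" from swap if present
    if len(ram) >= capacity:
        evicted, data = ram.popitem(last=False)   # least recently used page
        swap[evicted] = data                      # "page out" to secondary storage
    ram[page] = contents
    return contents

for p in (1, 2, 3, 1, 4):                         # page 2 becomes the LRU and is evicted
    touch(p, contents=f"data-{p}")
print("in RAM:", list(ram), "swapped out:", list(swap))   # in RAM: [3, 1, 4] swapped out: [2]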
Tertiary storage:
See also: Nearline storage and Cloud storage
Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is:
- Online storage is immediately available for I/O.
- Nearline storage is not immediately available, but can be made online quickly without human intervention.
- Offline storage is not immediately available, and requires some human intervention to bring online.
For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage:
Off-line storage is a computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information, since the detached medium can be easily physically transported. Additionally, in case a disaster, for example a fire, destroys the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery.
Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are most popular, and, to a much lesser extent, removable hard disk drives. In enterprise uses, magnetic tape is predominant. Older examples include floppy disks, Zip disks, and punched cards.
Click on any of the following blue hyperlinks for more about Computer Data Storage:
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away.
Generally the fast volatile technologies (which lose data when off power) are referred to as "memory", while slower persistent technologies are referred to as "storage".
In the Von Neumann architecture, the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Functionality:
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data.
Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
Data organization and representation:
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0.
The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
Many standards exist for encoding (e.g., character encodings like ASCII, image encodings like JPEG, video encodings like MPEG-4).
By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur in low probabilities due to random bit value flipping, or "physical bit fatigue", loss of the physical bit in storage of its ability to maintain a distinguishable value (0 or 1), or due to errors in inter or intra-computer communication.
A random bit flip (e.g., due to random radiation) is typically corrected upon detection. A bit, or a group of malfunctioning physical bits (not always the specific defective bit is known; group definition depends on specific storage device) is typically automatically fenced-out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried.
Data compression methods allow, in many cases (such as a database), a string of bits to be represented by a shorter bit string ("compress") and the original string to be reconstructed when needed ("decompress"). This uses substantially less storage (often tens of percent less) for many types of data, at the cost of more computation (compressing and decompressing when needed).
Analysis of trade-off between storage cost saving and costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not.
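The trade-off can be seen in miniature with Python's standard zlib module; the repetitive sample data below is invented for the example, and the timings will vary by machine:

```python
import time
import zlib

# Highly repetitive data (typical of many database columns and log files).
original = b"customer_id,status,region\n" * 100_000

t0 = time.perf_counter()
compressed = zlib.compress(original)
t1 = time.perf_counter()
restored = zlib.decompress(compressed)
t2 = time.perf_counter()

assert restored == original
print(f"original:   {len(original):>10,} bytes")
print(f"compressed: {len(compressed):>10,} bytes "
      f"({100 * len(compressed) / len(original):.1f}% of original)")
print(f"compress   took {t1 - t0:.4f} s")
print(f"decompress took {t2 - t1:.4f} s")
```

For repetitive data the compressed copy is a small fraction of the original, but both directions cost CPU time, which is exactly the trade-off weighed above.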
For security reasons certain types of data (e.g., credit-card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
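As an illustration only (this sketch assumes the third-party Python package cryptography and its Fernet recipe, which is not something specified above), sensitive fields can be encrypted before they are written so that raw storage snapshots are unreadable without the key:

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # kept separately, outside any storage snapshot
cipher = Fernet(key)

card_number = b"4111 1111 1111 1111"
stored_blob = cipher.encrypt(card_number)   # what actually lands on disk

print(stored_blob[:20], b"...")             # unreadable without the key
print(cipher.decrypt(stored_blob))          # original value, given the key
```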
Hierarchy of storage:
Main article: Memory hierarchy
Generally, the lower a storage is in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary and off-line storage is also guided by cost per bit.
In contemporary usage, "memory" is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage.
"Storage" consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).
Historically, memory has been called core memory, main memory, real storage or internal memory. Meanwhile, non-volatile storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
Primary storage:
Main article: Computer memory
Primary storage (also known as main memory, internal memory or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory (RAM), which is small and light but relatively expensive. (The particular types of RAM used for primary storage are also volatile, i.e., they lose the information when not powered.)
Traditionally, there are two more sub-layers of primary storage besides main large-capacity RAM:
- Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage.
- Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It was introduced solely to improve the performance of computers. The most actively used information in main memory is duplicated in the cache memory, which is faster but of much smaller capacity; main memory, in turn, is much slower but has a much greater capacity than processor registers. A multi-level hierarchical cache setup is also commonly used: the primary cache is the smallest and fastest and is located inside the processor, while the secondary cache is somewhat larger and slower. (A small sketch of the caching principle follows this list.)
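The caching principle described above, keeping the most actively used data in a small, fast store in front of a larger, slower one, can be sketched in a few lines of Python. The capacity, the dictionary standing in for slow main memory, and the least-recently-used eviction policy are simplifications chosen for the example:

```python
from collections import OrderedDict

class TinyCache:
    """Sketch of a small, fast store in front of slow main memory (LRU eviction)."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store      # stands in for slow main memory
        self.capacity = capacity          # caches are far smaller than RAM
        self.lines = OrderedDict()        # cached address -> value
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.lines:
            self.hits += 1
            self.lines.move_to_end(address)          # mark as recently used
            return self.lines[address]
        self.misses += 1
        value = self.backing[address]                # slow fetch from memory
        self.lines[address] = value
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)           # evict least recently used
        return value

memory = {addr: addr * 10 for addr in range(1000)}
cache = TinyCache(memory)
for addr in [1, 2, 3, 1, 2, 3, 1, 2]:                # a hot working set
    cache.read(addr)
print(cache.hits, "hits,", cache.misses, "misses")   # 5 hits, 3 misses
```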
Main memory is directly or indirectly connected to the central processing unit via a memory bus, which actually consists of two buses: an address bus and a data bus. The CPU first sends a number called the memory address through the address bus to indicate the desired location of data, and then reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU), a small device between the CPU and RAM, recalculates the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer.
Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it.
A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates to them are possible; however, writing is slow, and the memory must be erased in large portions before it can be rewritten.
Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they use large capacities of secondary storage, which is also non-volatile and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.
Secondary storage:
Secondary storage (also known as external memory or auxiliary storage), differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage.
Secondary storage is non-volatile (retaining data when power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte is typically measured in milliseconds (thousandths of a second) for HDDs and in microseconds (millionths of a second) for SSDs, while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage.
Rotating optical storage devices, such as CD and DVD drives, have even longer access times.
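A back-of-the-envelope calculation makes the gap concrete. The latency figures below are rough, assumed orders of magnitude (about 100 ns for DRAM, 0.1 ms for an SSD read, 10 ms for an HDD seek), not measurements from any particular system:

```python
# Rough comparison of the access times discussed above (assumed figures).
dram_access_s = 100e-9        # ~100 nanoseconds for primary storage
ssd_access_s  = 100e-6        # ~0.1 millisecond for a solid-state drive
hdd_seek_s    = 10e-3         # ~10 milliseconds for a hard disk seek

print(f"SSD read ~ {ssd_access_s / dram_access_s:,.0f}x slower than DRAM")
print(f"HDD seek ~ {hdd_seek_s / dram_access_s:,.0f}x slower than DRAM")
print(f"Memory accesses possible during one HDD seek: "
      f"{hdd_seek_s / dram_access_s:,.0f}")
```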
Other secondary storage technologies also exist, such as flash memory devices and magnetic tape.
Once the read/write head of an HDD reaches the proper placement and the data of interest, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks.
Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access.
Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory.
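A small, self-contained sketch of why block access matters: reading a scratch file in large contiguous chunks issues far fewer I/O requests than reading it 64 bytes at a time. Exact timings depend heavily on the operating system and its caches, so treat the numbers as illustrative only:

```python
import os
import time

path = "blocks.bin"
with open(path, "wb") as f:
    f.write(os.urandom(4_000_000))            # ~4 MB scratch file

def read_in_blocks(block_size):
    reads = 0
    with open(path, "rb", buffering=0) as f:  # unbuffered: each read() is an I/O request
        while f.read(block_size):
            reads += 1
    return reads

t0 = time.perf_counter()
small = read_in_blocks(64)                    # many tiny requests
t1 = time.perf_counter()
large = read_in_blocks(1024 * 1024)           # a few large, contiguous requests
t2 = time.perf_counter()

print(f"64-byte reads: {small:,} requests in {t1 - t0:.3f} s")
print(f"1 MiB reads  : {large:,} requests in {t2 - t1:.3f} s")
os.remove(path)
```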
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
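That metadata can be inspected directly. A minimal sketch using Python's os.stat follows; which fields are meaningful (for example, the numeric owner ID) varies by operating system:

```python
import os
import stat
import time

path = "example.txt"
with open(path, "w") as f:
    f.write("hello")

info = os.stat(path)                                  # metadata kept by the file system
print("size        :", info.st_size, "bytes")
print("owner uid   :", info.st_uid)                   # numeric owner (POSIX systems)
print("permissions :", stat.filemode(info.st_mode))   # e.g. -rw-r--r--
print("last access :", time.ctime(info.st_atime))
print("last modify :", time.ctime(info.st_mtime))
os.remove(path)
```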
Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.
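A toy sketch of the paging mechanism just described: a fixed number of "frames" stands in for primary storage, and when they are full the least recently used page is spilled to a simulated swap area and brought back on a later access. Real virtual memory systems are vastly more sophisticated; this only shows the bookkeeping:

```python
from collections import OrderedDict

class ToyVirtualMemory:
    """Keep a few 'pages' in RAM; spill the least recently used ones to swap."""

    def __init__(self, ram_frames=3):
        self.ram = OrderedDict()     # page number -> contents (fast)
        self.swap = {}               # page number -> contents (slow, on disk)
        self.ram_frames = ram_frames
        self.page_faults = 0

    def access(self, page, data=None):
        if page not in self.ram:                     # page fault
            self.page_faults += 1
            contents = self.swap.pop(page, data)     # bring page back in if swapped out
            if len(self.ram) >= self.ram_frames:     # RAM full: evict the LRU page
                victim, victim_data = self.ram.popitem(last=False)
                self.swap[victim] = victim_data      # write it to the swap area
            self.ram[page] = contents
        self.ram.move_to_end(page)                   # mark as recently used
        return self.ram[page]

vm = ToyVirtualMemory(ram_frames=3)
for p in [0, 1, 2, 0, 3, 0, 1]:                      # 3 frames, 4 distinct pages
    vm.access(p, data=f"page {p}")
print("page faults:", vm.page_faults)                # 5 faults for 7 accesses
print("swapped out:", sorted(vm.swap))               # [2]
```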
Tertiary storage:
See also: Nearline storage and Cloud storage
Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is:
- Online storage is immediately available for I/O.
- Nearline storage is not immediately available, but can be made online quickly without human intervention.
- Offline storage is not immediately available, and requires some human intervention to become online.
For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage:
Off-line storage is a computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, if a disaster such as a fire destroys the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery.
Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are most popular, and to a much lesser extent removable hard disk drives. In enterprise use, magnetic tape is predominant. Older examples include floppy disks, Zip disks, and punched cards.
Click on any of the following blue hyperlinks for more about Computer Data Storage:
Computer Applications Software
- YouTube Video: Ten Free Programs You Need on Your New PC!
- YouTube Video: What is an API?
- YouTube Video about Computer Basics: Understanding Applications
Application software (app for short) is software designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an application include the following:
- word processor,
- spreadsheet,
- accounting application,
- web browser,
- email client,
- media player,
- file viewer,
- aeronautical flight simulator,
- console game,
- or a photo editor.
The collective noun "application software" refers to all applications as a group. This contrasts with system software, which is mainly involved with running the computer.
Applications may be bundled with the computer and its system software or published separately, and may be coded as proprietary, open-source or university projects. Apps built for mobile platforms are called mobile apps.
In information technology, an application (app), application program or software application is a computer program designed to help people perform an activity.
An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general-purpose chores), and a programming tool (with which computer programs are created).
Depending on the activity for which it was designed, an application can manipulate text, numbers, audio, graphics, or a combination of these elements. Some application packages focus on a single task, such as word processing; others, called integrated software, include several applications.
User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, audio, graphics and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is.
The delineation between system software such as operating systems and application software is not exact, however, and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separable piece of application software.
As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player or microwave oven.
The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an app: see Application Portfolio Management.
The word "application", once used as an adjective, is not restricted to the "of or pertaining to application software" meaning. For example, concepts such as the following apply to all computer programs alike, not just application software:
- application programming interface (API),
- application server,
- application virtualization,
- application lifecycle management
- and portable application.
Click on any of the following blue hyperlinks for more about Applications Software:
Why Do We Need Supercomputers and Who Is Using Them? (by Michael Kan, July 10, 2019, PC Magazine)
and Supercomputers (Wikipedia)
- YouTube Video: Building a Working Human Brain on a Supercomputer
- YouTube Video: IBM Unveils Groundbreaking Quantum Computing System I (Fortune Magazine)
- YouTube Video: The new supercomputer behind the US nuclear arsenal
Why Do We Need Supercomputers and Who Is Using Them? (by Michael Kan July 10, 2019, PC Magazine)
Lawrence Livermore National Laboratory is home to several supercomputers, including Sierra, the world's second fastest. We stopped by to find out how these supercharged computers handle everything from virtual nuclear weapons tests to weather modeling.
As the US competes with China to build the fastest supercomputers, you might be wondering how these giant machines are being used.
A supercomputer can contain hundreds of thousands of processor cores and require an entire building to house and cool—not to mention millions of dollars to create and maintain them.
But despite these challenges, more and more are set to go online as the US and China develop new "exascale" supercomputers, which promise a five-fold performance boost compared to current leading systems.
So who needs all this computing power and why? To find out, PCMag visited the Lawrence Livermore National Laboratory in California, which is home to several supercomputers, including the world's second fastest, Sierra. It was there we learned how system engineers are maintaining the machines to serve scientific researchers but also test something you might not expect: nuclear weapons.
When you visit Sierra, you'll notice the words "classified" and "secret restricted data" posted on the supercomputer, which is made up of 240 server-like racks. The warnings exist because Sierra is processing data involving the US's nuclear stockpile, including how the weapons should detonate in the real world.
The US conducted its last live nuclear weapons test in 1992. Since then, the country has used supercomputers to help carry out the experiments virtually, and Sierra is part of that mission. The machine was completed last year primarily to aid the US government in monitoring and testing the effectiveness of the country's aging nuclear arsenal, which needs to be routinely maintained.
"The only way a deterrent works is if you know that it can function, and that your adversary also knows and believes it functions," said Adam Bertsch, a high performance computing systems engineer at the lab.
Not surprisingly, simulating a nuclear explosion requires a lot of math. Foundational principles in science can predict how particles will interact with each other under different conditions. The US government also possesses decades of data collected from real nuclear tests. Scientists have combined this information to create equations inside computer models, which can calculate how a nuclear explosion will go off and change over time.
Essentially, you're trying to map out a chain reaction. So to make the models accurate, they've been designed to predict a nuclear detonation at molecular levels using real-world physics. The challenge is that calculating what all these particles will do requires a lot of number-crunching.
Enter Sierra. The supercomputer has 190,000 CPU processor cores and 17,000 GPU cores.
All that computing power means it can take a huge task, like simulating nuclear fission, and break it down into smaller pieces. Each core can then process a tiny chunk of the simulation and communicate the results to the rest of the machine. The process will repeat over and over again as the supercomputer tries to model a nuclear explosion from one second to the next.
"You can do a full simulation of a nuclear device in the computer," Bertsch added. "You can find out that it works, exactly how well it works and what kind of effects would happen."
A supercomputer's ability to calculate and model particle interactions is why it's become such an important tool for researchers. In a sense, reactions are happening all around us. This can include the weather, how a star forms, or when human cells come in contact with a drug.
A supercomputer can simulate all these interactions. Scientists can then take the data to learn useful insights, like whether it'll rain tomorrow, if a new scientific theory is valid, or if an upcoming cancer treatment holds any promise.
The same technologies can also let industries explore countless new designs and figure out which ones are worth testing in the real world. It's why the lab has experienced huge demand for its two dozen supercomputers.
"No matter how much computing power we've had, people would use it up and ask for more," Bertsch said.
It also explains why the US government wants an exascale supercomputer. The extra computing power will allow scientists to develop more advanced simulations, like recreating even smaller particle interactions, which could pave the way for new research breakthroughs.
The exascale systems will also be able to complete current research projects in less time. "What you previously had to spend months doing might only take hours," Bertsch added.
Sierra is part of a classified network not connected to the public internet, which is available to about 1,000 approved researchers in affiliated scientific programs. About 3,000 people conduct research on unclassified supercomputers, which are accessible online provided you have a user account and the right login credentials. (Sorry, Bitcoin miners.)
"We have people buy into the computer at the acquisition time," Bertsch said. "The amount of money you put in correlates to the percentage of the machine you bought."
A scheduling system is used to ensure your "fair share" with the machine. "It tries to steer your usage toward the percentage you've been allocated," Bertsch added. "If you used less than your fair share over time, your priority goes up and you'll run sooner."
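A toy sketch of the fair-share idea Bertsch describes: each group has a target share of the machine, and the next job slot goes to whichever group is furthest below its target. The group names, allocations, and job sizes below are invented for the example.

```python
# Toy fair-share scheduler: pick the group furthest below its allocated share.
# Allocations and usage figures are made up for illustration.
allocations = {"astro": 0.50, "materials": 0.30, "climate": 0.20}
usage_hours = {"astro": 0.0, "materials": 0.0, "climate": 0.0}

def next_group():
    total = sum(usage_hours.values()) or 1.0
    # Deficit = promised share minus the share actually consumed so far.
    deficits = {g: allocations[g] - usage_hours[g] / total for g in allocations}
    return max(deficits, key=deficits.get)

for job_hours in [10] * 10:
    group = next_group()
    usage_hours[group] += job_hours

total = sum(usage_hours.values())
for g, h in usage_hours.items():
    print(f"{g:<10} ran {h:>4.0f} hours ({h / total:.0%} of the machine)")
```

Run for a while, the usage converges toward the allocated percentages (50/30/20 here), which is the behavior described in the quote.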
Simulations are always running. One supercomputer can run thousands of jobs at any given time. A machine can also process what's called a "hero run," or a single job that's so big the entire supercomputer is required to complete it in a reasonable time.
Keeping It Up And Running:
Sierra is a supercomputer, but the machine has largely been made with commodity parts. The processors, for example, are enterprise-grade chips from IBM and Nvidia, and the system itself runs Red Hat Enterprise Linux, a popular OS among server vendors.
"Back in the day, supercomputers were these monolithic big, esoteric blobs of hardware," said Robin Goldstone, the lab's high performance computing solution architect. "These days, even the world's biggest systems are essentially just a bunch of servers connected together."
To maximize its use, a system like Sierra needs to be capable of conducting different kinds of research. So the lab set out to create an all-purpose machine. But even a supercomputer isn't perfect. The lab estimates that every 12 hours Sierra will suffer an error that can involve a hardware malfunction. That may sound surprising, but think of it as owning 100,000 computers; failures and repairs are inevitable.
"The most common things that fail are probably memory DIMMs, power supplies, fans," Goldstone said. Fortunately, Sierra is so huge, it has plenty of capacity. The supercomputer is also routinely creating memory backups in the event an error disrupts a project.
"To some degree, this isn't exactly like a PC you have at home, but a flavor of that," Goldstone added. "Take the gamers who are obsessed with getting the fastest memory, and the fastest GPU, and that's the same thing we're obsessed with. The challenge with us is we have so many running at the same time."
Sierra itself sits in a 47,000-square-foot room, which is filled with the noise of fans keeping the hardware cool. A level below the machine is the building's water pumping system. Each minute, it can send thousands of gallons into pipes, which then feed into the supercomputer's racks and circulate water back out.
On the power front, the lab has been equipped to supply 45 megawatts—or enough electricity for a small city. About 11 of those megawatts have been delegated to Sierra. However, a supercomputer's power consumption can occasionally spark complaints from local energy companies. When an application crashes, a machine's energy demands can suddenly drop several megawatts.
The energy supplier "does not like that at all. Because they have to shed load. They are paying for power," Goldstone said. "They've called us up on the phone and said, 'Can you not do that anymore?'"
The Exascale Future:
The Lawrence Livermore National Lab is also home to another supercomputer called Sequoia, which briefly reigned as the world's top system back in 2012. But the lab plans to retire it later this year to make way for a bigger and better supercomputer, called El Capitan, which is among the exascale supercomputers the US government has been planning.
Expect it to go online in 2023. But it won't be alone. El Capitan will join two other exascale systems, which the US is spending over $1 billion to construct. Both will be completed in 2021 at separate labs in Illinois and Tennessee.
"At some point, I keep thinking, 'Isn't it fast enough? How much faster do we really need these computers to be?'" Goldstone said. "But it's more about being able to solve problems faster or study problems at higher resolution, so we can really see something at the molecular levels."
But the supercomputing industry will eventually need to innovate. It's simply unsustainable to continue building bigger machines that eat up more power and take more physical room.
"We're pushing the limits of what today's technology can do," she said. "There's going to have to be advances in other areas beyond traditional silicon-based computing chips to take us to that next level."
In the meantime, the lab has been working with vendors such as IBM and Nvidia to resolve immediate bottlenecks, including improving a supercomputer's network architecture so it can quickly communicate across the different clusters, as well as component reliability.
"Processor speed just doesn't matter anymore," she added. "As fast as the processors are, we're constrained by memory bandwidth."
The lab will announce more details about El Capitan in the future. As for the computer it's replacing, Sequoia, the system is headed for oblivion.
For security purposes, the lab plans to grind up every piece of the machine and recycle its remains.
Supercomputers can end up running classified government data, so it's vital any trace of that information is completely purged—even if it means turning the machine into scrap. That may sound extreme, but errors can be made when trying to delete the data virtually, so the lab needs to be absolutely sure the data is permanently gone.
[End of Article]
___________________________________________________________________________
Wikipedia below:
A supercomputer is a computer with a high level of performance compared to a general-purpose computer.
The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Since 2017, there have been supercomputers that can perform over a hundred quadrillion FLOPS (100 petaFLOPS).
Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Additional research is being conducted in China, the United States, the European Union, Taiwan, and Japan to build even faster and more powerful exascale supercomputers.
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including the following:
- quantum mechanics,
- weather forecasting,
- climate research,
- oil and gas exploration,
- molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals),
- and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion).
Throughout their history, they have been essential in the field of cryptanalysis.
Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran faster than their more general-purpose contemporaries.
Through the 1960s, they began to add increasing amounts of parallelism with one to four processors being typical.
From the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976.
Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors became the norm.
The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted dominance of the field, and later through a variety of technology companies.
Japan made major strides in the field in the 1980s and 1990s, but since then China has become increasingly active in the field. As of November 2018, the fastest supercomputer on the TOP500 list is Summit, in the United States, with a LINPACK benchmark score of 143.5 PFLOPS, followed by Sierra, which it leads by around 48.86 PFLOPS.
The US has five of the top 10 and China has two. As of June 2018, the combined performance of all supercomputers on the list had broken the 1 exaFLOPS mark.
Click on any of the following blue hyperlinks for more about Supercomputers:
- History
- Special purpose supercomputers
- Energy usage and heat management
- Software and system management
- Distributed supercomputing
- HPC clouds
- Performance measurement
- Largest Supercomputer Vendors according to the total Rmax (GFLOPS) operated
- Applications
- Development and trends
- In fiction
- See also:
- ACM/IEEE Supercomputing Conference
- ACM SIGHPC
- High-performance technical computing
- Jungle computing
- Nvidia Tesla Personal Supercomputer
- Parallel computing
- Supercomputing in China
- Supercomputing in Europe
- Supercomputing in India
- Supercomputing in Japan
- Testing high-performance computing applications
- Ultra Network Technologies
- Quantum computing
- A Tunable, Software-based DRAM Error Detection and Correction Library for HPC
- Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing
Lenovo
- YouTube Video: Lenovo Tech Life and IFA 2019 Highlights
- YouTube Video: Rube Goldberg Experience at MWC 2019
- YouTube Video: Lenovo Unboxed: Yoga A940 All-In-One PC
Lenovo Group Limited, often shortened to Lenovo, is a Chinese multinational technology company with headquarters in Beijing.
Lenovo designs, develops, manufactures, and sells:
- personal computers,
- tablet computers,
- smartphones,
- workstations,
- servers,
- electronic storage devices,
- IT management software,
- and smart televisions.
Lenovo is the world's largest personal computer vendor by unit sales, as of March 2019. It markets the ThinkPad and ThinkBook business lines of notebook computers, IdeaPad, Yoga and Legion consumer lines of notebook laptops, and the IdeaCentre and ThinkCentre lines of desktops.
Lenovo has operations in more than 60 countries and sells its products in around 160 countries. Lenovo's principal facilities are in Beijing and Morrisville (North Carolina, U.S.), with research centers in Beijing, Shanghai, Shenzhen, Xiamen, Chengdu, Nanjing, and Wuhan in China, as well as in Yamato (Kanagawa Prefecture, Japan) and Morrisville.
Lenovo also has a joint venture with NEC, Lenovo NEC Holdings, which produces personal computers for the Japanese market.
Lenovo was founded in Beijing in November 1984 as Legend and was incorporated in Hong Kong in 1988. Lenovo acquired IBM's personal computer business in 2005 and agreed to acquire its Intel-based server business in 2014.
Lenovo entered the smartphone market in 2012 and as of 2014 was the largest vendor of smartphones in Mainland China. In 2014, Lenovo acquired the mobile phone handset maker Motorola Mobility from Google.
Lenovo is listed on the Hong Kong Stock Exchange and is a constituent of the Hang Seng China-Affiliated Corporations Index, often referred to as "Red Chips".
Click on any of the following blue hyperlinks for more about Lenovo:
- History
- Name
- Products and services
- Operations
- Corporate affairs
- Marketing and sponsorships
- Controversy and security issues
- See also:
Software Industry, including a List of the Largest Software Companies, the Largest Information Technology Companies, and The Largest Internet Companies
TOP: (L) Microsoft Office Software Suite, (R) Pros and Cons of Amazon Prime (Consumer Reports 6/14/2019)
BOTTOM: Creating a Store Locator on Google Maps
- YouTube Video: The Beginner's Guide to Excel - Excel Basics Tutorial
- YouTube Video: What Is Amazon Prime and Is It Worth It?
- YouTube Video: How does Google Maps Work?
- Click here for a List of the Largest Software Companies based on Revenues.
- Click here for a List of the Largest Technology Companies by revenue
- Click here for a List of the Largest Internet Companies based on Revenues
The software industry includes businesses that develop, maintain, and publish software using different business models, mainly either "license/maintenance based" (on-premises) or "cloud based" (such as SaaS, PaaS, IaaS, MaaS, AaaS, etc.). The industry also includes software services, such as training, documentation, consulting, and data recovery.
History:
The word "software" was coined as a prank as early as 1953, but did not appear in print until the 1960s. Before this time, computers were programmed either by customers, or the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company in 1955.
The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers.
Some were distributed freely between users of a particular machine for no charge. Others were done on a commercial basis, and other firms such as Computer Sciences Corporation (founded in 1959) started to grow.
Other influential or typical software companies begun in the early 1960s included:
- Advanced Computer Techniques,
- Automatic Data Processing,
- Applied Data Research,
- and Informatics General.
The computer/hardware makers started bundling operating systems, systems software and programming environments with their machines.
When Digital Equipment Corporation (DEC) brought a relatively low-priced minicomputer to market, it brought computing within the reach of many more companies and universities worldwide, and it spawned great innovation in terms of new, powerful programming languages and methodologies. New software was built for minicomputers, so other manufacturers, including IBM, quickly followed DEC's example, resulting in the IBM AS/400 among others.
The industry expanded greatly with the rise of the personal computer ("PC") in the mid-1970s, which brought desktop computing to the office worker for the first time. In the following years, it also created a growing market for games, applications, and utilities. DOS, Microsoft's first operating system product, was the dominant operating system at the time.
In the early years of the 21st century, another successful business model has arisen for hosted software, called software-as-a-service, or SaaS; this was at least the third time this model had been attempted.
From the point of view of producers of some proprietary software, SaaS reduces the concerns about unauthorized copying, since it can only be accessed through the Web, and by definition no client software is loaded onto the end user's PC.
Size of the industry:
According to industry analyst Gartner, the size of the worldwide software industry in 2013 was US$407.3 billion, an increase of 4.8% over 2012. As in past years, the four largest software vendors were, in order, Microsoft, Oracle Corporation, IBM, and SAP.
Mergers and acquisitions:
The software industry has been subject to a high degree of consolidation over the past couple of decades. Between 1995 and 2018, around 37,039 mergers and acquisitions were announced, with a total known value of US$1,166 billion.
The highest number and value of deals were set in 2000, during the height of the dot-com bubble, with 2,674 transactions valued at US$105 billion. In 2017, 2,547 deals were announced, valued at US$111 billion. Approaches to successfully acquire and integrate software companies are available.
Business models within the software industry:
Business models of software companies have been widely discussed. Network effects in software ecosystems, networks of companies, and their customers are an important element in the strategy of software companies.
See also:
- Software engineering
- World's largest software companies
- Function point
- Software development effort estimation
- Comparison of development estimation software