Copyright © 2015 Bert N. Langford (Images may be subject to copyright. Please send feedback)
Welcome to Our Generation USA!
On this Page We Cover
The Best of The Internet.
Note, however, that the following topics are covered on separate pages:
For Social Networking, click here
For Web-based Television, click here
For Internet Security, click here
Google: Search Engine and Multinational Technology Company.
YouTube Video: Introducing Google Gnome Game
YouTube Video: (Google) Waymo's fully self-driving cars are here
YouTube Video: Made by Google 2017 | Event highlights
Google LLC is an American multinational technology company that specializes in Internet-related services and products. These include the following:
- online advertising technologies,
- search,
- cloud computing,
- software,
- and hardware.
Google was founded in 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University, in California. Together, they own about 14 percent of its shares, and control 56 percent of the stockholder voting power through supervoting stock. They incorporated Google as a privately held company on September 4, 1998.
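The founders' outsized voting power reflects Google's dual-class share structure, in which founder-held Class B shares carry ten votes each while ordinary Class A shares carry one. The short Python sketch below uses made-up share counts (not Google's actual capitalization) simply to illustrate how a roughly 14 percent economic stake can translate into majority voting control:

    # Illustrative only: hypothetical share counts, not Google's real figures.
    CLASS_A_VOTES = 1    # one vote per ordinary share
    CLASS_B_VOTES = 10   # ten votes per founder "supervoting" share

    founders_class_b = 14_000_000   # founders' stake (~14% of shares in this example)
    public_class_a = 86_000_000     # everyone else

    founder_equity = founders_class_b / (founders_class_b + public_class_a)
    founder_votes = founders_class_b * CLASS_B_VOTES
    public_votes = public_class_a * CLASS_A_VOTES
    founder_voting_power = founder_votes / (founder_votes + public_votes)

    print(f"Founders' economic stake: {founder_equity:.0%}")        # 14%
    print(f"Founders' voting power:   {founder_voting_power:.0%}")  # ~62%

With these simplified numbers the founders end up with about 62 percent of the votes; the real figure (56 percent) differs because Google's actual mix of share classes and holders is more complicated.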
An initial public offering (IPO) took place on August 19, 2004, and Google moved to its new headquarters in Mountain View, California, nicknamed the Googleplex.
In August 2015, Google announced plans to reorganize its various interests as a conglomerate called Alphabet Inc. Google, Alphabet's leading subsidiary, continues to be the umbrella company for Alphabet's Internet interests. Upon completion of the restructure, Sundar Pichai was appointed CEO of Google; he replaced Larry Page, who became CEO of Alphabet.
The company's rapid growth since incorporation has triggered a chain of products, acquisitions, and partnerships beyond Google's core search engine (Google Search).
It offers services designed for:
- work and productivity (Google Docs, Sheets, and Slides),
- email (Gmail/Inbox),
- scheduling and time management (Google Calendar),
- cloud storage (Google Drive),
- social networking (Google+),
- instant messaging and video chat (Google Allo/Duo/Hangouts),
- language translation (Google Translate),
- mapping and turn-by-turn navigation (Google Maps/Waze/Earth/Street View),
- video sharing (YouTube),
- notetaking (Google Keep),
- and photo organizing and editing (Google Photos).
The company leads the development of the Android mobile operating system, the Google Chrome web browser, and Chrome OS, a lightweight operating system based on the Chrome browser.
Google has moved increasingly into hardware; from 2010 to 2015, it partnered with major electronics manufacturers in the production of its Nexus devices, and in October 2016, it released multiple hardware products, including the following:
- Google Pixel smartphone,
- Home smart speaker,
- Wifi mesh wireless router,
- and Daydream View virtual reality headset.
The new hardware chief, Rick Osterloh, stated: "a lot of the innovation that we want to do now ends up requiring controlling the end-to-end user experience". Google has also experimented with becoming an Internet carrier.
In February 2010, it announced Google Fiber, a fiber-optic infrastructure that was installed in Kansas City; in April 2015, it launched Project Fi in the United States, combining Wi-Fi and cellular networks from different providers; and in 2016, it announced the Google Station initiative to make public Wi-Fi available around the world, with initial deployment in India.
Alexa, a company that monitors commercial web traffic, lists Google.com as the most visited website in the world. Several other Google services also figure in the top 100 most visited websites, including YouTube and Blogger. Google is the most valuable brand in the world as of 2017, but has received significant criticism involving issues such as privacy concerns, tax avoidance, antitrust, censorship, and search neutrality.
Google's mission statement, from the outset, was "to organize the world's information and make it universally accessible and useful", and its unofficial slogan was "Don't be evil". In October 2015, the motto was replaced in the Alphabet corporate code of conduct by the phrase "Do the right thing".
Click on any of the following blue hyperlinks for more about the company "Google"
- History
- Products and services
- Corporate affairs and culture
- Criticism and controversy
- See also:
- AngularJS
- Comparison of web search engines
- Don't Be Evil
- Google (verb)
- Google Balloon Internet
- Google Catalogs
- Google China
- Google bomb
- Google Chrome Experiments
- Google Get Your Business Online
- Google logo
- Google Maps
- Google platform
- Google Street View
- Google tax
- Google Ventures – venture capital fund
- Google X
- Life sciences division of Google X
- Googlebot – web crawler
- Googlization
- List of Google apps for Android
- List of mergers and acquisitions by Alphabet
- Apple, Inc.
- Outline of Google
- Reunion
- Ungoogleable
- Surveillance capitalism
- Calico
- Official website
- Google website at the Wayback Machine (archived November 11, 1998)
- Google at CrunchBase
- Google companies grouped at OpenCorporates
- Business data for Google, Inc.:
YouTube, Including a List of the Most Watched YouTube Videos
YouTube Video: HOW TO MAKE VIDEOS AND START A YOUTUBE CHANNEL
Video Sharing Website Ranked #2 by Alexa (#3 by SimilarWeb)
Click here for a List of the Most Watched YouTube Videos.
YouTube is an American video-sharing website headquartered in San Bruno, California. The service was created by three former PayPal employees — Chad Hurley, Steve Chen, and Jawed Karim — in February 2005. Google bought the site in November 2006 for US$1.65 billion; YouTube now operates as one of Google's subsidiaries.
YouTube allows users to upload, view, rate, share, add to favorites, report, comment on videos, and subscribe to other users. It uses WebM, H.264/MPEG-4 AVC, and Adobe Flash Video technology to display a wide variety of user-generated and corporate media videos.
Available content includes the following:
- video clips,
- TV show clips,
- music videos,
- short and documentary films,
- audio recordings,
- movie trailers,
- video blogging,
- short original videos,
- and educational videos.
Most of the content on YouTube has been uploaded by individuals, but media corporations including CBS, the BBC, Vevo, and Hulu offer some of their material via YouTube as part of the YouTube partnership program.
Unregistered users can only watch videos on the site, while registered users are permitted to upload an unlimited number of videos and add comments to videos. Videos deemed potentially offensive are available only to registered users affirming themselves to be at least 18 years old.
YouTube earns advertising revenue from Google AdSense, a program which targets ads according to site content and audience. The vast majority of its videos are free to view, but there are exceptions, including subscription-based premium channels, film rentals, as well as YouTube Red, a subscription service offering ad-free access to the website and access to exclusive content made in partnership with existing users.
As of February 2017, there are more than 400 hours of content uploaded to YouTube each minute, and one billion hours of content are watched on YouTube every day. As of April 2017, the website is ranked as the second most popular site in the world by Alexa Internet, a web traffic analysis company.
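A quick back-of-the-envelope calculation using only the two figures quoted above shows just how lopsided the upload-to-viewing ratio is:

    # Back-of-the-envelope math using the figures quoted above (February 2017).
    upload_rate_hours_per_minute = 400        # hours of video uploaded per minute
    minutes_per_day = 24 * 60

    uploaded_hours_per_day = upload_rate_hours_per_minute * minutes_per_day
    watched_hours_per_day = 1_000_000_000     # one billion hours watched per day

    print(f"Uploaded per day: {uploaded_hours_per_day:,} hours")    # 576,000 hours
    print(f"Watched per day:  {watched_hours_per_day:,} hours")
    print(f"Hours watched per hour uploaded: "
          f"{watched_hours_per_day / uploaded_hours_per_day:,.0f}")  # about 1,736

In other words, roughly 576,000 hours of new video arrive every day, and for every hour uploaded, viewers collectively watch about 1,700 hours.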
Click on any of the following blue hyperlinks for more about YouTube Videos:
- Company history
- Features
- Video technology
- Playback
- Uploading
- Quality and formats
- 3D videos
- 360° videos
- User features
- Content accessibility
- Localization
- YouTube Red
- YouTube TV
- Social impact
- Revenue
- Community policy
- Censorship and filtering
- Music Key licensing
- NSA Prism program
- April Fools
- CNN-YouTube presidential debates
- List of YouTubers
- BookTube
- Ouellette v. Viacom International Inc.
- Reply Girls
- YouTube Awards
- YouTube Instant
- YouTube Live
- YouTube Multi Channel Network
- YouTube Symphony Orchestra
- Viacom International Inc. v. YouTube, Inc.
- Alternative media
- Comparison of video hosting services
- List of Internet phenomena
- List of video hosting services
- YouTube on Blogger
- Press room – YouTube
- YouTube – Google Developers
- Haran, Brady; Hamilton, Ted. "Why do YouTube views freeze at 301?". Numberphile. Brady Haran.
- Dickey, Megan Rose (February 15, 2013). "The 22 Key Turning Points in the History of YouTube". Business Insider. Axel Springer SE. Retrieved March 25, 2017.
- Are Youtubers Revolutionizing Entertainment? (June 6, 2013), video produced for PBS by Off Book.
- First YouTube video ever
Facebook and its owner Meta Platforms
Pictured below: Infographic: Timeline of Facebook data scandal
Facebook is an online social media and social networking service owned by American company Meta Platforms (see below).
Founded in 2004 by Mark Zuckerberg with fellow Harvard College students and roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, its name comes from the face book directories often given to American university students.
Membership was initially limited to Harvard students, gradually expanding to other North American universities and, since 2006, anyone over 13 years old.
As of July 2022, Facebook claimed 2.93 billion monthly active users and ranked third worldwide among the most visited websites. It was the most downloaded mobile app of the 2010s.
Facebook can be accessed from devices with Internet connectivity, such as personal computers, tablets and smartphones. After registering, users can create a profile revealing information about themselves. They can post text, photos and multimedia which are shared with any other users who have agreed to be their "friend" or, with different privacy settings, publicly.
Users can also communicate directly with each other with Facebook Messenger, join common-interest groups, and receive notifications on the activities of their Facebook friends and the pages they follow.
The subject of numerous controversies, Facebook has often been criticized over issues such as user privacy (as with the Cambridge Analytica data scandal), political manipulation (as with the 2016 U.S. elections) and mass surveillance.
Posts originating from the Facebook page of Breitbart News, a media organization previously affiliated with Cambridge Analytica, are currently among the most widely shared political content on Facebook.
Facebook has also been subject to criticism over psychological effects such as addiction and low self-esteem, and various controversies over content such as fake news, conspiracy theories, copyright infringement, and hate speech.
Commentators have accused Facebook of willingly facilitating the spread of such content, as well as exaggerating its number of users to appeal to advertisers.
Click on any of the following blue hyperlinks for more about Facebook:
- History
- 2003–2006: Thefacebook, Thiel investment, and name change
- 2006–2012: Public access, Microsoft alliance, and rapid growth
- 2012–2013: IPO, lawsuits, and one billion active users
- 2013–2014: Site developments, A4AI, and 10th anniversary
- 2015–2020: Algorithm revision; fake news
- 2020–present: FTC lawsuit, corporate re-branding, shut down of facial recognition technology, ease of policy
- Website
- Reception
- Criticisms and controversies
- Impact
- See also:
Meta Platforms, Inc., doing business as Meta and formerly named Facebook, Inc., and TheFacebook, Inc., is an American multinational technology conglomerate based in Menlo Park, California.
The company owns Facebook, Instagram, and WhatsApp, among other products and services. Meta was once one of the world's most valuable companies, but as of 2022 is not one of the top twenty biggest companies in the United States.
Meta is considered one of the Big Five American information technology companies, alongside Alphabet (Google), Amazon, Apple, and Microsoft.
As of 2022, it is the least profitable of the five.
Meta's products and services include Facebook, Messenger, Facebook Watch, and Meta Portal. It has also acquired Oculus, Giphy, Mapillary, Kustomer, Presize and has a 9.99% stake in Jio Platforms. In 2021, the company generated 97.5% of its revenue from the sale of advertising.
In October 2021, the parent company of Facebook changed its name from Facebook, Inc., to Meta Platforms, Inc., to "reflect its focus on building the metaverse". According to Meta, the "metaverse" refers to the integrated environment that links all of the company's products and services.
Click on any of the following blue hyperlinks for more about Meta Platforms, Inc.:
- History
- Mergers and acquisitions
- Lobbying
- Lawsuits
- Structure
- Revenue
- Facilities
- Reception
- See also:
- Big Tech
- Criticism of Facebook
- Facebook–Cambridge Analytica data scandal
- 2021 Facebook leak
- Meta AI
- The Social Network
- Official website
- Meta Platforms companies grouped at OpenCorporates
- Business data for Meta Platforms, Inc.:
Yahoo!
Portal and media Website Ranked #5 by both Alexa and SimilarWeb.
YouTube Video (Yahoo!): What It's Really Like To Buy A Tesla
Yahoo Inc. (styled as Yahoo!) is an American multinational technology company headquartered in Sunnyvale, California.
It is globally known for its Web portal, search engine Yahoo! Search, and related services, including:
- Yahoo! Directory,
- Yahoo! Mail,
- Yahoo! News,
- Yahoo! Finance,
- Yahoo! Groups,
- Yahoo! Answers,
- advertising,
- online mapping,
- video sharing,
- fantasy sports
- and its social media website.
It is one of the most popular sites in the United States. According to third-party web analytics providers Alexa and SimilarWeb, Yahoo! is the highest-read news and media website, with over 7 billion readers per month, and was the fourth most visited website globally as of June 2015.
According to news sources, roughly 700 million people visit Yahoo websites every month. Yahoo itself claims it attracts "more than half a billion consumers every month in more than 30 languages."
Yahoo was founded by Jerry Yang and David Filo in January 1994 and was incorporated on March 2, 1995. Marissa Mayer, a former Google executive, serves as CEO and President of the company.
In January 2015, the company announced that it planned to spin off its stake in Alibaba Group into a separately listed company. In December 2015, it reversed this decision, opting instead to spin off its internet business as a separate company.
Amazon including its founder, Jeff Bezos
YouTube Video: Billionaire Jeff Bezos on Starting Amazon
Amazon.com, Inc., doing business as Amazon, is an American electronic commerce and cloud computing company based in Seattle, Washington, that was founded by Jeff Bezos [see next topic below] on July 5, 1994.
The tech giant is the largest Internet retailer in the world as measured by revenue and market capitalization, and second largest after Alibaba Group in terms of total sales.
The Amazon.com website started as an online bookstore and later diversified to sell the following:
- video downloads/streaming,
- MP3 downloads/streaming,
- audiobook downloads/streaming,
- software,
- video games,
- electronics,
- apparel,
- furniture,
- food,
- toys,
- and jewelry.
The company also owns/produces the following:
- a publishing arm, Amazon Publishing,
- a film and television studio, Amazon Studios,
- and consumer electronics lines, including Kindle e-readers, Fire tablets, Fire TV, and Echo devices.
Amazon is the world's largest provider of cloud infrastructure services (IaaS and PaaS) through its AWS subsidiary. Amazon also sells certain low-end products under its in-house brand AmazonBasics.
Amazon has separate retail websites for the United States, the United Kingdom and Ireland, France, Canada, Germany, Italy, Spain, Netherlands, Australia, Brazil, Japan, China, India, Mexico, Singapore, and Turkey. In 2016, Dutch, Polish, and Turkish language versions of the German Amazon website were also launched. Amazon also offers international shipping of some of its products to certain other countries.
In 2015, Amazon surpassed Walmart as the most valuable retailer in the United States by market capitalization.
Amazon is:
- the third most valuable public company in the United States (behind Apple and Microsoft),
- the largest Internet company by revenue in the world,
- and after Walmart, the second largest employer in the United States.
In 2017, Amazon acquired Whole Foods Market for $13.4 billion, which vastly increased Amazon's presence as a brick-and-mortar retailer. The acquisition was interpreted by some as a direct attempt to challenge Walmart's traditional retail stores.
In 2018, for the first time, Jeff Bezos released in Amazon's shareholder letter the number of Amazon Prime subscribers, which is 100 million worldwide.
In 2018, Amazon.com contributed US$1 million to the Wikimedia Endowment.
In November 2018, Amazon announced it would be splitting its second headquarters project between two cities. They are currently in the finalization stage of the process.
Click on any of the following blue hyperlinks for more about Amazon, Inc.:
- History
- Choosing a name
- Online bookstore and IPO
- 2000s
- 2010 to present
- Amazon Go
- Amazon 4-Star
- Mergers and acquisitions
- Board of directors
- Merchant partnerships
- Products and services
- Subsidiaries
- Website at www.amazon.com
- Amazon sales rank
- Technology
- Multi-level sales strategy
- Finances
- October 2018 wage increase
- Controversies
- Notable businesses founded by former employees
- See also:
- Amazon Breakthrough Novel Award
- Amazon Flexible Payments Service
- Amazon Marketplace
- Amazon Standard Identification Number (ASIN)
- List of book distributors
- Statistically improbable phrases – Amazon.com's phrase extraction technique for indexing books
- Amazon (company) companies grouped at OpenCorporates
- Business data for Amazon.com, Inc.: Google Finance
Jeffrey Preston Bezos (né Jorgensen; born January 12, 1964) is an American technology and retail entrepreneur, investor, electrical engineer, computer scientist, and philanthropist, best known as the founder, chairman, and chief executive officer of Amazon.com, the world's largest online shopping retailer.
The company began as an Internet merchant of books and expanded to a wide variety of products and services, most recently video and audio streaming. Amazon.com is currently the world's largest Internet sales company on the World Wide Web, as well as the world's largest provider of cloud infrastructure services, which is available through its Amazon Web Services arm.
Bezos' other diversified business interests include aerospace and newspapers. He is the founder of the aerospace manufacturer Blue Origin (founded in 2000), which began test flights to space in 2015, with plans for commercial suborbital human spaceflight beginning in 2018.
In 2013, Bezos purchased The Washington Post newspaper. A number of other business investments are managed through Bezos Expeditions.
When the financial markets opened on July 27, 2017, Bezos briefly surpassed Bill Gates on the Forbes list of billionaires to become the world's richest person, with an estimated net worth of just over $90 billion. He lost the title later in the day when Amazon's stock dropped, returning him to second place with a net worth just below $90 billion.
On October 27, 2017, Bezos again surpassed Gates on the Forbes list as the richest person in the world. Bezos's net worth surpassed $100 billion for the first time on November 24, 2017 after Amazon's share price increased by more than 2.5%.
Click on any of the following blue hyperlinks for more about Jeff Bezos:
- Early life and education
- Business career
- Philanthropy
- Recognition
- Criticism
- Personal life
- Politics
- See also:
Twitter
Ranked #10 by Alexa and #11 by SimilarWeb
YouTube Video: How to Use Twitter
Twitter is an online social networking service that enables users to send and read short 280-character messages called "tweets".
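As a minimal sketch of that limit (Twitter's real counting rules weight URLs and some characters differently), checking whether a message fits in a single tweet might look like this in Python:

    # Minimal sketch of the 280-character limit; Twitter's actual counting
    # treats URLs and certain Unicode characters specially.
    TWEET_LIMIT = 280

    def fits_in_tweet(text: str) -> bool:
        """Return True if the text fits within a single 280-character tweet."""
        return len(text) <= TWEET_LIMIT

    message = "Hello, world!"
    print(fits_in_tweet(message), "-", len(message), "characters")   # True - 13 characters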
Registered users can read and post tweets, but those who are unregistered can only read them. Users access Twitter through the website interface, SMS or mobile device app.
Twitter Inc. is based in San Francisco and has more than 25 offices around the world. Twitter was created in March 2006 by Jack Dorsey, Evan Williams, Biz Stone, and Noah Glass and launched in July 2006.
The service rapidly gained worldwide popularity, with more than 100 million users posting 340 million tweets a day in 2012. The service also handled 1.6 billion search queries per day.
In 2013, Twitter was one of the ten most-visited websites and has been described as "the SMS of the Internet". As of May 2015, Twitter has more than 500 million users, out of which more than 332 million are active.
List of Most Subscribed Users on YouTube
YouTube Music Video by Taylor Swift performing 22
Pictured: LEFT: PewDiePie (43 Million Followers) and RIGHT: Rihanna (20 Million Followers)
This list of the most subscribed users on YouTube contains representations of the channels with the most subscribers on the video platform YouTube.
The ability to "subscribe" to a user's videos was added to YouTube by late October 2005, The "most subscribed" list on YouTube began being listed by a chart on the site by May 2006, at which time Smosh was #1 with fewer than 3,000 subscribers. As of April 5, 2016, the most subscribed user is PewDiePie, with over 42 million subscribers. The PewDiePie channel has held the peak position since December 22, 2013 (2 years, 3 months and 14 days), when it surpassed YouTube's Spotlight channel.
This list depicts the 25 most subscribed channels on YouTube as of March 10, 2016. This list omits "channels" and instead only includes "users". A "user" is defined as a channel that has released videos. "Channels" that have released zero videos, such as #Music, #Gaming, or #Sports, are not included on this list, even if they have more subscribers than the users on this list. Additionally, these subscriber counts are approximations.
Reactions
In late 2006 when Peter Oakley aka Geriatric1927 became most subscribed, a number of TV channels wanted to interview him on his rise to fame. The Daily Mail and TechBlog did an article about him and his success. In 2009, the FRED channel was the first channel to have over one million subscribers.
Following the third time that the user Smosh became most subscribed, Ray William Johnson collaborated with the duo.
A flurry of top YouTubers including Ryan Higa, Shane Dawson, Felix Kjellberg, Michael Buckley, Kassem Gharaibeh, The Fine Brothers, and Johnson himself, congratulated the duo shortly after they surpassed Johnson as the most subscribed channel.
Following Felix Kjellberg's positioning at the top of YouTube, Variety heavily criticized the Swede's videos.
Vevo
YouTube Video: Lenny Kravitz - Lenny Kravitz Talks ‘Raise Vibration,’ And Why Love Still Rules
Vevo is a multinational video hosting service owned and operated by a joint venture of Universal Music Group (UMG), Google, Sony Music Entertainment (SME), and Abu Dhabi Media, and based in New York City.
Launched on December 8, 2009, Vevo hosts videos syndicated across the web, with Google and Vevo sharing the advertising revenue.
Vevo offers music videos from two of the "big three" major record labels, UMG and SME. EMI also licensed its library for Vevo shortly before launch; it was acquired by UMG in 2012.
Warner Music Group was initially reported to be considering hosting its content on the service, but formed an alliance with rival MTV Networks (now Viacom Media Networks). In August 2015, Vevo expressed interest in licensing music from Warner Music Group.
The concept for Vevo was described as being a streaming service for music videos (similar to the streaming service Hulu, a streaming service for movies and TV shows after they air), with the goal being to attract more high-end advertisers.
The site's other revenue sources include a merchandise store and referral links to purchase viewed songs on Amazon Music and iTunes.
UMG acquired the domain name vevo.com on November 20, 2008. SME reached a deal to add its content to the site in June 2009.
The site went live on December 8, 2009, and that same month became the number one most visited music site in the United States, overtaking MySpace Music.
In June 2012, Vevo launched its Certified awards, which honors artists with at least 100 million views on Vevo and its partners (including YouTube) through special features on the Vevo website.
Vevo TV:
On March 12, 2013, Vevo launched Vevo TV, an advertising-supported internet television channel running 24 hours a day, featuring blocks of music videos and specials. The channel is only available to viewers in North America and Germany, with geographical IP address blocking being used to enforce the restriction. Vevo has planned launches in other countries.
After revamping its website, Vevo TV later branched off into three separate networks: Hits, Flow (hip hop and R&B), and Nashville (country music).
Availability:
Vevo is available in Belgium, Brazil, Canada, Chile, France, Germany, Ireland, Italy, Mexico, the Netherlands, New Zealand, Poland, Spain, the United Kingdom, and the United States. The website was scheduled to go worldwide in 2010, but as of January 1, 2016, it was still not available outside these countries.
Vevo's official blog cited licensing issues for the delay in the worldwide rollout. Most of Vevo's videos on YouTube are viewable by users in other countries, while others will produce the message "The uploader has not made this video available in your country."
The Vevo service in the United Kingdom and Ireland was launched on April 26, 2011.
On April 16, 2012, Vevo was launched in Australia and New Zealand by MCM Entertainment. On August 14, 2012, Brazil became the first Latin American country to have the service. It was expected to be launched in six more European and Latin American countries in 2012. Vevo launched in Spain, Italy, and France on November 15, 2012. Vevo launched in the Netherlands on April 3, 2013, and on May 17, 2013, also in Poland.
On September 29, 2013, Vevo updated its iOS application to coincide with the service's launch in Germany. On April 30, 2014, Vevo was launched in Mexico.
Vevo is also available for a range of platforms including Android, iOS, Windows Phone, Windows 8, Fire OS, Google TV, Apple TV, Boxee, Roku, Xbox 360, PlayStation 3, and PlayStation 4.
Edited content:
Versions of videos on Vevo with explicit content such as profanity may be edited, according to a company spokesperson, "to keep everything clean for broadcast, 'the MTV version.'" This allows Vevo to make their network more friendly to advertising partners such as McDonald's.
Vevo has stated that it does not have specific policies or a list of words that are forbidden. Some explicit videos are provided with uncut versions in addition to the edited version.
There is no formal rating system in place, aside from classifying videos as explicit or non-explicit, but discussions are taking place to create a rating system that allows users and advertisers to choose the level of profanity they are willing to accept.
24-Hour Vevo Record:
The 24-Hour Vevo Record, commonly referred to as the Vevo Record, is the record for the most views a music video associated with Vevo has received within 24 hours of its release.
The video that currently holds this record is "Hello" by Adele with 27.7 million views.
In 2012, Nicki Minaj's "Stupid Hoe" became one of the first Vevo music videos to receive a significant amount of media attention upon its release day, during which it accumulated 4.8 million views. The record has consistently been kept track of by Vevo ever since.
Total views of a video are counted from across all of Vevo's platforms, including YouTube, Yahoo! and other syndication partners.
On 14 April 2013, Psy's "Gentleman" unofficially broke the record by reaching 38.4 million views in its first 24 hours. However, this record is not acknowledged by Vevo because it was only associated with them four days after its release.
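Put side by side, the two 24-hour totals mentioned in this section imply very different average view rates; here is a rough Python comparison using only the figures quoted above:

    # Rough per-minute view rates implied by the 24-hour totals quoted above.
    MINUTES_IN_A_DAY = 24 * 60

    records = {
        "Adele - Hello (official record)": 27_700_000,
        "Psy - Gentleman (not acknowledged by Vevo)": 38_400_000,
    }

    for title, views_in_24_hours in records.items():
        per_minute = views_in_24_hours / MINUTES_IN_A_DAY
        print(f"{title}: about {per_minute:,.0f} views per minute")
    # Adele: ~19,236 views/minute; Psy: ~26,667 views/minute.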
Minaj has broken the Vevo Record more than any other artist with three separate videos: "Stupid Hoe", "Beauty and a Beat" and "Anaconda". She has held the record for an accumulated 622 days.
Justin Bieber, One Direction and Miley Cyrus have all broken the record twice.
Alexa Internet
YouTube Video: The Secret to Becoming the Top Website in Any Popular Niche 2017 - Better Alexa Rank
Alexa Internet, Inc. is a California-based company that provides commercial web traffic data and analytics. It is a wholly owned subsidiary of Amazon.com.
Founded as an independent company in 1996, Alexa was acquired by Amazon in 1999. Its toolbar collects data on browsing behavior and transmits them to the Alexa website, where they are stored and analyzed, forming the basis for the company's web traffic reporting. According to its website, Alexa provides traffic data, global rankings and other information on 30 million websites, and as of 2015 its website is visited by over 6.5 million people monthly.
Alexa Internet was founded in April 1996 by American web entrepreneurs Brewster Kahle and Bruce Gilliat. The company's name was chosen in homage to the Library of Alexandria of Ptolemaic Egypt, drawing a parallel between the largest repository of knowledge in the ancient world and the potential of the Internet to become a similar store of knowledge.
Alexa initially offered a toolbar that gave Internet users suggestions on where to go next, based on the traffic patterns of its user community. The company also offered context for each site visited: to whom it was registered, how many pages it had, how many other sites pointed to it, and how frequently it was updated.
Alexa's operations grew to include archiving of web pages as they are crawled. This database served as the basis for the creation of the Internet Archive accessible through the Wayback Machine. In 1998, the company donated a copy of the archive, two terabytes in size, to the Library of Congress. Alexa continues to supply the Internet Archive with Web crawls.
In 1999, as the company moved away from its original vision of providing an "intelligent" search engine, Alexa was acquired by Amazon.com for approximately US$250 million in Amazon stock.
Alexa began a partnership with Google in early 2002, and with the web directory DMOZ in January 2003. In May 2006, Amazon replaced Google with Bing (at the time known as Windows Live Search) as a provider of search results.
In December 2006, Amazon released Alexa Image Search. Built in-house, it was the first major application built on the company's Web platform.
In December 2005, Alexa opened its extensive search index and Web-crawling facilities to third party programs through a comprehensive set of Web services and APIs. These could be used, for instance, to construct vertical search engines that could run on Alexa's own servers or elsewhere. In May 2007, Alexa changed their API to limit comparisons to three websites, reduce the size of embedded graphs in Flash, and add mandatory embedded BritePic advertisements.
In April 2007, the lawsuit Alexa v. Hornbaker was filed to stop trademark infringement by the Statsaholic service. In the lawsuit, Alexa alleged that Hornbaker was stealing traffic graphs for profit, and that the primary purpose of his site was to display graphs that were generated by Alexa's servers. Hornbaker removed the term Alexa from his service name on March 19, 2007.
On November 27, 2008, Amazon announced that Alexa Web Search was no longer accepting new customers, and that the service would be deprecated or discontinued for existing customers on January 26, 2009. Thereafter, Alexa became a purely analytics-focused company.
On March 31, 2009, Alexa launched a major website redesign. The redesigned site provided new web traffic metrics—including average page views per individual user, bounce rate, and user time on site. In the following weeks, Alexa added more features, including visitor demographics, clickstream and search traffic statistics. Alexa introduced these new features to compete with other web analytics services.
Tracking:
Toolbar:
Alexa ranks sites based primarily on tracking a sample set of internet traffic—users of its toolbar for the Internet Explorer, Firefox and Google Chrome web browsers.
The Alexa Toolbar includes a popup blocker, a search box, links to Amazon.com and the Alexa homepage, and the Alexa ranking of the site that the user is visiting. It also allows the user to rate the site and view links to external, relevant sites.
In early 2005, Alexa stated that there had been 10 million downloads of the toolbar, though the company did not provide statistics about active usage. Originally, web pages were only ranked amongst users who had the Alexa Toolbar installed, and could be biased if a specific audience subgroup was reluctant to take part in the rankings. This caused some controversy over how representative Alexa's user base was of typical Internet behavior, especially for less-visited sites.
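Alexa has never published its exact formula, but the panel-based approach described above can be illustrated in spirit: estimate each site's reach (distinct panel users) and pageviews from the toolbar sample, combine the two, and sort. The Python sketch below is a hypothetical simplification for illustration, not Alexa's actual algorithm:

    # Hypothetical sketch of panel-based ranking: NOT Alexa's real methodology.
    # Each panel record is (user_id, site) for one page view by a toolbar user.
    from collections import defaultdict
    from math import sqrt

    panel = [
        ("u1", "example.com"), ("u1", "example.com"), ("u2", "example.com"),
        ("u2", "othersite.org"), ("u3", "othersite.org"), ("u3", "example.com"),
    ]

    pageviews = defaultdict(int)   # sampled page views per site
    visitors = defaultdict(set)    # distinct sampled users per site (reach)
    for user, site in panel:
        pageviews[site] += 1
        visitors[site].add(user)

    # Combine reach and pageviews (a geometric mean, purely for illustration),
    # then rank sites from the highest score to the lowest.
    scores = {site: sqrt(len(visitors[site]) * pageviews[site]) for site in pageviews}
    for rank, site in enumerate(sorted(scores, key=scores.get, reverse=True), start=1):
        print(rank, site, f"score={scores[site]:.2f}")

Because the score is computed only from the sampled panel, any site whose audience is unlikely to install the toolbar is under-counted, which is exactly the sampling-bias concern raised above.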
In 2007, Michael Arrington provided examples of Alexa rankings known to contradict data from the comScore web analytics service, including ranking YouTube ahead of Google.
Until 2007, a third-party-supplied plugin for the Firefox browser served as the only option for Firefox users after Amazon abandoned its A9 toolbar. On July 16, 2007, Alexa released an official toolbar for Firefox called Sparky.
On 16 April 2008, many users reported dramatic shifts in their Alexa rankings. Alexa confirmed this later in the day with an announcement that they had released an updated ranking system, claiming that they would now take into account more sources of data "beyond Alexa Toolbar users".
Certified statistics:
Using the Alexa Pro service, website owners can sign up for "certified statistics," which allows Alexa more access to a site's traffic data. Site owners embed JavaScript code on each page of their website that, if permitted by the user's security and privacy settings, runs and sends traffic data to Alexa, allowing Alexa to display—or not display, depending on the owner's preference—more accurate statistics such as total pageviews and unique pageviews.
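Alexa's real page tag is a snippet of JavaScript that runs in the visitor's browser, but the idea can be sketched conceptually: each page load assembles a small payload (site ID, page URL, a visitor identifier) for the measurement service. In the Python sketch below, every field name is invented for illustration and is not Alexa's actual tag or API:

    # Conceptual sketch of a page-view beacon; the field names are invented
    # for illustration and are NOT Alexa's real tag or API.
    import json
    import uuid

    def build_pageview_event(site_id: str, page_url: str, visitor_id: str) -> str:
        """Assemble the kind of payload a page tag might report for one page view."""
        return json.dumps({
            "site": site_id,        # identifies the certified site
            "url": page_url,        # the page that was viewed
            "visitor": visitor_id,  # lets the service count unique visitors
        })

    # One page load by one visitor; a real tag would send this to the vendor's
    # collection endpoint, subject to the user's privacy settings.
    print(build_pageview_event("example-site", "https://example.com/about", str(uuid.uuid4())))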
Privacy and malware assessments:
A number of antivirus companies have assessed Alexa's toolbar. The toolbar for Internet Explorer 7 was at one point flagged as malware by Microsoft Defender.
Symantec classifies the toolbar as "trackware", while McAfee classifies it as adware, deeming it a "potentially unwanted program." McAfee Site Advisor rates the Alexa site as "green", finding "no significant problems" but warning of a "small fraction of downloads ... that some people consider adware or other potentially unwanted programs."
Though it is possible to delete a paid subscription within an Alexa account, it is not possible to delete an account that is created at Alexa through any web interface, though any user may contact the company via its support webpage.
Founded as an independent company in 1996, Alexa was acquired by Amazon in 1999. Its toolbar collects data on browsing behavior and transmits them to the Alexa website, where they are stored and analyzed, forming the basis for the company's web traffic reporting. According to its website, Alexa provides traffic data, global rankings and other information on 30 million websites, and as of 2015 its website is visited by over 6.5 million people monthly.
Alexa Internet was founded in April 1996 by American web entrepreneurs Brewster Kahle and Bruce Gilliat. The company's name was chosen in homage to the Library of Alexandria of Ptolemaic Egypt, drawing a parallel between the largest repository of knowledge in the ancient world and the potential of the Internet to become a similar store of knowledge.
Alexa initially offered a toolbar that gave Internet users suggestions on where to go next, based on the traffic patterns of its user community. The company also offered context for each site visited: to whom it was registered, how many pages it had, how many other sites pointed to it, and how frequently it was updated.
Alexa's operations grew to include archiving of web pages as they are crawled. This database served as the basis for the creation of the Internet Archive accessible through the Wayback Machine. In 1998, the company donated a copy of the archive, two terabytes in size, to the Library of Congress. Alexa continues to supply the Internet Archive with Web crawls.
In 1999, as the company moved away from its original vision of providing an "intelligent" search engine, Alexa was acquired by Amazon.com for approximately US$250 million in Amazon stock.
Alexa began a partnership with Google in early 2002, and with the web directory DMOZ in January 2003. In May 2006, Amazon replaced Google with Bing (at the time known as Windows Live Search) as a provider of search results.
In December 2006, Amazon released Alexa Image Search. Built in-house, it was the first major application built on the company's Web platform.
In December 2005, Alexa opened its extensive search index and Web-crawling facilities to third party programs through a comprehensive set of Web services and APIs. These could be used, for instance, to construct vertical search engines that could run on Alexa's own servers or elsewhere. In May 2007, Alexa changed their API to limit comparisons to three websites, reduce the size of embedded graphs in Flash, and add mandatory embedded BritePic advertisements.
In April 2007, the lawsuit Alexa v. Hornbaker was filed to stop trademark infringement by the Statsaholic service. In the lawsuit, Alexa alleged that Hornbaker was stealing traffic graphs for profit, and that the primary purpose of his site was to display graphs that were generated by Alexa's servers. Hornbaker removed the term Alexa from his service name on March 19, 2007.
On November 27, 2008, Amazon announced that Alexa Web Search was no longer accepting new customers and that the service would be discontinued for existing customers on January 26, 2009. Thereafter, Alexa became a purely analytics-focused company.
On March 31, 2009, Alexa launched a major website redesign. The redesigned site provided new web traffic metrics—including average page views per individual user, bounce rate, and user time on site. In the following weeks, Alexa added more features, including visitor demographics, clickstream and search traffic statistics. Alexa introduced these new features to compete with other web analytics services.
Tracking:
Toolbar: Alexa ranks sites based primarily on tracking a sample set of internet traffic—users of its toolbar for the Internet Explorer, Firefox and Google Chrome web browsers.
The Alexa Toolbar includes a popup blocker, a search box, links to Amazon.com and the Alexa homepage, and the Alexa ranking of the site that the user is visiting. It also allows the user to rate the site and view links to external, relevant sites.
In early 2005, Alexa stated that there had been 10 million downloads of the toolbar, though the company did not provide statistics about active usage. Originally, web pages were only ranked amongst users who had the Alexa Toolbar installed, and could be biased if a specific audience subgroup was reluctant to take part in the rankings. This caused some controversy over how representative Alexa's user base was of typical Internet behavior, especially for less-visited sites.
In 2007, Michael Arrington provided examples of Alexa rankings known to contradict data from the comScore web analytics service, including ranking YouTube ahead of Google.
Until 2007, a third-party-supplied plugin for the Firefox browser served as the only option for Firefox users after Amazon abandoned its A9 toolbar. On July 16, 2007, Alexa released an official toolbar for Firefox called Sparky.
On 16 April 2008, many users reported dramatic shifts in their Alexa rankings. Alexa confirmed this later in the day with an announcement that they had released an updated ranking system, claiming that they would now take into account more sources of data "beyond Alexa Toolbar users".
Certified statistics:
Using the Alexa Pro service, website owners can sign up for "certified statistics," which allows Alexa more access to a site's traffic data. Site owners input Javascript code on each page of their website that, if permitted by the user's security and privacy settings, runs and sends traffic data to Alexa, allowing Alexa to display—or not display, depending on the owner's preference—more accurate statistics such as total pageviews and unique pageviews.
Privacy and malware assessments:
A number of antivirus companies have assessed Alexa's toolbar. The toolbar for Internet Explorer 7 was at one point flagged as malware by Microsoft Defender.
Symantec classifies the toolbar as "trackware", while McAfee classifies it as adware, deeming it a "potentially unwanted program." McAfee Site Advisor rates the Alexa site as "green", finding "no significant problems" but warning of a "small fraction of downloads ... that some people consider adware or other potentially unwanted programs."
Though it is possible to delete a paid subscription within an Alexa account, it is not possible to delete an account that is created at Alexa through any web interface, though any user may contact the company via its support webpage.
Public Key Certificate or Digital Certificate
YouTube Video: Understanding Digital Certificates
Pictured: Public Key Infrastructure: Basics about digital certificates (HTTPS, SSL)
In cryptography, a public key certificate (also known as a digital certificate or identity certificate) is an electronic document used to prove ownership of a public key.
The certificate includes information about the key, information about its owner's identity, and the digital signature of an entity that has verified the certificate's contents are correct. If the signature is valid, and the person examining the certificate trusts the signer, then they know they can use that key to communicate with its owner.
In a typical public-key infrastructure (PKI) scheme, the signer is a certificate authority (CA), usually a company which charges customers to issue certificates for them.
In a web of trust scheme, the signer is either the key's owner (a self-signed certificate) or other users ("endorsements") whom the person examining the certificate might know and trust.
Certificates are an important component of Transport Layer Security (TLS, sometimes called by its older name SSL, Secure Sockets Layer), where they prevent an attacker from impersonating a secure website or other server. They are also used in other important applications, such as email encryption and code signing.
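To make the relationship between the certificate, its issuing CA, and TLS concrete, here is a minimal sketch using only Python's standard library. It opens a TLS connection, lets the library validate the certificate chain against the system's trusted CAs, and then prints the identity and validity fields described above. The host name is an illustrative placeholder, not a recommendation.

```python
# Minimal sketch: fetch and inspect a server's TLS certificate with the
# Python standard library. The host name is an illustrative placeholder.
import socket
import ssl

def inspect_certificate(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # trusts the system's CA bundle
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed fields of the validated certificate
    subject = dict(item for field in cert["subject"] for item in field)
    issuer = dict(item for field in cert["issuer"] for item in field)
    print("Subject common name:", subject.get("commonName"))
    print("Issuer (the signing CA):", issuer.get("commonName"))
    print("Valid from:", cert.get("notBefore"))
    print("Valid until:", cert.get("notAfter"))

if __name__ == "__main__":
    inspect_certificate("www.example.org")
```

If the chain does not validate back to a trusted CA, the handshake in this sketch raises an error rather than returning a certificate, which is exactly the impersonation protection described above.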
For the Rest about this Topic, Click Here
How to Protect Your Child from Accessing Inappropriate Content on the Internet
YouTube Video: How to Block Websites with Parental Controls on Your iPhone, Android, and Computer
Pictured: CIPA Logo
The Children's Internet Protection Act (CIPA) requires that K-12 schools and libraries in the United States use Internet filters and implement other measures to protect children from harmful online content as a condition for federal funding. It was signed into law on December 21, 2000, and was found to be constitutional by the United States Supreme Court on June 23, 2003.
Background:
CIPA is one of a number of bills that the United States Congress proposed to limit children's exposure to pornography and explicit content online. Both of Congress's earlier attempts at restricting indecent Internet content, the Communications Decency Act and the Child Online Protection Act, were held to be unconstitutional by the U.S. Supreme Court on First Amendment grounds.
CIPA represented a change in strategy by Congress. While the federal government had no means of directly controlling local school and library boards, many schools and libraries took advantage of Universal Service Fund (USF) discounts derived from universal service fees paid by users in order to purchase eligible telecom services and Internet access.
In passing CIPA, Congress required libraries and K-12 schools using these E-Rate discounts on Internet access and internal connections to purchase and use a "technology protection measure" on every computer connected to the Internet.
These conditions also applied to a small subset of grants authorized through the Library Services and Technology Act (LSTA). CIPA did not provide additional funds for the purchase of the "technology protection measure".
Stipulations:
CIPA requires K-12 schools and libraries using E-Rate discounts to operate "a technology protection measure with respect to any of its computers with Internet access that protects against access through such computers to visual depictions that are obscene, child pornography, or harmful to minors".
Such a technology protection measure must be employed "during any use of such computers by minors". The law also provides that the school or library "may disable the technology protection measure concerned, during use by an adult, to enable access for bona fide research or other lawful purpose".
Schools and libraries that do not receive E-Rate discounts, or that receive discounts only for telecommunications services and not for Internet access or internal connections, have no obligation to filter under CIPA. As of 2007, approximately one-third of libraries had chosen to forgo federal E-Rate and certain types of LSTA funds so they would not be required to institute filtering.
This act has several requirements for institutions to meet before they can receive government funds. Libraries and schools must "provide reasonable public notice and hold at least one public hearing or meeting to address the proposed Internet safety policy" (47 U.S.C. § 254(l)(1)(B), as added by CIPA sec. 1732).
The policy proposed at this meeting must address:
- Measures to restrict a minor’s access to inappropriate or harmful materials on the Internet
- Security and safety of minors using chat rooms, email, instant messaging, or any other types of online communications
- Unauthorized disclosure of a minor’s personal information
- Unauthorized access like hacking by minors.
CIPA does not, however, require that Internet use be tracked. All Internet access, even by adults, must be filtered, though filtering requirements can be less restrictive for adults.
Content to be filtered:
The following content must be filtered or blocked:
- Obscenity as defined by Miller v. California (1973)
- Child pornography as defined by 18 U.S.C. 2256
- Harmful to minors
Some of the terms mentioned in this act, such as “inappropriate matter” and what is “harmful to minors”, are explained in the law. Under the Neighborhood Children's Internet Protection Act (47 U.S.C. § 254(l)(2), as added by CIPA sec. 1732), the definition of “inappropriate matter” is locally determined:
Local Determination of Content – a determination regarding what matter is inappropriate for minors shall be made by the school board, local educational agency, library, or other United States authority responsible for making the determination.
No agency or instrumentality of the Government may – (a) establish criteria for making such determination; (b) review the determination made by the certifying school, school board, local educational agency, library, or other authority; or (c) consider the criteria employed by the certifying school, school board, educational agency, library, or other authority in the administration of subsection 47 U.S.C. § 254(h)(1)(B).
The CIPA defines “harmful to minors” as:
Any picture, image, graphic image file, or other visual depiction that –
(i) taken as a whole and with respect to minors, appeals to a prurient interest in nudity, sex, or excretion;
(ii) depicts, describes, or represents, in a patently offensive way with respect to what is suitable for minors, an actual or simulated sexual act or sexual contact, actual or simulated normal or perverted sexual acts, or a lewd exhibition of the genitals;
and (iii) taken as a whole, lacks serious literary, artistic, political, or scientific value as to minors” (Secs. 1703(b)(2), 20 U.S.C. sec 3601(a)(5)(F) as added by CIPA sec 1711, 20 U.S.C. sec 9134(b)(f)(7)(B) as added by CIPA sec 1712(a), and 47 U.S.C. sec. 254(h)(c)(G) as added by CIPA sec. 1721(a)).
As mentioned above, there is an exception for Bona Fide Research. An institution can disable filters for adults in the pursuit of bona fide research or another type of lawful purpose. However, the law provides no definition for “bona fide research”.
In a later ruling, however, the U.S. Supreme Court said that libraries would be required to adopt an Internet use policy providing for unblocking the Internet for adult users, without a requirement that the library inquire into the user's reasons for disabling the filter.
Justice Rehnquist stated "[a]ssuming that such erroneous blocking presents constitutional difficulties, any such concerns are dispelled by the ease with which patrons may have the filtering software disabled. When a patron encounters a blocked site, he need only ask a librarian to unblock it or (at least in the case of adults) disable the filter". This effectively puts the decision of what constitutes "bona fide research" in the hands of the adult asking to have the filter disabled.
The U.S. Federal Communications Commission (FCC) subsequently instructed libraries complying with CIPA to implement a procedure for unblocking the filter upon request by an adult.
Other filtered content includes sites that contain "inappropriate language", "blogs", or are deemed "tasteless".
This can be limiting for some students' research, as a resource they wish to use may be blocked by the filter with only a vague explanation of why the page is banned. For example, if someone tries to access the Wikipedia page "March 4" or, ironically, "Internet censorship", the filter may immediately turn them away, claiming the page contains "extreme language".
Suits challenging CIPA’s Constitutionality:
On January 17, 2001, the American Library Association (ALA) voted to challenge CIPA, on the grounds that the law required libraries to unconstitutionally block access to constitutionally protected information on the Internet. It charged first that, because CIPA's enforcement mechanism involved removing federal funds intended to assist disadvantaged facilities, "CIPA runs counter to these federal efforts to close the digital divide for all Americans". Second, it argued that "no filtering software successfully differentiates constitutionally protected speech from illegal speech on the Internet".
Working with the American Civil Liberties Union (ACLU), the ALA successfully challenged the law before a three-judge panel of the U.S. District Court for the Eastern District of Pennsylvania.
In a 200-page decision, the judges wrote that "in view of the severe limitations of filtering technology and the existence of these less restrictive alternatives [including making filtering software optional or supervising users directly], we conclude that it is not possible for a public library to comply with CIPA without blocking a very substantial amount of constitutionally protected speech, in violation of the First Amendment". 201 F.Supp.2d 401, 490 (2002).
Upon appeal to the U.S. Supreme Court, however, the law was upheld as constitutional as a condition imposed on institutions in exchange for government funding. In upholding the law, the Supreme Court, adopting the interpretation urged by the U.S. Solicitor General at oral argument, made it clear that the constitutionality of CIPA would be upheld only "if, as the Government represents, a librarian will unblock filtered material or disable the Internet software filter without significant delay on an adult user's request".
In the ruling, Chief Justice William Rehnquist, joined by Justices Sandra Day O'Connor, Antonin Scalia, and Clarence Thomas, made two points. First, “Because public libraries' use of Internet filtering software does not violate their patrons' First Amendment rights, CIPA does not induce libraries to violate the Constitution, and is a valid exercise of Congress' spending power”.
The argument is that, because of the immense amount of information available online and how quickly it changes, libraries cannot review items individually to decide which to exclude, and blocking entire websites can often exclude valuable information as well. It is therefore reasonable for public libraries to restrict access to certain categories of content.
Secondly, “CIPA does not impose an unconstitutional condition on libraries that receive E-Rate and LSTA subsidies by requiring them, as a condition on that receipt, to surrender their First Amendment right to provide the public with access to constitutionally protected speech”. The argument here is that the government can offer public funds to help institutions fulfill their roles, as in the case of libraries providing access to information.
The Justices cited Rust v. Sullivan (1991) as precedent to show how the Court has approved using government funds with certain limitations to facilitate a program. Furthermore, since public libraries traditionally do not include pornographic material in their book collections, the court can reasonably uphold a law that imposes a similar limitation for online texts.
As noted above, the text of the law authorized institutions to disable the filter on request "for bona fide research or other lawful purpose", implying that the adult would be expected to provide justification with his request. But under the interpretation urged by the Solicitor General and adopted by the Supreme Court, libraries would be required to adopt an Internet use policy providing for unblocking the Internet for adult users, without a requirement that the library inquire into the user's reasons for disabling the filter.
Legislation after CIPA:
An attempt to expand CIPA to include "social networking" web sites was considered by the U.S. Congress in 2006; see the Deleting Online Predators Act. More recently, the International Society for Technology in Education (ISTE) and the Consortium for School Networking (CoSN) have urged Congress to update CIPA's terms, with the aim of regulating, rather than abolishing, students' access to social networking sites and chat rooms.
Neither ISTE nor CoSN wishes to ban these online communication outlets entirely, however, as they believe the "Internet contains valuable content, collaboration and communication opportunities that can and do materially contribute to a student's academic growth and preparation for the workforce".
See Also:
- Content-control software
- Internet censorship
- The King's English v. Shurtleff
- State of Connecticut v. Julie Amero
- Think of the children
The Internet, including its History, Access, Protocol, and a List of the Largest Internet Companies by Revenue
YouTube Video of "How Does the Internet Work ?"
Pictured: Packet routing across the Internet involves several tiers of Internet service providers.
(Courtesy of User:Ludovic.ferre - Internet Connectivity Distribution&Core.svg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10030716)
The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link billions of devices worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies.
The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and peer-to-peer networks for file sharing.
The origins of the Internet date back to research commissioned by the United States government in the 1960s to build robust, fault-tolerant communication via computer networks. The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1980s.
The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The linking of commercial networks and enterprises by the early 1990s marks the beginning of the transition to the modern Internet, and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network.
Although the Internet had been widely used by academia since the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
Internet use grew rapidly in the West from the mid-1990s and from the late 1990s in the developing world. In the two decades after 1995, annual Internet use grew roughly a hundredfold, reaching over one third of the world's population.
Most traditional communications media, including telephony and television, are being reshaped or redefined by the Internet, giving birth to new services such as Internet telephony and Internet television. Newspaper, book, and other print publishing are adapting to website technology, or are reshaped into blogging and web feeds.
The entertainment industry was initially the fastest growing segment on the Internet. The Internet has enabled and accelerated new forms of personal interactions through instant messaging, Internet forums, and social networking. Online shopping has grown exponentially both for major retailers and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN).
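The division of labor between the two name spaces can be illustrated with a short Python sketch: the Domain Name System translates a human-readable name into addresses drawn from the IP address space. The host name below is only a placeholder for illustration.

```python
# Small illustration of the two name spaces: DNS maps a human-readable name
# onto addresses from the IP address space. The host name is a placeholder.
import socket

def resolve(hostname: str) -> list[str]:
    # getaddrinfo consults the system resolver, which in turn queries DNS.
    results = socket.getaddrinfo(hostname, None)
    # Each result is (family, type, proto, canonname, sockaddr); the first
    # element of sockaddr is the IP address as a string.
    return sorted({info[4][0] for info in results})

if __name__ == "__main__":
    print(resolve("www.example.org"))
```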
The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
(For more detailed information about the Internet, click here)
___________________________________________________________________________
The history of the Internet begins with the development of electronic computers in the 1950s. Initial concepts of packet networking originated in several computer science laboratories in the United States, United Kingdom, and France.
The US Department of Defense awarded contracts as early as the 1960s for packet network systems, including the development of the ARPANET. The first message was sent over the ARPANET from computer science professor Leonard Kleinrock's laboratory at the University of California, Los Angeles (UCLA) to the second network node at the Stanford Research Institute (SRI).
Packet switching networks such as the ARPANET, NPL network, CYCLADES, Merit Network, Tymnet, and Telenet were developed in the late 1960s and early 1970s using a variety of communications protocols. Donald Davies first designed a packet-switched network at the National Physical Laboratory in the UK, which became a testbed for UK research for almost two decades.
The ARPANET project led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks.
Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet protocol suite (TCP/IP) was introduced as the standard networking protocol on the ARPANET.
In the 1980s, the NSF funded the establishment of national supercomputing centers at several universities, and provided interconnectivity in 1986 with the NSFNET project, which also created network access to the supercomputer sites in the United States from research and education organizations. Commercial Internet service providers (ISPs) began to emerge in the very late 1980s.
The ARPANET was decommissioned in 1990. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990, and the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.
In the 1980s, research at CERN in Switzerland by British computer scientist Tim Berners-Lee resulted in the World Wide Web, linking hypertext documents into an information system, accessible from any node on the network.
Since the mid-1990s, the Internet has had a revolutionary impact on culture, commerce, and technology, including the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites.
The research and education community continues to develop and use advanced networks such as NSF's very high speed Backbone Network Service (vBNS), Internet2, and National LambdaRail.
Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more. The Internet's takeover of the global communication landscape was almost instant in historical terms: it only communicated 1% of the information flowing through two-way telecommunications networks in the year 1993, already 51% by 2000, and more than 97% of the telecommunicated information by 2007.
Today the Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking.
Click here for further information about the History of the Internet.
___________________________________________________________________________
Internet access is the process that enables individuals and organisations to connect to the Internet using computer terminals, computers, and mobile devices, sometimes via local computer networks.
Once connected to the Internet, users can access Internet services, such as email and the World Wide Web. Internet service providers (ISPs) offer Internet access through various technologies that offer a wide range of data signaling rates (speeds).
Consumer use of the Internet first became popular through dial-up Internet access in the 1990s. By the first decade of the 21st century, many consumers in developed nations used faster, broadband Internet access technologies. By 2014 this was almost ubiquitous worldwide, with a global average connection speed exceeding 4 Mbit/s.
Click here for more about Internet Access.
___________________________________________________________________________
The Internet Protocol (IP) is the principal communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet.
IP has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information.
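As a rough illustration of that packet structure, the sketch below unpacks the fixed portion of an IPv4 header with Python's standard library and pulls out the source and destination addresses mentioned above. The header bytes are fabricated solely for the example.

```python
# Rough sketch: every IPv4 packet begins with a fixed 20-byte header that
# carries, among other fields, the source and destination addresses.
# The sample bytes below are fabricated for illustration only.
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    # '!BBHHHBBH4s4s' = version/IHL, DSCP/ECN, total length, identification,
    # flags/fragment offset, TTL, protocol, header checksum, src addr, dst addr.
    fields = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": fields[0] >> 4,
        "ttl": fields[5],
        "protocol": fields[6],             # e.g. 6 = TCP, 17 = UDP
        "source": socket.inet_ntoa(fields[8]),
        "destination": socket.inet_ntoa(fields[9]),
    }

if __name__ == "__main__":
    sample = bytes.fromhex(
        "45000054abcd40004001f00ac0a80001c0a800fe"  # fabricated header bytes
    )
    print(parse_ipv4_header(sample))
```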
Historically, IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974; the other being the connection-oriented Transmission Control Protocol (TCP). The Internet protocol suite is therefore often referred to as TCP/IP.
The first major version of IP, Internet Protocol Version 4 (IPv4), is the dominant protocol of the Internet. Its successor is Internet Protocol Version 6 (IPv6).
Click here for more about Internet Protocol.
___________________________________________________________________________
A List of the World's Largest Internet Companies:
This is a list of the world's largest internet companies by revenue and market capitalization.
The list is restricted to dot-com companies, defined as companies that do the majority of their business on the Internet, with annual revenues exceeding US$1 billion. It excludes Internet service providers and other information technology companies. For a more general list of IT companies, see the list of the largest information technology companies.
Click here for a listing of the Largest Internet Companies based on revenues.
Glossary of Internet-related Terms
YouTube Video: Humorous Take by "Tonight with John Oliver: Net Neutrality (HBO)"
Pictured: Internet terms as a Crossword Puzzle
Click on the above link "Glossary of Internet-related Terms" for an alphabetically arranged glossary of Internet terms.
Listing of Online Retailers based in the United States
YouTube Video: Jeff Bezos* 7 Rules of Success | Amazon.com founder | inspirational speech
*- Jeff Bezos is the founder of Amazon.com
Pictured: Shopping online at Walmart.com
Online Digital Libraries
Wikipedia Library Database
YouTube Video: Transforming Tutorials: Tips to Make Digital Library Videos More Engaging and Accessible Online
[RE: YouTube Video (above): This presentation will show how to use the new TED-Ed tool to increase engagement, participation, and retention of concepts covered in library videos. The presentation will also showcase how library videos can be embedded in popular platforms such as LibGuides, Blackboard or social networking websites to reach a wider audience. Finally, the presentation will comment on how the YouTube hosting platform supports.]
Click here to access the Wikipedia Online Library
A digital library is a special library with a focused collection of digital objects, which can include text, visual material, audio material, and video material, stored in electronic media formats (as opposed to print, microform, or other media), along with means for organizing, storing, and retrieving the files and media contained in the library collection.
Digital libraries can vary immensely in size and scope, and can be maintained by individuals, organizations, or affiliated with established physical library buildings or institutions, or with academic institutions.
The digital content may be stored locally, or accessed remotely via computer networks. An electronic library is a type of information retrieval system.
For amplification, click on any of the following:
- 1 Software implementation
- 2 History
- 3 Terminology
- 4 Academic repositories
- 5 Digital archives
- 6 The future
- 7 Searching
- 8 Advantages
- 9 Disadvantages
- 10 See also
- 11 References
- 12 Further reading
- 13 External links
TED (Technology, Entertainment, Design)
YouTube Video: The 20 Most-Watched TEDTalks
Click here to visit TED.COM website.
TED (Technology, Entertainment, Design) is a global set of conferences run by the private nonprofit organization Sapling Foundation, under the slogan "Ideas Worth Spreading".
TED was founded in February 1984 as a one-off event. The annual conference series began in 1990.
TED's early emphasis was technology and design, consistent with its Silicon Valley origins, but it has since broadened its focus to include talks on many scientific, cultural, and academic topics.
The main TED conference is held annually in Vancouver, British Columbia, Canada and its companion TEDActive is held in the neighboring city of Whistler. Prior to 2014, the two conferences were held in Long Beach and Palm Springs, California, respectively.
TED events are also held throughout North America and in Europe and Asia, offering live streaming of the talks. They address a wide range of topics within the research and practice of science and culture, often through storytelling.
The speakers are given a maximum of 18 minutes to present their ideas in the most innovative and engaging ways they can.
Past speakers include:
- Bill Clinton,
- Jane Goodall,
- Al Gore,
- Gordon Brown,
- Billy Graham,
- Richard Dawkins,
- Richard Stallman,
- Bill Gates,
- Bono,
- Mike Rowe,
- Google founders Larry Page and Sergey Brin,
- and many Nobel Prize winners.
TED's current curator is the British former computer journalist and magazine publisher Chris Anderson.
As of March 2016, over 2,400 talks are freely available on the website. In June 2011, the talks' combined viewing figure stood at more than 500 million, and by November 2012, TED talks had been watched over one billion times worldwide.
Not all TED talks are equally popular, however. Those given by academics tend to be watched more online, and art and design videos tend to be watched less than average.
Click on any of the following blue hyperlinks for more about TED (Conference):
Internet Content Management Systems (Web CMS)
YouTube Video: Web Content Management Explained
Pictured: Illustration of the process for Web CMS
A web content management system (WCMS) is a software system that provides website authoring, collaboration, and administration tools designed to allow users with little knowledge of web programming languages or markup languages to create and manage website content with relative ease.
A robust web content management system also provides a foundation for collaboration, allowing multiple authors to edit, contribute to, and manage documents and published output.
Most systems use a content repository or a database to store page content, metadata, and other information assets that might be needed by the system.
A presentation layer (template engine) displays the content to website visitors based on a set of templates, which are sometimes XSLT files.
Most systems use server side caching to improve performance. This works best when the WCMS is not changed often but visits happen regularly.
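A minimal sketch of that caching idea follows, assuming a hypothetical render_page() function standing in for the expensive template and database work; rendered pages are reused until a short time-to-live expires, which is why caching pays off when content changes rarely but is visited often.

```python
# Minimal sketch of server-side page caching, assuming a hypothetical
# render_page() function that does the expensive template/database work.
import time

CACHE_TTL_SECONDS = 300          # how long a rendered page stays valid
_cache: dict[str, tuple[float, str]] = {}

def render_page(path: str) -> str:
    # Placeholder for the expensive part: querying the content repository
    # and running the template engine.
    return f"<html><body>Rendered content for {path}</body></html>"

def get_page(path: str) -> str:
    now = time.time()
    cached = _cache.get(path)
    if cached and now - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]         # cache hit: skip rendering entirely
    html = render_page(path)
    _cache[path] = (now, html)
    return html
```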
Administration is also typically done through browser-based interfaces, but some systems require the use of a fat client.
A WCMS allows non-technical users to make changes to a website with little training. A WCMS typically requires a systems administrator and/or a web developer to set up and add features, but it is primarily a website maintenance tool for non-technical staff.
Capabilities:
A web content management system is used to control a dynamic collection of web material, including HTML documents, images, and other forms of media. A CMS facilitates document control, auditing, editing, and timeline management. A WCMS typically has the following features:
Automated templates:
Create standard templates (usually HTML and XML) that can be automatically applied to new and existing content, allowing the appearance of all content to be changed from one central place.
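As a minimal sketch of automated templating, the snippet below applies one central template to every content item, so changing that template changes the appearance of the whole site. It uses Python's standard-library string.Template as a stand-in for a real WCMS template engine; the field names and data are hypothetical.

    # One central template applied to all content (illustrative only).
    from string import Template

    PAGE_TEMPLATE = Template(
        "<html><head><title>$title</title></head>"
        "<body><h1>$title</h1><div>$body</div></body></html>"
    )

    articles = [
        {"title": "Welcome", "body": "First post."},
        {"title": "News", "body": "Second post."},
    ]

    pages = [PAGE_TEMPLATE.substitute(a) for a in articles]
    print(pages[0])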
Access control:
Some WCMS systems support user groups. User groups allow you to control how registered users interact with the site. A page on the site can be restricted to one or more groups. This means an anonymous user (someone not logged on), or a logged on user who is not a member of the group a page is restricted to, will be denied access to the page.
Scalable expansion:
Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains, depending on the server's settings.
WCMS sites may be able to create microsites/web portals within a main site as well.
Easily editable content:
Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical users to create and edit content.
Scalable feature sets:
Most WCMS software includes plug-ins or modules that can be easily installed to extend an existing site's functionality.
Web standards upgrades:
Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards.
Workflow management:
Workflow is the process of creating cycles of sequential and parallel tasks that must be accomplished in the CMS. For example, one or many content creators can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it.
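The submit, copy-edit, approve cycle described above can be pictured as a small state machine. The sketch below is purely illustrative; the state names, actions, and roles are made up rather than taken from any particular CMS.

    # Illustrative editorial workflow: draft -> submitted -> copyedited -> published.
    ALLOWED = {
        ("draft", "submit", "author"): "submitted",
        ("submitted", "copyedit", "copy_editor"): "copyedited",
        ("copyedited", "approve", "editor_in_chief"): "published",
    }

    def transition(state, action, role):
        try:
            return ALLOWED[(state, action, role)]
        except KeyError:
            raise PermissionError("%s may not %s a %s story" % (role, action, state))

    state = "draft"
    state = transition(state, "submit", "author")
    state = transition(state, "copyedit", "copy_editor")
    state = transition(state, "approve", "editor_in_chief")
    print(state)  # -> published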
Collaboration:
CMS software may act as a collaboration platform, allowing content to be retrieved and worked on by one or many authorized users. Changes can be tracked and authorized for publication, or ignored, reverting to old versions. Other advanced forms of collaboration allow multiple users to modify (or comment on) a page at the same time in a collaboration session.
Delegation:
Some CMS software allows for various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management.
Document management:
CMS software may provide a means of collaboratively managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction.
Content virtualization:
CMS software may provide a means of allowing each user to work within a virtual copy of the entire web site, document set, and/or code base. This enables changes to multiple interdependent resources to be viewed and/or executed in-context prior to submission.
Content syndication:
CMS software often assists in content distribution by generating RSS and Atom data feeds to other systems. They may also e-mail users when updates are available as part of the workflow process.
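As a sketch of feed-based syndication, the snippet below builds a minimal RSS 2.0 document from stored content using only Python's standard library. The channel details and item data are placeholders, not output from any real CMS.

    # Minimal RSS 2.0 feed generation from CMS content (placeholder data).
    import xml.etree.ElementTree as ET

    items = [
        {"title": "New article", "link": "https://example.com/new-article"},
        {"title": "Site update", "link": "https://example.com/site-update"},
    ]

    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example site feed"
    ET.SubElement(channel, "link").text = "https://example.com/"
    ET.SubElement(channel, "description").text = "Latest content from the CMS."

    for entry in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = entry["title"]
        ET.SubElement(item, "link").text = entry["link"]

    print(ET.tostring(rss, encoding="unicode"))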
Multilingual:
Ability to display content in multiple languages.
Versioning:
Like document management systems, CMS software may allow the process of versioning by which pages are checked in or out of the WCMS, allowing authorized editors to retrieve previous versions and to continue work from a selected point. Versioning is useful for content that changes over time and requires updating, but it may be necessary to go back to or reference a previous copy.
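A minimal sketch of page versioning, assuming nothing about any specific CMS's interface: every save keeps the previous copies, an editor can retrieve any earlier version, and reverting simply re-saves an old copy as the newest version so history is never lost.

    # Illustrative page-versioning store.
    class VersionedPage:
        def __init__(self, text):
            self.versions = [text]          # version 0 is the original

        def save(self, text):
            self.versions.append(text)      # check-in: new version on top

        def get(self, version=-1):
            return self.versions[version]   # default: latest version

        def revert(self, version):
            self.save(self.versions[version])  # reverting is itself a new version

    page = VersionedPage("First draft")
    page.save("Edited draft")
    page.revert(0)
    print(page.get())           # -> First draft
    print(len(page.versions))   # -> 3 (history is preserved)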
Types:
There are three major types of WCMS: offline processing, online processing, and hybrid systems. These terms describe the deployment pattern for the WCMS in terms of when presentation templates are applied to render web pages from structured content.
Offline processing: These systems, sometimes referred to as "static site generators", pre-process all content, applying templates before publication to generate web pages. Since pre-processing systems do not require a server to apply the templates at request time, they may also exist purely as design-time tools.
Online processing: These systems apply templates on demand. HTML may be generated when a user visits the page, or it may be pulled from a web cache.
Most open source WCMSs can support add-ons, which provide extended capabilities including forums, blogs, wikis, web stores, photo galleries, contact management, etc. These are often called modules, nodes, widgets, add-ons, or extensions.
Hybrid systems: Some systems combine the offline and online approaches. Some systems write out executable code (e.g., JSP, ASP, PHP, ColdFusion, or Perl pages) rather than just static HTML, so that the CMS itself does not need to be deployed on every web server. Other hybrids operate in either an online or offline mode.
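To make the offline-processing (static site generator) pattern above concrete, here is a minimal, purely illustrative Python sketch: all content is rendered through a template at build time and written out as plain HTML files that any web server can serve. The paths, template, and data are hypothetical.

    # Minimal static site generator: templates applied before publication.
    from pathlib import Path
    from string import Template

    TEMPLATE = Template("<html><body><h1>$title</h1><p>$body</p></body></html>")

    content = [
        {"slug": "index", "title": "Home", "body": "Welcome to the site."},
        {"slug": "about", "title": "About", "body": "Generated ahead of time."},
    ]

    out_dir = Path("site")
    out_dir.mkdir(exist_ok=True)

    for item in content:
        html = TEMPLATE.substitute(item)   # templates applied at build time
        (out_dir / (item["slug"] + ".html")).write_text(html, encoding="utf-8")

    print("Generated", len(content), "pages in", out_dir)

Because no template is applied at request time, the output can be hosted on any plain web server or even a file share.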
Advantages:
Low cost:
Some content management systems are free, such as Drupal, eZ Publish, TYPO3, Joomla, and WordPress. Others may be affordable, with subscriptions priced by size. Although subscriptions can be expensive, not having to hire full-time developers can lower the total cost, and for many CMSs software can be purchased based on need.
Easy customization:
A universal layout is created, giving pages a similar theme and design without much code. Many CMS tools use a drag-and-drop AJAX system for their design modes, making it easy for beginners to create custom front-ends.
Easy to use:
CMSs are designed with non-technical people in mind. Simplicity in design of the admin UI allows website content managers and other users to update content without much training in coding or technical aspects of system maintenance.
Workflow management:
CMSs provide the facility to control how content is published, when it is published, and who publishes it. Some WCMSs allow administrators to set up rules for workflow management, guiding content managers through a series of steps required for each of their tasks.
Good for SEO:
CMS websites are also good for search engine optimization (SEO).
Freshness of content is one factor, as some search engines are believed to give preference to websites with new, updated content over websites with stale, outdated content.
Social media plugins help build a community around a blog, and the RSS feeds automatically generated by blogs and CMS websites can increase the number of subscribers and readers of the site.
URL rewriting can be implemented easily, producing clean URLs without parameters, which further helps SEO; plugins are also available that specifically assist with on-site SEO.
Disadvantages:
Cost of implementations:
Larger scale implementations may require training, planning, and certifications. Certain CMSs may require hardware installations. Commitment to the software is required on bigger investments. Commitment to training, developing, and upkeep are all costs that will be incurred for enterprise systems.
Cost of maintenance:
Maintaining CMSs may require license updates, upgrades, and hardware maintenance.
Latency issues:
Larger CMSs can experience latency if hardware infrastructure is not up to date, if databases are not being utilized correctly, and if web cache files that have to be reloaded every time data is updated grow large. Load balancing issues may also impair caching files.
Tool mixing:
Because the URLs of many CMSs are dynamically generated with internal parameters and reference information, they are often not stable enough for static pages and other web tools, particularly search engines, to rely on them.
Security:
CMSs are often forgotten when hardware, software, and operating systems are patched for security threats. Due to a lack of patching by the user, a hacker can exploit vulnerabilities in unpatched CMS software to enter an otherwise secure environment. CMSs should be part of an overall, holistic security patch management program to maintain the highest possible security standards.
Notable web CMS:
See also: List of content management systems
Some notable examples of CMS:
- WordPress originated as a blogging CMS, but has been adapted into a full-fledged CMS.
- Textpattern is one of the first open source CMSs.
- Joomla! is a popular content management system.
- TYPO3 is one of the most widely used open source enterprise-class CMSs.
- Drupal is the third most used CMS and originated before WordPress and Joomla.
- ExpressionEngine is among the top five most used CMSs. It is a commercial CMS made by EllisLab.
- MediaWiki powers Wikipedia and related projects, and is one of the most prominent examples of a wiki CMS.
- Magnolia CMS
- eXo Platform Open Source Social CMS
- Liferay Open Source Portal WCMS
- TWiki Open Source Structured wiki CMS
- Foswiki Open Source Structured wiki CMS
- HP TeamSite
- SoNET Web Engine
- EpiServer
- FileNet
- OpenText Web Experience Management
See also:
Web Browsers: click on topic hyperlinks below (in bold underline blue):
YouTube Video: Edge vs Chrome v Firefox - 2016 Visual Comparison of Web Browsers
Pictured: Web browsers and the date each became commercially available
Click Here for A List of Web Browsers
Click Here for a Comparison of Web Browsers, presented as tables comparing general and technical information. Please see the individual products' articles for further information.
Click Here for Usage Share of Web Browsers.
A web browser (commonly referred to as a browser) is a software application for retrieving, presenting, and traversing information resources on the World Wide Web.
An information resource is identified by a Uniform Resource Identifier (URI/URL) and may be a web page, image, video or other piece of content. Hyperlinks present in resources enable users easily to navigate their browsers to related resources.
Although browsers are primarily intended to use the World Wide Web, they can also be used to access information provided by web servers in private networks or files in file systems.
The major web browsers are Firefox, Internet Explorer/Microsoft Edge, Google Chrome, Opera, and Safari.
Click on any of the following blue hyperlinks for more information about Web Browsers:
Web Search Engines including a List of Search Engines
YouTube Video: How Search Works (by Google)
For a tutorial on using search engines for researching Wikipedia articles, click here: Wikipedia:Search engine test.
Pictured: Logos of Major Search Engines
Click here for a List of Search Engines
A web search engine is a software system that is designed to search for information on the World Wide Web.
The search results are generally presented in a line of results often referred to as search engine results pages (SERPs).
The information may be a mix of web pages, images, and other types of files. Some search engines also mine data available in databases or open directories.
Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.
For further information about web search engines, click on any of the following blue hyperlinks:
- History
- How web search engines work
- Market share
- Search engine bias
- Customized results and filter bubbles
- Christian, Islamic and Jewish search engines
- Search engine submission
- See also:
Search Engine Optimization (SEO) including Search Engine Results Page (SERP)
YouTube Video: What Is Search Engine Optimization / SEO
YouTube Video: 5 Hours of SEO | Zero-Click Searches, SERP Features, and Getting Traffic
Pictured: Illustration of the SEO Process
Search engine optimization (SEO) is the process of affecting the visibility of a website or a web page in a web search engine's unpaid results — often referred to as "natural," "organic," or "earned" results.
In general, the earlier (or higher ranked on the search results page), and more frequently a site appears in the search results list, the more visitors it will receive from the search engine's users, and these visitors can be converted into customers.
SEO may target different kinds of search, including image search, local search, video search, academic search, news search and industry-specific vertical search engines.
As an Internet marketing strategy, SEO considers how search engines work, what people search for, the actual search terms or keywords typed into search engines and which search engines are preferred by their targeted audience.
Optimizing a website may involve editing its content, HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines.
Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic. In May 2015, mobile search surpassed desktop search for the first time; Google is developing and promoting mobile search as the future across its products, and many brands are beginning to take a different approach to their Internet strategies.
Google's Role in SEO:
In 1998, graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.
PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
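The random-surfer idea can be illustrated with a small power-iteration sketch over a toy link graph. This is the textbook formulation of PageRank described above, not Google's production system; the link graph is invented and the damping factor of 0.85 is the value commonly cited in the literature.

    # Toy PageRank by power iteration (illustrative only).
    links = {                    # page -> pages it links to (hypothetical graph)
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
    }

    d = 0.85                     # damping factor: chance the surfer follows a link
    n = len(links)
    rank = {page: 1.0 / n for page in links}

    for _ in range(50):          # iterate until the ranks settle
        new_rank = {}
        for page in links:
            incoming = sum(rank[src] / len(outs)
                           for src, outs in links.items() if page in outs)
            new_rank[page] = (1 - d) / n + d * incoming
        rank = new_rank

    print({page: round(score, 3) for page, score in rank.items()})

Pages that receive links from strong pages end up with higher scores, which is exactly the "some links are stronger than others" effect described above.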
Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design. Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings.
Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.
By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. In June 2007, The New York Times' Saul Hansell stated that Google ranks sites using more than 200 different signals. The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages.
Some SEO practitioners have studied different approaches to search engine optimization, and have shared their personal opinions. Patents related to search engines can provide information to better understand search engines.
In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged in users. In 2008, Bruce Clay said that "ranking is dead" because of personalized search. He opined that it would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.
In 2007, Google announced a campaign against paid links that transfer PageRank. On June 15, 2009, Google disclosed that it had taken measures to mitigate the effects of PageRank sculpting through use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.
As a result of this change, the use of nofollow led to the evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript, thus permitting PageRank sculpting.
Additionally, several solutions have been suggested that involve the use of iframes, Flash, and JavaScript.
In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.
On June 8, 2010, a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts, and other content much sooner after publishing than before, Caffeine was a change to the way Google updated its index in order to make things show up on Google more quickly than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."
Google Instant, real-time-search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.
In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically websites have copied content from one another and benefited in search engine rankings by engaging in this practice, however Google implemented a new system which punishes sites whose content is not unique.
The 2012 Google Penguin attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine, and the 2013 Google Hummingbird update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.
Methods:
Getting indexed:
The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search engine indexed pages do not need to be submitted because they are found automatically. Two major directories, the Yahoo Directory and DMOZ, both require manual submission and human editorial review.
Google offers Google Search Console, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links; this is in addition to Google's URL submission console. Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click; this was discontinued in 2009.
Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.
Preventing crawling:
Main article: Robots Exclusion Standard
To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots.
When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled.
Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.
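For illustration, a robots.txt file placed at the root of a domain uses the User-agent and Disallow directives of the Robots Exclusion Standard; the paths below are hypothetical. A single page can also opt out of indexing with a robots meta tag in its HTML head.

    User-agent: *
    Disallow: /cart/
    Disallow: /search/

    <meta name="robots" content="noindex, nofollow">

Note that robots.txt is advisory: well-behaved crawlers honor it, but it is not an access control mechanism.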
Increasing prominence:
A variety of methods can increase the prominence of a webpage within the search results. Cross linking between pages of the same website to provide more links to important pages may improve its visibility.
Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic. Updating content so as to keep search engines crawling back frequently can give additional weight to a site.
Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL normalization of web pages accessible via multiple URLs, using the canonical link element or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score.
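As an illustration of the canonical link element mentioned above, each duplicate URL variant can point to a single preferred address in its HTML head; the address below is purely a placeholder.

    <link rel="canonical" href="https://www.example.com/widgets/blue-widget">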
White hat versus black hat techniques:
SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.
White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.
An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see.
White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility, although the two are not identical.
Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.
Another category sometimes used is grey hat SEO. This falls between the black hat and white hat approaches: the methods employed avoid having the site penalized, but they do not focus on producing the best content for users, being instead entirely focused on improving search engine rankings.
Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review.
One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices. Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.
As a marketing strategy, SEO is not appropriate for every website; other Internet marketing strategies, such as paid advertising through pay-per-click (PPC) campaigns, can be more effective, depending on the site operator's goals.
A successful Internet marketing campaign may also depend upon building high quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.
In November 2015, Google released a full 160-page version of its Search Quality Rating Guidelines to the public, which shows a shift in its focus towards "usefulness" and mobile search.
SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors. Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic.
According to Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day. It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic.
In addition to accessibility in terms of web crawlers (addressed above), user web accessibility has become increasingly important for SEO.
International markets:
Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines' market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches. In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007.
As of 2006, Google had an 85–90% market share in Germany. While there were hundreds of SEO firms in the US at that time, there were only about five in Germany. As of June 2008, the marketshare of Google in the UK was close to 90% according to Hitwise. That market share is achieved in a number of countries.
As of 2009, there are only a few large markets where Google is not the leading search engine. In most cases, when Google is not leading in a given market, it is lagging behind a local player. The most notable example markets are China, Japan, South Korea, Russia and the Czech Republic where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.
Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.
Legal precedents:
On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."
In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. Kinderstart's website was removed from Google's index prior to the lawsuit and the amount of traffic to the site dropped by 70%. On March 16, 2007 the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.
See also: ___________________________________________________________________________
Search Engine Results Page (SERP)
A search engine results page (SERP) is the page displayed by a web search engine in response to a query by a searcher. The main component of the SERP is the listing of results that are returned by the search engine in response to a keyword query, although the page may also contain other results such as advertisements.
The results are of two general types, organic (i.e., retrieved by the search engine's algorithm) and sponsored (i.e., advertisements). The results are normally ranked by relevance to the query. Each result displayed on the SERP normally includes a title, a link that points to the actual page on the Web and a short description showing where the keywords have matched content within the page for organic results. For sponsored results, the advertiser chooses what to display.
Due to the huge number of items that are available or related to the query there usually are several SERPs in response to a single search query as the search engine or the user's preferences restrict viewing to a subset of results per page. Each succeeding page will tend to have lower ranking or lower relevancy results.
As in traditional print media and its advertising, this enables competitive pricing for page real estate, but here it is compounded by the dynamics of consumer expectations and intent. Unlike static print media, where the content and the advertising on every page is the same all of the time for all viewers (even when such hard copy is localized to some degree, usually geographically, by state, metro area, city, or neighborhood), search results vary by query, user, and context.
Components:
There are basically four main components of a SERP, which are:
However, the SERPs of major search engines, like Google, Yahoo!, and Bing, may include many different types of enhanced results (organic search and sponsored) such as rich snippets, images, maps, definitions, answer boxes, videos or suggested search refinements. A recent study revealed that 97% of queries in Google returned at least one rich feature.
The major search engines visually differentiate specific content types such as images, news, and blogs. Many content types have specialized SERP templates and visual enhancements on the main search results page.
Search query:
Also known as 'user search string', this is the word or set of words that are typed by the user in the search bar of the search engine. The search box is located on all major search engines like Google, Yahoo, and Bing. Users indicate the topic desired based on the keywords they enter into the search box in the search engine.
In the competition between search engines to draw the attention of more users and advertisers, consumer satisfaction has been a driving force in the evolution of the search algorithm applied to better filter the results by relevancy.
Search queries are no longer successful based upon merely finding words that match purely by spelling. Intent and expectations have to be derived to determine whether the appropriate result is a match based upon the broader meanings drawn from context.
That sense of context has grown from the simple matching of words, then of phrases, to the matching of ideas, and the meanings of those ideas change over time and context.
Successful matching can also be crowd-sourced: what others are currently searching for and clicking on when they enter related keywords. That crowd-sourcing may in turn be focused by one's own social network.
With the advent of portable devices (smartphones), wearable devices (watches), and various sensors, ever more contextual dimensions are available for consumers and advertisers to refine and maximize relevancy, using additional factors that may be gleaned such as:
Social context and crowd sourcing influences can also be pertinent factors.
The move away from keyboard input and the search box to voice access, aside from convenience, also makes other factors available to varying degrees of accuracy and pertinence, such as a person's character, intonation, mood, accent, and ethnicity, and even elements overheard from nearby people and the background environment.
Searching is changing from explicit keyword queries ("on TV show w, did x marry y or z", "election results for candidate x in county y on date z", "final score for team x in game y on date z") to vocalizing from a particular time and location ("hey, so who won?") and getting the results that one expects.
Organic results:
Main article: Web search query
Organic SERP listings are the natural listings generated by search engines based on a series of metrics that determines their relevance to the searched term. Webpages that score well on a search engine's algorithmic test show in this list.
These algorithms are generally based upon factors such as the content of a webpage, the trustworthiness of the website, and external factors such as backlinks, social media, news, advertising, etc.
People tend to view only the first SERP, and within it mainly the first results. Each page of search engine results usually contains 10 organic listings (although some results pages may have fewer).
The listings on the first page are the most important, because they receive 91% of the click-through rate (CTR) from a particular search. According to a 2013 study, CTR on the first page breaks down as follows:
Sponsored results:
Main article: Search engine marketing § Paid inclusion
Every major search engine with significant market share accepts paid listings. This unique form of search engine advertising guarantees that your site will appear in the top results for the keyword terms you target within a day or less. Paid search listings are also called sponsored listings and/or Pay Per Click (PPC) listings.
Rich snippets:
Rich snippets are displayed by Google in the search results page when a website contains content in structured data markup. Structured data markup helps the Google algorithm to index and understand the content better. Google supports rich snippets for the following data types:
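Whatever the specific data types supported, structured data markup generally takes a form like the following illustrative schema.org snippet in JSON-LD (one of the formats Google accepts, alongside microdata and RDFa); the product name and rating values are made up.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Widget",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "89"
      }
    }
    </script>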
Knowledge Graph:
Search engines like Google or Bing have started to expand their data into encyclopedias and other rich sources of information.
Google, for example, calls this sort of information the "Knowledge Graph"; if a search query matches, it will display an additional sub-window on the right-hand side with information from its sources.
Information about hotels, events, flights, places, businesses, people, books and movies, countries, sport groups, architecture and more can be obtained that way.
Generation:
Major search engines like Google, Yahoo!, and Bing primarily use content contained within the page and fallback to metadata tags of a web page to generate the content that makes up a search snippet. Generally, the HTML title tag will be used as the title of the snippet while the most relevant or useful contents of the web page (description tag or page copy) will be used for the description.
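For illustration, the parts of a page that a search engine typically draws a snippet from look like this in the HTML source; the title and description text here are placeholders.

    <head>
      <title>Example Page Title Shown as the Snippet Headline</title>
      <meta name="description"
            content="A short page summary that a search engine may reuse as the snippet description.">
    </head>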
Scraping and automated access:
Search engine result pages are protected from automated access by a range of defensive mechanisms and by the terms of service. These result pages are the primary data source for SEO companies, as website placement for competitive keywords has become an important field of business and interest. Google has even used Twitter to warn users against this practice.
The sponsored (creative) results on Google can cost advertisers a large amount of money; a few advertisers pay Google nearly 1,000 USD for each sponsored click.
The process of harvesting search engine result page data is usually called "search engine scraping" or in a general form "web crawling" and generates the data SEO related companies need to evaluate website competitive organic and sponsored rankings. This data can be used to track the position of websites and show the effectiveness of SEO as well as keywords that may need more SEO investment to rank higher.
User intent:
User intent or query intent is the identification and categorization of what a user online intended or wanted when they typed their search terms into an online web search engine for the purpose of search engine optimization or conversion rate optimization. When a user goes online, the goal can be fact-checking, comparison shopping, filling downtime, or other activity.
Types:
Though there are various ways of classifying or naming the categories of the different types of user intent, overall they seem to follow the same clusters. In general, and until the rise of mobile search, there were three very broad categories: informational, transactional, and navigational. However, over time and with the rise of mobile search, other categories have appeared, or existing categories have segmented into more specific ones.
See also:
In general, the earlier (or higher ranked on the search results page), and more frequently a site appears in the search results list, the more visitors it will receive from the search engine's users, and these visitors can be converted into customers.
SEO may target different kinds of search, including image search, local search, video search, academic search, news search and industry-specific vertical search engines.
As an Internet marketing strategy, SEO considers how search engines work, what people search for, the actual search terms or keywords typed into search engines and which search engines are preferred by their targeted audience.
Optimizing a website may involve editing its content, HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines.
Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic. As of May 2015, mobile search has finally surpassed desktop search, Google is developing and pushing mobile search as the future in all of its products and many brands are beginning to take a different approach on their internet strategies.
Google's Role in SEO:
In 1998, Graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.
PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design. Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings.
Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.
By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. In June 2007, The New York Times' Saul Hansell stated Google ranks sites using more than 200 different signals.The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages.
Some SEO practitioners have studied different approaches to search engine optimization, and have shared their personal opinions. Patents related to search engines can provide information to better understand search engines.
In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged in users. In 2008, Bruce Clay said that "ranking is dead" because of personalized search. He opined that it would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.
In 2007, Google announced a campaign against paid links that transfer PageRank. On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the no follow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Google Bot would no longer treat no-followed links in the same way, in order to prevent SEO service providers from using no-follow for PageRank sculpting.
As a result of this change the usage of nofollow leads to evaporation of pagerank. In order to avoid the above, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated Javascript and thus permit PageRank sculpting.
Additionally several solutions have been suggested that include the usage of iframes, Flash and Javascript.
In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.
On June 8, 2010 a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts and other content much sooner after publishing than before, Google caffeine was a change to the way Google updated its index in order to make things show up quicker on Google than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."
Google Instant, real-time-search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.
In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically websites have copied content from one another and benefited in search engine rankings by engaging in this practice, however Google implemented a new system which punishes sites whose content is not unique.
The 2012 Google Penguin attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine, and the 2013 Google Hummingbird update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.
Methods
Getting indexed:
The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search engine indexed pages do not need to be submitted because they are found automatically. Two major directories, the Yahoo Directory and DMOZ, both require manual submission and human editorial review.
Google offers Google Search Console, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links in addition to their URL submission console. Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click; this was discontinued in 2009.
Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.
Preventing crawling:
Main article: Robots Exclusion Standard
To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots.
When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled.
Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.
Increasing prominence:
A variety of methods can increase the prominence of a webpage within the search results. Cross linking between pages of the same website to provide more links to important pages may improve its visibility.
Writing content that includes frequently searched keyword phrase, so as to be relevant to a wide variety of search queries will tend to increase traffic. Updating content so as to keep search engines crawling back frequently can give additional weight to a site.
Adding relevant keywords to a web page's meta data, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL normalization of web pages accessible via multiple urls, using the canonical link element or via 301 redirects can help make sure links to different versions of the URL all count towards the page's link popularity score.
White hat versus black hat techniques:
SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.
White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.
An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see.
White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility, although the two are not identical.
Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.
Another category sometimes used is grey hat SEO. This sits between the black hat and white hat approaches: the methods employed avoid having the site penalized, but they do not aim to produce the best content for users; instead they are focused entirely on improving search engine rankings.
Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review.
One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices. Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.
As a marketing strategy, SEO is not appropriate for every website; depending on the site operator's goals, other Internet marketing strategies, such as paid advertising through pay-per-click (PPC) campaigns, can be more effective.
A successful Internet marketing campaign may also depend upon building high quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.
In November 2015, Google released the full 160-page version of its Search Quality Rating Guidelines to the public, which shows a shift in its focus towards "usefulness" and mobile search.
SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors. Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic.
According to Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day. It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic.
In addition to accessibility in terms of web crawlers (addressed above), user web accessibility has become increasingly important for SEO.
International markets:
Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines' market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches. In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007.
As of 2006, Google had an 85–90% market share in Germany. While there were hundreds of SEO firms in the US at that time, there were only about five in Germany. As of June 2008, the market share of Google in the UK was close to 90%, according to Hitwise. That market share is achieved in a number of countries.
As of 2009, there are only a few large markets where Google is not the leading search engine. In most cases, when Google is not leading in a given market, it is lagging behind a local player. The most notable examples are China, Japan, South Korea, Russia, and the Czech Republic, where Baidu, Yahoo! Japan, Naver, Yandex, and Seznam, respectively, are the market leaders.
Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.
Legal precedents:
On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."
In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. KinderStart's website was removed from Google's index prior to the lawsuit, and the amount of traffic to the site dropped by 70%. On March 16, 2007, the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.
Search Engine Results Page (SERP)
A search engine results page (SERP) is the page displayed by a web search engine in response to a query by a searcher. The main component of the SERP is the listing of results that are returned by the search engine in response to a keyword query, although the page may also contain other results such as advertisements.
The results are of two general types, organic (i.e., retrieved by the search engine's algorithm) and sponsored (i.e., advertisements). The results are normally ranked by relevance to the query. Each result displayed on the SERP normally includes a title, a link that points to the actual page on the Web and a short description showing where the keywords have matched content within the page for organic results. For sponsored results, the advertiser chooses what to display.
Due to the huge number of items that are available or related to the query, there are usually several SERPs in response to a single search query, since the search engine or the user's preferences restrict viewing to a subset of results per page. Each succeeding page tends to contain lower-ranking, less relevant results.
As in the world of traditional print media and its advertising, this enables competitive pricing for page real estate, but here it is compounded by the dynamics of consumer expectations and intent. Unlike static print media, where the content and advertising on every page are the same all of the time for all viewers (even though such hard copy is often localized to some degree, usually geographically by state, metro area, city, or neighborhood), search results vary with the query and the searcher.
Components:
The main components of a SERP are:
- the search query contained within a query box
- the organic SERP results
- sponsored SERP results
However, the SERPs of major search engines, like Google, Yahoo!, and Bing, may include many different types of enhanced results (organic search and sponsored) such as rich snippets, images, maps, definitions, answer boxes, videos or suggested search refinements. A recent study revealed that 97% of queries in Google returned at least one rich feature.
The major search engines visually differentiate specific content types such as images, news, and blogs. Many content types have specialized SERP templates and visual enhancements on the main search results page.
Search query:
Also known as 'user search string', this is the word or set of words that are typed by the user in the search bar of the search engine. The search box is located on all major search engines like Google, Yahoo, and Bing. Users indicate the topic desired based on the keywords they enter into the search box in the search engine.
In the competition between search engines to draw the attention of more users and advertisers, consumer satisfaction has been a driving force in the evolution of the search algorithm applied to better filter the results by relevancy.
Search queries are no longer successful based upon merely finding words that match purely by spelling. Intent and expectations have to be derived to determine whether the appropriate result is a match based upon the broader meanings drawn from context.
And that sense of context has grown from simple matching of words, and then of phrases, to the matching of ideas. And the meanings of those ideas change over time and context.
Successful matching can also be crowd-sourced: what are others currently searching for and clicking on when they enter related keywords? That crowd-sourcing may in turn be focused by one's own social network.
With the advent of portable devices (smartphones, wearables such as watches, and various sensors), ever more contextual dimensions become available for consumers and advertisers to refine and maximize relevancy, using additional factors that may be gleaned, such as:
- a person's relative health,
- wealth,
- and various other status indicators,
- time of day,
- personal habits,
- mobility,
- location,
- weather,
- and nearby services and opportunities, whether urban or suburban, like events, food, recreation, and business.
Social context and crowd sourcing influences can also be pertinent factors.
The move away from keyboard input and the search box to voice access, aside from convenience, also makes other factors available to varying degrees of accuracy and pertinence, like: a person's character, intonation, mood, accent, ethnicity, and even elements overheard from nearby people and the background environment.
Searching is changing from explicit keyword queries ("on TV show w, did x marry y or z?", "election results for candidate x in county y on date z", "final scores for team x in game y on date z") to questions vocalized from a particular time and location ("hey, so who won?"), with the searcher still getting the results that they expect.
Organic results:
Main article: Web search query
Organic SERP listings are the natural listings generated by search engines based on a series of metrics that determine their relevance to the searched term. Webpages that score well on a search engine's algorithmic test appear in this list.
These algorithms are generally based upon factors such as the content of a webpage, the trustworthiness of the website, and external factors such as backlinks, social media, news, advertising, etc.
People tend to view only the first SERP, and the first results on it. Each page of search engine results usually contains 10 organic listings (however, some results pages may have fewer organic listings).
The listings on the first page are the most important ones, because they receive about 91% of the click-through rate (CTR) from a particular search. According to a 2013 study, the CTRs for first-page positions break down as follows:
- TOP 1: 32.5%
- TOP 2: 17.6%
- TOP 3: 11.4%
- TOP 4: 8.1%
- TOP 5: 6.1%
- TOP 6: 4.4%
- TOP 7: 3.5%
- TOP 8: 3.1%
- TOP 9: 2.6%
- TOP 10: 2.4%
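As a rough arithmetic illustration of that distribution, the Python sketch below estimates how many clicks each first-page position might receive for a query. The 10,000-search volume is an assumption made for the example, not a figure from the study.

```python
# Rough illustration: estimated clicks per first-page position for a query
# with an assumed volume of 10,000 searches, using the CTRs listed above.
ctr_by_position = {1: 0.325, 2: 0.176, 3: 0.114, 4: 0.081, 5: 0.061,
                   6: 0.044, 7: 0.035, 8: 0.031, 9: 0.026, 10: 0.024}
searches = 10_000

for position, ctr in ctr_by_position.items():
    print(f"Position {position:2d}: ~{int(searches * ctr):5d} clicks")
```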
Sponsored results:
Main article: Search engine marketing § Paid inclusion
Every major search engine with significant market share accepts paid listings. This form of search engine advertising guarantees that a site will appear in the top results for the keyword terms it targets within a day or less. Paid search listings are also called sponsored listings and/or pay-per-click (PPC) listings.
Rich snippets:
Rich snippets are displayed by Google in the search results page when a website contains content in structured data markup. Structured data markup helps the Google algorithm to index and understand the content better. Google supports rich snippets for the following data types:
- Product – Information about a product, including price, availability, and review ratings.
- Recipe – Recipes that can be displayed in web searches and Recipe View.
- Review – A review of an item such as a restaurant, movie, or store.
- Event – An organized event, such as musical concerts or art festivals, that people may attend at a particular time and place.
- SoftwareApplication – Information about a software app, including its URL, review ratings, and price.
- Video – An online video, including a description and thumbnail.
- News article – A news article, including headline, images, and publisher info.
- Science datasets
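Structured data of this kind is commonly expressed as schema.org markup embedded in the page, for example as JSON-LD. The Python sketch below assembles a hypothetical Product block of that shape; the product name, price, and rating are invented for illustration and are not drawn from any real listing.

```python
# Minimal sketch: a hypothetical schema.org Product block serialized as JSON-LD,
# the kind of structured data markup that can power a rich snippet.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",                      # invented product name
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "89",
    },
}

# On a web page this would be embedded inside a script tag of type
# "application/ld+json".
print(json.dumps(product, indent=2))
```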
Knowledge Graph:
Search engines like Google or Bing have started to expand their data into encyclopedias and other rich sources of information.
Google, for example, calls this sort of information its "Knowledge Graph": if a search query matches, it displays an additional panel on the right-hand side with information drawn from its sources.
Information about hotels, events, flights, places, businesses, people, books and movies, countries, sports teams, architecture, and more can be obtained that way.
Generation:
Major search engines like Google, Yahoo!, and Bing primarily use content contained within the page, and fall back to the metadata tags of a web page, to generate the content that makes up a search snippet. Generally, the HTML title tag will be used as the title of the snippet, while the most relevant or useful contents of the web page (the description tag or page copy) will be used for the description.
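As a rough illustration of that generation step, the Python sketch below pulls the title tag and meta description out of a small, made-up HTML page using the standard library's HTML parser; a real search engine's snippet logic is, of course, far more elaborate.

```python
# Minimal sketch: extracting the <title> and meta description that a search
# snippet is typically built from, using Python's standard-library HTML parser.
from html.parser import HTMLParser

class SnippetSource(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "description":
                self.description = a.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

page = """<html><head><title>Example Page</title>
<meta name="description" content="A short, made-up summary of the page.">
</head><body><p>Body copy...</p></body></html>"""

parser = SnippetSource()
parser.feed(page)
print(parser.title, "-", parser.description)
```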
Scraping and automated access:
Search engine result pages are protected from automated access by a range of defensive mechanisms and by the terms of service. Nonetheless, these result pages are a primary data source for SEO companies, since website placement for competitive keywords has become an important field of business and interest. Google has even used Twitter to warn users against this practice.
The sponsored (creative) results on Google can cost advertisers a large amount of money; a few advertisers pay Google nearly 1,000 USD for each sponsored click.
The process of harvesting search engine result page data is usually called "search engine scraping" or, in a more general form, "web crawling", and it generates the data that SEO-related companies need to evaluate websites' competitive organic and sponsored rankings. This data can be used to track the position of websites and show the effectiveness of SEO, as well as keywords that may need more SEO investment to rank higher.
User intent:
User intent, or query intent, is the identification and categorization of what an online user intended or wanted when typing search terms into a web search engine, for the purposes of search engine optimization or conversion rate optimization. When a user goes online, the goal can be fact-checking, comparison shopping, filling downtime, or another activity.
Types:
Though there are various ways of classifying or naming the categories of the different types of user intent, overall they seem to follow the same clusters. In general, up until the rise and explosion of mobile search, there were three very broad categories: informational, transactional, and navigational. Over time, and with the rise of mobile search, other categories have appeared, or existing categories have segmented into more specific ones.
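A toy illustration of those three broad categories: the Python sketch below sorts a query into navigational, transactional, or informational intent using a few hand-picked keyword cues. The cue lists are assumptions chosen for the example; real intent classification relies on far richer signals.

```python
# Toy sketch: classifying a query as navigational, transactional, or
# informational using a few hand-picked keyword cues (illustrative only).
TRANSACTIONAL_CUES = {"buy", "price", "cheap", "order", "coupon", "deal"}
NAVIGATIONAL_CUES = {"login", "homepage", "www", ".com", "official site"}

def classify_intent(query: str) -> str:
    q = query.lower()
    if any(cue in q for cue in NAVIGATIONAL_CUES):
        return "navigational"
    if any(cue in q for cue in TRANSACTIONAL_CUES):
        return "transactional"
    return "informational"

for q in ["facebook login", "buy running shoes", "how do captchas work"]:
    print(f"{q!r} -> {classify_intent(q)}")
```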
See also:
CAPTCHA
YouTube Video: How Does CAPTCHA Work?
Pictured: Example of CAPTCHA phrase for website visitor to enter to ensure human (not robot) user.
A CAPTCHA (a backronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") is a type of challenge-response test used in computing to determine whether or not the user is human.
The term was coined in 2003 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford.
The most common type of CAPTCHA was first invented in 1997 by Mark D. Lillibridge, Martin Abadi, Krishna Bharat, and Andrei Z. Broder. This form of CAPTCHA requires that the user type the letters of a distorted image, sometimes with the addition of an obscured sequence of letters or digits that appears on the screen.
Because the test is administered by a computer, in contrast to the standard Turing test that is administered by a human, a CAPTCHA is sometimes described as a reverse Turing test. This term is ambiguous because it could also mean a Turing test in which the participants are both attempting to prove they are the computer.
This user identification procedure has received many criticisms, especially from disabled people, but also from other people who feel that their everyday work is slowed down by distorted words that are difficult to read. It takes the average person approximately 10 seconds to solve a typical CAPTCHA.
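As a very rough sketch of the challenge-response idea (not of the distorted-image technique itself), the Python snippet below generates a random string, asks the user to retype it, and accepts the answer only on an exact match. Real CAPTCHAs render the challenge as a distorted image precisely so that software, not just typists, has trouble reading it.

```python
# Very rough sketch of a challenge-response test: generate a random string and
# require the user to retype it. Real CAPTCHAs render the challenge as a
# distorted image so that optical character recognition struggles with it.
import random
import string

def make_challenge(length: int = 6) -> str:
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

challenge = make_challenge()
print(f"Type the following characters: {challenge}")
answer = input("> ").strip().upper()
print("Passed" if answer == challenge else "Failed")
```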
Click on any of the following blue hyperlinks for more about CAPTCHA:
- Origin and inventorship
- Relation to AI
- Accessibility
- Circumvention
- Alternative CAPTCHAs schemas
- See also:
Internet as a Global Phenomenon
Published by MIT Technology Review (by Manuel Castells September 8, 2014)
YouTube Video of Eric Schmidt* & Jared Cohen**: The Impact of Internet and Technology
* -- Eric Schmidt
** -- Jared Cohen
Pictured: The Future of Privacy Forum: "Almost every time we go online, using our computers or mobile devices, each of us produces data in some form. This data may contain only oblique information about who we are and what we are doing, but when enough of it is aggregated, facts about us which we believed were private has the potential to become known to and used by others." Click here to read more.
"The Internet is the decisive technology of the Information Age, and with the explosion of wireless communication in the early twenty-first century, we can say that humankind is now almost entirely connected, albeit with great levels of inequality in bandwidth, efficiency, and price.
People, companies, and institutions feel the depth of this technological change, but the speed and scope of the transformation has triggered all manner of utopian and dystopian perceptions that, when examined closely through methodologically rigorous empirical research, turn out not to be accurate. For instance, media often report that intense use of the Internet increases the risk of isolation, alienation, and withdrawal from society, but available evidence shows that the Internet neither isolates people nor reduces their sociability; it actually increases sociability, civic engagement, and the intensity of family and friendship relationships, in all cultures.
Our current “network society” is a product of the digital revolution and some major sociocultural changes. One of these is the rise of the “Me-centered society,” marked by an increased focus on individual growth and a decline in community understood in terms of space, work, family, and ascription in general. But individuation does not mean isolation, or the end of community. Instead, social relationships are being reconstructed on the basis of individual interests, values, and projects. Community is formed through individuals’ quests for like-minded people in a process that combines online interaction with offline interaction, cyberspace, and the local space.
Globally, time spent on social networking sites surpassed time spent on e-mail in November 2007, and the number of social networking users surpassed the number of e-mail users in July 2009. Today, social networking sites are the preferred platforms for all kinds of activities, both business and personal, and sociability has dramatically increased — but it is a different kind of sociability. Most Facebook users visit the site daily, and they connect on multiple dimensions, but only on the dimensions they choose. The virtual life is becoming more social than the physical life, but it is less a virtual reality than a real virtuality, facilitating real-life work and urban living.
Because people are increasingly at ease in the Web’s multidimensionality, marketers, government, and civil society are migrating massively to the networks people construct by themselves and for themselves. At root, social-networking entrepreneurs are really selling spaces in which people can freely and autonomously construct their lives. Sites that attempt to impede free communication are soon abandoned by many users in favor of friendlier and less restricted spaces.
Perhaps the most telling expression of this new freedom is the Internet’s transformation of sociopolitical practices. Messages no longer flow solely from the few to the many, with little interactivity. Now, messages also flow from the many to the many, multimodally and interactively. By disintermediating government and corporate control of communication, horizontal communication networks have created a new landscape of social and political change.
Networked social movements have been particularly active since 2010, notably in the Arab revolutions against dictatorships and the protests against the management of the financial crisis. Online and particularly wireless communication has helped social movements pose more of a challenge to state power.
The Internet and the Web constitute the technological infrastructure of the global network society, and the understanding of their logic is a key field of research. It is only scholarly research that will enable us to cut through the myths surrounding this digital communication technology that is already a second skin for young people, yet continues to feed the fears and the fantasies of those who are still in charge of a society that they barely understand....
Read the full article here.
Website Builders
YouTube Video: Free website builders: a comparison of the best ones
Pictured: Logos of three prominent website builders from L-R: Silex, WordPress, and Weebly
Website builders are tools that typically allow the construction of websites without manual code editing. They fall into two categories:
- online proprietary tools provided by web hosting companies. These are typically intended for users to build their private site. Some companies allow the site owner to install alternative tools (commercial or open source) - the more complex of these may also be described as Content Management Systems;
- offline software which runs on a computer, creating pages and which can then publish these pages on any host. (These are often considered to be "website design software" rather than "website builders".)
Click on any of the following hyperlinks to read more: note that the link "Website Builder Comparison" includes a comprehensive breakdown of website builder company capabilities/limitations.
- History
- Online vs. offline
- List of notable online website builders
- Example for offline website builder software
- Website builder comparison
- HTML editor
- Comparison of HTML editors
- Web design
World Wide Web
YouTube Video: What is the world wide web? - Twila Camp (TED-Ed)
Pictured: Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks
The World Wide Web (WWW) is an information space where documents and other web resources are identified by URLs, interlinked by hypertext links, and can be accessed via the Internet.
The World Wide Web was invented by English scientist Tim Berners-Lee in 1989. He wrote the first web browser in 1990 while employed at CERN in Switzerland.
It has become known simply as the Web. When used attributively (as in web page, web browser, website, web server, web traffic, web search, web user, web technology, etc.) it is invariably written in lower case. Otherwise the initial capital is often retained (‘the Web’), but lower case is becoming increasingly common (‘the web’).
The World Wide Web was central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet.
Web pages are primarily text documents formatted and annotated with Hypertext Markup Language (HTML). In addition to formatted text, web pages may contain images, video, and software components that are rendered in the user's web browser as coherent pages of multimedia content.
Embedded hyperlinks permit users to navigate between web pages. Multiple web pages with a common theme, a common domain name, or both, may be called a website. Website content can largely be provided by the publisher, or interactive where users contribute content or the content depends upon the user or their actions. Websites may be mostly informative, primarily for entertainment, or largely for commercial purposes.
For amplification, click on any of the following:
- History
- Function
- Web security
- Privacy
- Standards
- Accessibility
- Internationalization
- Statistics
- Speed issues
- Web caching
- See also:
Internet Radio including a List of Internet Radio Stations
YouTube Video: Demi Lovato & Brad Paisley - Stone Cold (Live At The 2016 iHeartRadio Music Awards)
Click here for a List of Internet Radio Stations
Internet radio (also web radio, net radio, streaming radio, e-radio, online radio, webcasting) is an audio service transmitted via the Internet. Broadcasting on the Internet is usually referred to as webcasting since it is not transmitted broadly through wireless means.
Internet radio involves streaming media, presenting listeners with a continuous stream of audio that typically cannot be paused or replayed, much like traditional broadcast media; in this respect, it is distinct from on-demand file serving. Internet radio is also distinct from podcasting, which involves downloading rather than streaming.
Internet radio services offer news, sports, talk, and various genres of music—every format that is available on traditional broadcast radio stations.
Many Internet radio services are associated with a corresponding traditional (terrestrial) radio station or radio network, although low start-up and ongoing costs have allowed a substantial proliferation of independent Internet-only radio stations.
Click on any of the following blue hyperlinks for additional information about Internet Radio:
- Internet radio technology
- Popularity
- History including US royalty controversy
- See also:
Internet Troll and other Internet Slang
YouTube Video: Top 10 Types of Internet Trolls
Pictured: The advice to ignore rather than engage with a troll is sometimes phrased as "Please do not feed the trolls."
Click here for more about Internet Slang.
Below, we cover the topic "Internet Troll":
In Internet slang, a troll is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion, often for their own amusement.
This sense of the word "troll" and its associated verb trolling are associated with Internet discourse, but have been used more widely.
Media attention in recent years has equated trolling with online harassment. For example, mass media has used troll to describe "a person who defaces Internet tribute sites with the aim of causing grief to families."
In addition, depictions of trolling have been included in popular fictional works such as the HBO television program The Newsroom, in which a main character encounters harassing individuals online and tries to infiltrate their circles by posting negative sexual comments himself.
Click on any of the following blue hyperlinks for more about Internet Troll:
- Usage
- Origin and etymology including In other languages
- Trolling, identity, and anonymity
- Corporate, political, and special interest sponsored trolls
- Psychological characteristics
- Concern troll
- Troll sites
- Media coverage and controversy
- Examples
- See also:
Internet in the United States including a PewResearchCenter* Survey of Technology Device Ownership: 2015.
* -- PewResearchCenter
YouTube Video: How to Compare Internet Service Providers
Pictured: ISP choices available to Americans -- Reflecting a monopoly hold on U.S. Internet Users by licensed cable company providers by region
The Internet in the United States grew out of the ARPANET, a network sponsored by the Advanced Research Projects Agency of the U.S. Department of Defense during the 1960s. The Internet in the United States in turn provided the foundation for the world-wide Internet of today.
For more details on this topic, see History of the Internet. Internet access in the United States is largely provided by the private sector and is available in a variety of forms, using a variety of technologies, at a wide range of speeds and costs. In 2014, 87.4% of Americans were using the Internet, which ranks the U.S. 18th out of 211 countries in the world.
A large number of people in the US have little or no choice at all on who provides their internet access. The country suffers from a severe lack of competition in the broadband business. Nearly one-third of households in the United States have either no choice for home broadband Internet service, or no options at all.
Internet top-level domain names specific to the U.S. include:
- .us,
- .edu,
- .gov,
- .mil,
- .as (American Samoa),
- .gu (Guam),
- .mp (Northern Mariana Islands),
- .pr (Puerto Rico),
- and .vi (U.S. Virgin Islands).
Many U.S.-based organizations and individuals also use generic top-level domains (.com, .net, .org, .name, ...).
Click on any of the following blue hyperlinks for amplification:
- Overview
- Broadband providers
- Government policy and programs
- See also:
- Satellite Internet access
- Broadband mapping in the United States
- Communications Assistance for Law Enforcement Act (CALEA)
- Communications in the United States
- Internet in American Samoa
- Internet in Guam
- Internet in Puerto Rico
- Internet in the United States Virgin Islands
- Mass surveillance in the United States
- Municipal broadband
- National broadband plans from around the world
Cyber Culture
YouTube: Cyberculture - what is it really? [HD]
Cyberculture or computer culture is the culture that has emerged, or is emerging, from the use of computer networks for communication, entertainment, and business.
Internet culture is also the study of various social phenomena associated with the Internet and other new forms of network communication, such as:
- online communities,
- online multi-player gaming,
- wearable computing,
- social gaming,
- social media,
- mobile apps,
- augmented reality,
- and texting,
- and includes issues related to identity, privacy, and network formation.
Since the boundaries of cyberculture are difficult to define, the term is used flexibly, and its application to specific circumstances can be controversial. It generally refers at least to the cultures of virtual communities, but extends to a wide range of cultural issues relating to "cyber-topics", e.g. cybernetics, and the perceived or predicted cyborgization of the human body and human society itself. It can also embrace associated intellectual and cultural movements, such as cyborg theory and cyberpunk. The term often incorporates an implicit anticipation of the future.
The Oxford English Dictionary lists the earliest usage of the term "cyberculture" in 1963, when A.M. Hilton wrote the following, "In the era of cyberculture, all the plows pull themselves and the fried chickens fly right onto our plates."
This example, and all others, up through 1995 are used to support the definition of cyberculture as "the social conditions brought about by automation and computerization."
The American Heritage Dictionary broadens the sense in which "cyberculture" is used by defining it as, "The culture arising from the use of computer networks, as for communication, entertainment, work, and business".
However, what both the OED and the American Heritage Dictionary miss is that cyberculture is the culture within and among users of computer networks. This cyberculture may be purely an online culture or it may span both virtual and physical worlds.
This is to say, that cyberculture is a culture endemic to online communities; it is not just the culture that results from computer use, but culture that is directly mediated by the computer. Another way to envision cyberculture is as the electronically enabled linkage of like-minded, but potentially geographically disparate (or physically disabled and hence less mobile) persons.
Cyberculture is a wide social and cultural movement closely linked to advanced information science and information technology, their emergence, development and rise to social and cultural prominence between the 1960s and the 1990s.
Cyberculture was influenced at its genesis by those early users of the internet, frequently including the architects of the original project. These individuals were often guided in their actions by the hacker ethic. While early cyberculture was based on a small cultural sample, and its ideals, the modern cyberculture is a much more diverse group of users and the ideals that they espouse.
Numerous specific concepts of cyberculture have been formulated by such authors as Lev Manovich, Arturo Escobar and Fred Forest.
However, most of these concepts concentrate only on certain aspects, and they do not cover these in great detail. Some authors aim to achieve a more comprehensive understanding, distinguishing between early and contemporary cyberculture (Jakub Macek), or between cyberculture as the cultural context of information technology and cyberculture (more specifically, cyberculture studies) as "a particular approach to the study of the 'culture + technology' complex" (David Lister et al.).
Manifestations of cyberculture include various human interactions mediated by computer networks. They can be activities, pursuits, games, places and metaphors, and include a diverse base of applications. Some are supported by specialized software and others work on commonly accepted web protocols. Examples include but are not limited to:
Click on any of the following blue hyperlinks to learn more about Cyberculture:
Internet Television or (Online Television) including a List of Internet Television Providers
YouTube Video: CBS All Access For Roku demonstration Video
Pictured: Example of online television by Watch USA Online*
* -- "Watch online to United States TV stations including KCAL 9, WFAA-TV Channel 8,WWE ,USA NETWORK,STAR MOVIES,SYFY,SHOW TIME,NBA,MTV,HLN,NEWS,HBO,FOX SPORTS,FOX NEWS,DISCOVERY,AXN,ABC,AMC,A&E,ABC FAMILY, WBAL-TV 11, FOX 6, WSVN 7 and many more..."
Click here for a List of Internet Television Providers in the United States.
Internet television (or online television) is the digital distribution of television content, such as TV shows, via the public Internet (which also carries other types of data), as opposed to dedicated terrestrial television via an over the air aerial system, cable television, and/or satellite television systems. It is also sometimes called web television, though this phrase is also used to describe the genre of TV shows broadcast only online.
Internet television is a type of over-the-top content (OTT content). "Over-the-top" (OTT) is the delivery of audio, video, and other media over the Internet without the involvement of a multiple-system operator (such as a cable television provider) in the control or distribution of the content. It has several elements:
Content Provider:
Examples include:
- An independent service, such as:
- Netflix or
- Amazon Video,
- Hotstar,
- Google Play Movies,
- myTV (Arabic),
- Sling TV,
- Sony LIV,
- Viewster, or
- Qello (which specializes in concerts).
- A service owned by a traditional terrestrial, cable, or satellite provider, such as DittoTV (owned by Dish TV)
- An international movies brand, such as Eros International or Eros Now
- A service owned by a traditional film or television network, television channel, or content conglomerate, such as BBC Three since 17 Jan 2016, CBSN, CNNGo, HBO Now, Now TV (UK) (owned by Sky), PlayStation Vue (owned by Sony), or Hulu (a joint venture)
- A peer-to-peer video hosting service such as YouTube, Vimeo, or Crunchyroll
- Combination services like TV UOL which combines a Brazilian Internet-only TV station with user-uploaded content, or Crackle, which combines content owned by Sony Pictures with user uploaded content
- Audio-only services like Spotify, though not "Internet television" per se, are sometimes accessible through video-capable devices in the same way
Internet:
The public Internet, which is used for transmission from the streaming servers to the consumer end-user.
Receiver:
The receiver must have an Internet connection, typically by Wi-Fi or Ethernet, and could be:
- A web browser running on a personal computer (typically controlled by computer mouse and keyboard) or mobile device, such as Firefox, Google Chrome, or Internet Explorer
- A mobile app running on a smartphone or tablet computer
- A dedicated digital media player, typically with remote control. These can take the form of a small box, or even a stick that plugs directly into an HDMI port. Examples include Roku, Amazon Fire, Apple TV, Google TV, Boxee, and WD TV. Sometimes these boxes allow streaming of content from the local network or storage drive, typically providing an indirect connection between a television and computer or USB stick
- A SmartTV which has Internet capability and built-in software accessed with the remote control
- A Video Game Console connected to the internet such as the Xbox One and PS4.
- A DVD player, Blu-ray player with Internet capabilities in addition to its primary function of playing content from physical discs
- A set-top box or digital video recorder provided by the cable or satellite company or an independent party like TiVo, which has Internet capabilities in addition to its primary function of receiving and recording programming from the non-Internet cable or satellite connection
Not all receiver devices can access all content providers. Most have websites that allow viewing of content in a web browser, but sometimes this is not done due to digital rights management concerns or restrictions. While a web browser has access to any website, some consumers find it inconvenient to control and interact with content with a mouse and keyboard, inconvenient to connect a computer to their television, or confusing.
Many providers have mobile software applications ("apps") dedicated to receive only their own content. Manufacturers of SmartTVs, boxes, sticks, and players must decide which providers to support, typically based either on popularity, common corporate ownership, or receiving payment from the provider.
Display Device:
A display device, which could be:
- A television set or video projector linked to the receiver with a video connector (typically HDMI)
- A smart TV screen
- A computer monitor
- The built-in display of a smartphone or tablet computer.
Comparison with Internet Protocol television (IPTV)
As described above, "Internet television" is "over-the-top technology" (OTT). It is delivered through the open, unmanaged Internet, with the "last-mile" telecom company acting only as the Internet service provider. Both OTT and IPTV use the Internet protocol suite over a packet-switched network to transmit data, but IPTV operates in a closed system - a dedicated, managed network controlled by the local cable, satellite, telephone, or fiber company.
In its simplest form, IPTV simply replaces traditional circuit switched analog or digital television channels with digital channels which happen to use packet-switched transmission. In both the old and new systems, subscribers have set-top boxes or other customer-premises equipment that talks directly over company-owned or dedicated leased lines with central-office servers. Packets never travel over the public Internet, so the television provider can guarantee enough local bandwidth for each customer's needs.
The Internet Protocol is a cheap, standardized way to provide two-way communication and also provide different data (e.g., TV show files) to different customers. This supports DVR-like features for time shifting television, for example to catch up on a TV show that was broadcast hours or days ago, or to replay the current TV show from its beginning. It also supports video on demand - browsing a catalog of videos (such as movies or syndicated television shows) which might be unrelated to the company's scheduled broadcasts. IPTV has an ongoing standardization process (for example, at the European Telecommunications Standards Institute).
Click on any of the following for further amplification about Internet Television:
- Comparison tables
- Technologies used
- Stream quality
- Usage
- Market competitors
- Control
- Archives
- Broadcasting rights
- Profits and costs
- Overview of platforms and availability
- See also:
- Comparison of streaming media systems
- Comparison of video hosting services
- Content delivery network
- Digital television
- Interactive television
- Internet radio
- Home theatre PC
- List of free television software
- List of Internet television providers
- List of streaming media systems
- Multicast
- P2PTV
- Protection of Broadcasts and Broadcasting Organizations Treaty
- Push technology
- Smart TV
- Software as a service
- Television network
- Video advertising
- Web-to-TV
- Media Psychology
- Webcast
- WPIX, Inc. v. ivi, Inc.
How Web 2.0 Has Changed the Way We Use the Internet
YouTube Video The importance of Web 2.0 for business by Cisco Technologies*
* -- Willie Oosthuysen, Director Technical Operations at Cisco Systems, discusses the importance of web2.0 technologies for businesses along with a view on the future of this technology.
Pictured: Some of the better known Websites using Web 2.0 Technology
Web 2.0 describes World Wide Web websites that emphasize user-generated content, usability (ease of use, even by non-experts), and interoperability (this means that a website can work well with other products, systems and devices) for end users.
The term was popularized by Tim O'Reilly and Dale Dougherty at the O'Reilly Media Web 2.0 Conference in late 2004, though it was coined by Darcy DiNucci in 1999.
Web 2.0 does not refer to an update to any technical specification, but to changes in the way Web pages are designed and used.
A Web 2.0 website may allow users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to the first generation of Web 1.0-era websites where people were limited to the passive viewing of content.
In addition, in contrast to Web 1.0-era websites, in which the text was often unlinked, users of Web 2.0 websites can often "click" on words in the text to access additional content on the website or be linked to an external website.
Examples of Web 2.0 include social networking sites and social media sites (e.g., Facebook), blogs, wikis, folksonomies ("tagging" keywords on websites and links), video sharing sites (e.g., YouTube), hosted services, Web applications ("apps"), collaborative consumption platforms, and mashup applications, which combine content or functionality from multiple sources into a new service (for example, blending the digital audio from multiple songs to create new music).
Whether Web 2.0 is substantively different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon. His original vision of the Web was "a collaborative medium, a place where we [could] all meet and read and write". On the other hand, the term Semantic Web (sometimes referred to as Web 3.0) was coined by Berners-Lee to refer to a web of data that can be processed by machines.
Click on any of the following blue hyperlinks for more about Web 2.0:
- History
- "Web 1.0" including Characteristics
- Web 2.0
- Characteristics including Comparison with Web 1.0
- Technologies
- Concepts
- Usage and Marketing
- Education
- Web-based applications and desktops
- Distribution of media
- Criticism
- Trademark
- See also:
- Cloud computing
- Collective intelligence
- Connectivity of social media
- Crowd computing
- Enterprise social software
- Mass collaboration
- New media
- Office suite
- Open source governance
- Privacy issues of social networking sites
- Social commerce
- Social shopping
- Web 2.0 for development (web2fordev)
- You (Time Person of the Year)
- Libraries in Second Life
- List of free software for Web 2.0 Services
- Cute cat theory of digital activism
- OSW3
- Application domains:
- Sci-Mate
- Business 2.0
- E-learning 2.0
- e-Government (Government 2.0)
- Health 2.0
- Science 2.0
Social Media including a List of the Most Popular Social Media Websites
YouTube Video: The Best Way to Share Videos On Facebook
Pictured: Images of Logos for Some of the Most Popular Social Media Websites
Social media are computer-mediated technologies that allow the creation and sharing of information, ideas, career interests and other forms of expression via virtual communities and networks. The variety of stand-alone and built-in social media services currently available introduces challenges of definition. However, there are some common features:
- Social media are interactive Web 2.0 Internet-based applications.
- User-generated content, such as text posts or comments, digital photos or videos, and data generated through all online interactions, is the lifeblood of social media.
- Users create service-specific profiles for the website or app that are designed and maintained by the social media organization.
- Social media facilitate the development of online social networks by connecting a user's profile with those of other individuals and/or groups.
Social media use web-based technologies, desktop computers and mobile technologies (e.g., smartphones and tablet computers) to create highly interactive platforms through which individuals, communities and organizations can share, co-create, discuss, and modify user-generated content or pre-made content posted online.
They introduce substantial and pervasive changes to communication between businesses, organizations, communities and individuals. Social media changes the way individuals and large organizations communicate.
These changes are the focus of the emerging field of technoself studies. One survey reported that 84 percent of American adolescents have a Facebook account. Over 60% of 13 to 17-year-olds have at least one profile on social media, with many spending more than two hours a day on social networking sites.
According to Nielsen, Internet users continue to spend more time on social media sites than on any other type of site. At the same time, the total time spent on social media sites in the U.S. across PCs as well as on mobile devices increased by 99 percent to 121 billion minutes in July 2012 compared to 66 billion minutes in July 2011.
For content contributors, the benefits of participating in social media have gone beyond simply social sharing to building reputation and bringing in career opportunities and monetary income.
Social media differ from paper-based or traditional electronic media such as TV broadcasting in many ways, including quality, reach, frequency, usability, immediacy, and permanence. Social media operate in a dialogic transmission system (many sources to many receivers).
This is in contrast to traditional media which operates under a monologic transmission model (one source to many receivers), such as a paper newspaper which is delivered to many subscribers. Some of the most popular social media websites are:
- Facebook (and its associated Facebook Messenger),
- WhatsApp,
- Tumblr,
- Instagram,
- Twitter,
- Baidu Tieba,
- Pinterest,
- LinkedIn,
- Gab,
- Google+,
- YouTube,
- Viber,
- Snapchat,
- and WeChat.
These social media websites have more than 100 million registered users.
Observers have noted a range of positive and negative impacts from social media use. Social media can help to improve individuals' sense of connectedness with real and/or online communities and social media can be an effective communications (or marketing) tool for corporations, entrepreneurs, nonprofit organizations, including advocacy groups and political parties and governments.
At the same time, concerns have been raised about possible links between heavy social media use and depression, and even the issues of cyberbullying, online harassment and "trolling".
Currently, about half of young adults report having been cyberbullied, and of those, 20 percent say they are cyberbullied on a regular basis. Another survey, carried out among American 7th-grade students and framed around the Precaution Adoption Process Model, found that 69 percent of them claimed to have experienced cyberbullying, and respondents also said it is worse than face-to-face bullying.
Click on any of the following blue hyperlinks to more information about Social Media Websites:
- Definition and classification
- Distinction from other media
- Monitoring, tracking and analysis
- Building "social authority" and vanity
- Data mining
- Global usage
- Criticisms
- Negative effects
- Positive effects
- Impact on job seeking
- College admission
- Political effects
- Patents
- In the classroom
- Advertising including Tweets containing advertising
- Censorship incidents
- Effects on youth communication
- See also:
- Arab Spring, where social media played a defining role
- Citizen media
- Coke Zero Facial Profiler
- Connectivism (learning theory)
- Connectivity of social media
- Culture jamming
- Human impact of Internet use
- Internet and political revolutions
- List of photo sharing websites
- List of video sharing websites
- List of social networking websites
- Media psychology
- Metcalfe's law
- MMORPG
- Networked learning
- New media
- Online presence management
- Online research community
- Participatory media
- Social media marketing
- Social media mining
- Social media optimization
- Social media surgery
Wikipedia including (1) Editorial Oversight and Control to Assure Accuracy and (2) List of Free Online Resources
YouTube Video: The History of Wikipedia (in two minutes)
YouTube Video: This is Wikipedia
Pictured: Wikipedia Android App on Google Play
Wikipedia is a free online encyclopedia that aims to allow anyone to edit articles. Wikipedia is the largest and most popular general reference work on the Internet and is ranked among the ten most popular websites. Wikipedia is owned by the nonprofit Wikimedia Foundation.
Wikipedia was launched on January 15, 2001, by Jimmy Wales and Larry Sanger. Sanger coined its name, a portmanteau of wiki and encyclopedia. There was only the English language version initially, but it quickly developed similar versions in other languages, which differ in content and in editing practices.
With 5,332,436 articles, the English Wikipedia is the largest of the more than 290 Wikipedia encyclopedias. Overall, Wikipedia consists of more than 40 million articles in more than 250 different languages and, as of February 2014, it had 18 billion page views and nearly 500 million unique visitors each month.
In 2005, Nature published a peer review comparing 42 science articles from Encyclopædia Britannica and Wikipedia, and found that Wikipedia's level of accuracy approached Encyclopædia Britannica's. Criticism of Wikipedia includes claims that it exhibits systemic bias, presents a mixture of "truths, half truths, and some falsehoods", and that, in controversial topics, it is subject to manipulation and spin.
Click on any of the following blue hyperlinks to learn more about Wikipedia:
- History
- Openness
- Policies and laws including Content policies and guidelines
- Governance
- Community including Diversity
- Language editions
- Critical reception
- Operation
- Access to content
- Cultural impact
- Related projects
- See also:
- Outline of Wikipedia – guide to the subject of Wikipedia presented as a tree structured list of its subtopics; for an outline of the contents of Wikipedia, see Portal: Contents/Outlines
- Conflict-of-interest editing on Wikipedia
- Democratization of knowledge
- Interpedia, an early proposal for a collaborative Internet encyclopedia
- List of Internet encyclopedias
- Network effect
- Print Wikipedia – an art project, in cooperation with the Wikimedia Foundation, to visualize how big Wikipedia is
- QRpedia – multilingual, mobile interface to Wikipedia
- Wikipedia Review
Wikipedia; Editorial Oversight and Control:
This page summarizes the various processes and structures by which Wikipedia articles and their editing are editorially controlled, and the processes which are built into that model to ensure quality of article content.
Rather than one sole form of control, Wikipedia relies upon multiple approaches, and these overlap to provide more robust coverage and resilience.
Click on any of the following blue hyperlinks for additional information about Wikipedia Editorial Oversight and Control:
- Overview of editorial structure
- Wikipedia's editorial control process
- Types of control
- Effects of control systems
- Types of access
- Individual editors' power to control and correct poor editorship
- Editorial quality review and article improvement
- Examples
For dealing with vandalism see Wikipedia:Vandalism.
For editing Wikipedia yourself to fix obvious vandalism and errors, see Wikipedia:Contributing to Wikipedia.
- About Wikipedia
- Wikipedia:Quality control
- Researching with Wikipedia and Reliability of Wikipedia
- Wikipedia:User access levels
General editorial groups:
- Category:Wikipedians in the Cleanup Taskforce
- Category:Wikipedian new page patrollers
- Category:Wikipedians in the Counter Vandalism Unit (c. 2,600 editors)
- Category:Wikipedian recent changes patrollers (c. 3,000 editors)
Specialized working groups:
- Category:WikiProjects - index of subject-based editorial taskforces ("WikiProjects") on Wikipedia
- Category:Wikipedians by WikiProject - members of the various specialist subject area taskforces
- Editorial assistance software coded for Wikipedia: Category:Wikipedia bots
Philosophy and broader structure:
Click on any of the following blue hyperlinks for more about
Wikipedia's List of Free Online Resources:
- General resources and link lists
- Newspapers and news agencies
- Biographies
- Information and library science
- Philosophy
- Science, mathematics, medicine & nature
- Social sciences
- Sports
- See also:
- Wikipedia:Advanced source searching
- Wikipedia:Reliable sources/Noticeboard
- Wikipedia:Verifiability
- Wikipedia:WikiProject Resource Exchange - a project where Wikipedians offer to search in their resources for the reference that you are looking for
- Wiktionary:Wiktionary:Other dictionaries on the Web
Blogs including a List of Blogs and a Glossary of Blogging
YouTube Video: How to Make a Blog - Step by Step - 2015
YouTube Video by Andrew Sullivan* on quitting blogging: "It was killing me."
* -- Andrew Sullivan
For an alphanumerical List of Blogs, click here.
For a Glossary of Blogging Terms, click here.
A blog (a truncation of the expression weblog) is a discussion or informational website published on the World Wide Web consisting of discrete, often informal diary-style text entries ("posts").
Posts are typically displayed in reverse chronological order, so that the most recent post appears first, at the top of the web page. Until 2009, blogs were usually the work of a single individual, occasionally of a small group, and often covered a single subject or topic.
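As a small illustration of the reverse-chronological ordering described above, here is a minimal sketch (with hypothetical post data) of how a blog engine might sort entries newest-first before rendering them:

```python
from datetime import date

# Hypothetical posts; a real blog engine stores far more metadata per entry.
posts = [
    {"title": "Hello world", "published": date(2015, 3, 1)},
    {"title": "Site redesign", "published": date(2016, 7, 12)},
    {"title": "Year in review", "published": date(2015, 12, 31)},
]

# Reverse chronological order: most recent post first, as on a typical blog page.
for post in sorted(posts, key=lambda p: p["published"], reverse=True):
    print(post["published"].isoformat(), "-", post["title"])
```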
In the 2010s, "multi-author blogs" (MABs) have developed, with posts written by large numbers of authors and sometimes professionally edited. MABs from newspapers, other media outlets, universities, think tanks, advocacy groups, and similar institutions account for an increasing quantity of blog traffic. The rise of Twitter and other "microblogging" systems helps integrate MABs and single-author blogs into the news media. Blog can also be used as a verb, meaning to maintain or add content to a blog.
The emergence and growth of blogs in the late 1990s coincided with the advent of web publishing tools that facilitated the posting of content by non-technical users who did not have much experience with HTML or computer programming. Previously, a knowledge of such technologies as HTML and File Transfer Protocol had been required to publish content on the Web, and as such, early Web users tended to be hackers and computer enthusiasts.
In the 2010s, the majority of blogs are interactive Web 2.0 websites, allowing visitors to leave online comments and even message each other via GUI widgets on the blogs, and it is this interactivity that distinguishes them from other static websites.
In that sense, blogging can be seen as a form of social networking service. Indeed, bloggers do not only produce content to post on their blogs, but also build social relations with their readers and other bloggers.
However, there are high-readership blogs which do not allow comments.
Many blogs provide commentary on a particular subject or topic, ranging from politics to sports. Others function as more personal online diaries, and others function more as online brand advertising of a particular individual or company. A typical blog combines text, digital images, and links to other blogs, web pages, and other media related to its topic.
The ability of readers to leave comments in an interactive format is an important contribution to the popularity of many blogs. However, blog owners or authors need to moderate and filter online comments to remove hate speech or other offensive content. Most blogs are primarily textual, although some focus on art (art blogs), photographs (photoblogs), videos (video blogs or "vlogs"), music (MP3 blogs), and audio (podcasts). Microblogging is another type of blogging, featuring very short posts.
In education, blogs can be used as instructional resources; these blogs are referred to as edublogs. As of 16 February 2011, there were over 156 million public blogs in existence. As of 20 February 2014, there were around 172 million Tumblr and 75.8 million WordPress blogs in existence worldwide.
According to critics and other bloggers, Blogger is the most popular blogging service used today; however, Blogger does not offer public statistics. Technorati listed 1.3 million blogs as of February 22, 2014.
Click on any of the following blue hyperlinks for additional information about Blogs:
- History
- Types
- Community and cataloging
- Popularity
- Blurring with the mass media
- Consumer-generated advertising
- Legal and social consequences
- See also:
- Bitter Lawyer
- Blog award
- BROG
- Chat room
- Citizen journalism
- Collaborative blog
- Comparison of free blog hosting services
- Customer engagement
- Interactive journalism
- Internet think tank
- Israblog
- Bernando LaPallo
- List of family-and-homemaking blogs
- Mass collaboration
- Prison blogs
- Sideblog
- Social blogging
- Webmaster
- Web template system
- Web traffic
Voice over Internet Protocol (VoIP)
YouTube Video: Cell Phone Facts : How Does Vonage Phone Service Work?
Voice over Internet Protocol (Voice over IP, VoIP and IP telephony) is a methodology and group of technologies for the delivery of voice communications and multimedia sessions over Internet Protocol (IP) networks, such as the Internet.
The terms Internet telephony, broadband telephony, and broadband phone service specifically refer to the provisioning of communications services (voice, fax, SMS, voice-messaging) over the public Internet, rather than via the public switched telephone network (PSTN).
The steps and principles involved in originating VoIP telephone calls are similar to traditional digital telephony and involve signaling, channel setup, digitization of the analog voice signals, and encoding. Instead of being transmitted over a circuit-switched network, however, the digital information is packetized, and transmission occurs as IP packets over a packet-switched network.
VoIP systems transport audio streams using special media delivery protocols that encode audio and video with audio and video codecs. Various codecs exist that optimize the media stream based on application requirements and network bandwidth; some implementations rely on narrowband and compressed speech, while others support high-fidelity stereo codecs.
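As a concrete illustration of how a media delivery protocol carries encoded audio, the sketch below wraps one 20 ms G.711 frame in a simplified RTP header (RTP, covered under "RTP audio video profile" in the links below, is the media transport used by most VoIP systems). This is a minimal, illustrative sketch following the RFC 3550 header layout, not a production implementation; the frame contents here are placeholder zeros.

```python
import struct

def make_rtp_packet(payload: bytes, seq: int, timestamp: int,
                    ssrc: int = 0x1234ABCD, payload_type: int = 0) -> bytes:
    """Prepend a 12-byte RTP header (RFC 3550) to an encoded audio frame.

    payload_type 0 is PCMU (G.711 mu-law) in the static payload-type table.
    """
    version = 2
    first_byte = version << 6          # no padding, no extension, zero CSRCs
    second_byte = payload_type & 0x7F  # marker bit clear
    header = struct.pack("!BBHII", first_byte, second_byte,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

# A 20 ms G.711 frame at 8 kHz is 160 one-byte samples.
frame = bytes(160)                                   # placeholder audio payload
packet = make_rtp_packet(frame, seq=1, timestamp=160)
print(len(packet))                                   # 172 bytes: 12-byte header + 160-byte frame
```

In a real call each packet would then be handed to a UDP socket, with the sequence number and timestamp advancing by 1 and 160 respectively for every successive 20 ms frame.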
Popular codecs include the μ-law and A-law versions of G.711; G.722; iLBC, a popular open-source voice codec; G.729, which uses only 8 kbit/s in each direction; and many others.
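To show what a companding codec such as G.711 μ-law does, here is a minimal sketch of the continuous μ-law curve. Real G.711 encoders use a segmented, piecewise-linear approximation of this curve and emit 8-bit codes; the sketch only illustrates the compression idea.

```python
import math

MU = 255  # mu-law parameter used by G.711 in North America and Japan

def mu_law_compress(x: float) -> float:
    """Map a normalized sample x in [-1.0, 1.0] into the compressed domain."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y: float) -> float:
    """Inverse mapping back to the linear domain."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Quiet samples are boosted relative to loud ones, so after 8-bit quantization
# low-level speech keeps more resolution than it would with linear coding.
for sample in (0.01, 0.1, 0.5, 1.0):
    print(f"{sample:5.2f} -> {mu_law_compress(sample):.3f}")
```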
Early providers of voice-over-IP services offered business models and technical solutions that mirrored the architecture of the legacy telephone network.
Second-generation providers, such as Skype, have built closed networks for private user bases, offering the benefit of free calls and convenience while potentially charging for access to other communication networks, such as the PSTN. This has limited the freedom of users to mix-and-match third-party hardware and software.
Third-generation providers, such as Google Talk, have adopted the concept of federated VoIP—which is a departure from the architecture of the legacy networks. These solutions typically allow dynamic interconnection between users on any two domains on the Internet when a user wishes to place a call.
In addition to VoIP phones, VoIP is available on many smartphones, personal computers, and on Internet access devices. Calls and SMS text messages may be sent over 3G/4G or Wi-Fi.
Click on any of the following blue hyperlinks for further information about Voice over Internet Protocol (VoIP):
- Protocols
- Adoption
- Quality of service
- VoIP performance metrics
- PSTN integration
- Fax support
- Power requirements
- Security
- Caller ID
- Compatibility with traditional analog telephone sets
- Support for other telephony devices
- Operational cost
- Regulatory and legal issues in the United States
- History including Milestones
- See also:
- Audio over IP
- Communications Assistance For Law Enforcement Act
- Comparison of audio network protocols
- Comparison of VoIP software
- Differentiated services
- High bit rate audio video over Internet Protocol
- Integrated services
- Internet fax
- IP Multimedia Subsystem
- List of VoIP companies
- Mobile VoIP
- Network Voice Protocol
- RTP audio video profile
- SIP Trunking
- UNIStim
- Voice VPN
- VoiceXML
- VoIP recording
Online Dating Websites, including a Comparison (as well as a Report about Online Dating Sites by Consumer Reports, February 2017 Issue)
YouTube Video about Online Dating - A Funny Online Dating Video
For a comparison of Online Dating Websites, click here.
Click here to read the magazine article "Match Me if You Can" in the February, 2017 Issue of Consumer Reports
Online dating or Internet dating is a personal introductory system where individuals can find and contact each other over the Internet to arrange a date, usually with the objective of developing a personal, romantic, or sexual relationship.
Online dating services usually provide non-moderated matchmaking over the Internet, through the use of personal computers or cell phones. Users of an online dating service usually provide personal information, which enables them to search the service provider's database for other individuals. Members search using criteria that other members have set in their profiles, such as age range, gender, and location.
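As a rough sketch of the database search described above (the names and fields here are hypothetical), matching members against criteria such as age range, gender, and location amounts to a simple filter:

```python
# Hypothetical member profiles stored by the service provider.
profiles = [
    {"name": "A", "age": 29, "gender": "F", "city": "Austin"},
    {"name": "B", "age": 41, "gender": "M", "city": "Denver"},
    {"name": "C", "age": 33, "gender": "F", "city": "Austin"},
]

def search(profiles, min_age, max_age, gender, city):
    """Return profiles matching the searching member's criteria."""
    return [p for p in profiles
            if min_age <= p["age"] <= max_age
            and p["gender"] == gender
            and p["city"] == city]

print(search(profiles, 25, 35, "F", "Austin"))  # matches profiles A and C
```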
Online dating sites use market metaphors to match people. Market metaphors are conceptual frameworks that allow individuals to make sense of new concepts by drawing upon familiar experiences and frameworks. This metaphor of the marketplace – a place where people go to "shop" for potential romantic partners and to "sell" themselves in hopes of creating a successful romantic relationship – is highlighted by the layout and functionality of online dating websites.
The marketplace metaphor may also resonate with participants' conceptual orientation towards the process of finding a romantic partner. Most sites allow members to upload photos or videos of themselves and browse the photos and videos of others. Sites may offer additional services, such as webcasts, online chat, telephone chat (VOIP), and message boards.
Some sites provide free registration, but may offer services which require a monthly fee. Other sites depend on advertising for their revenue. Some sites such as OkCupid.com, POF.com and Badoo.com are free and offer additional paid services in a freemium revenue model.
Some sites are broad-based, with members coming from a variety of backgrounds looking for different types of relationships. Other sites are more specific, based on the type of members, interests, location, or relationship desired.
A 2005 study of data collected by the Pew Internet & American Life Project found that individuals are more likely to use an online dating service if they use the Internet for a greater number of tasks, and less likely to use such a service if they are trusting of others.
Click on any of the following blue hyperlinks for additional information about online dating services:
- Trends
- Social networking
- Problems
- Comparisons in marriage health: traditional versus online first encounters
- Government regulation
- Online introduction services
- Free dating
- In popular culture
- See also:
Meetup Social Networking*
* -- Click here to visit Meetup.com
YouTube Video: Meetup.com Basics - Video Tutorial
YouTube Video: 10 Keys for Success If You're a Meetup.com Organizer
Pictured: Example Meetup groups
Meetup is an online social networking portal that facilitates offline group meetings in various localities around the world. Meetup allows members to find and join groups unified by a common interest, such as politics, books, games, movies, health, pets, careers or hobbies.
The company is based in New York City and was co-founded in 2002 by Scott Heiferman and Matt Meeker. Meetup was designed as a way for organizers to manage the many functions associated with in-person meetings and for individuals to find groups that fit their interests.
Users enter their city or their postal code and tag the topic they want to meet about. The website/app helps them locate a group to arrange a place and time to meet. Topic listings are also available for users who only enter a location.
The service is free of charge to individuals who log in as members. They have the ability to join different groups as defined by the rules of the individual groups themselves.
Meetup receives revenue by charging fees to group organizers: the basic plan is currently US$9.99 per month and allows a maximum of 4 organizers and 50 members, while unlimited pricing starts at US$14.99 per month (or $90 for six months) and gives the organizer up to three groups.
Organizers can customize the Meetup site by selecting from a variety of templates for the overall appearance of their site. They can also create customized pages within the group's Meetup site.
Site group functions include:
- Schedule meetings and automatically send notices to members
- The ability to assign different leadership responsibilities and access to the group data
- The ability to accept RSVPs for an event
- The ability to monetize groups, accept and track membership and/or meeting payments through WePay
- Create a file repository for group access
- Post photo libraries of events
- Manage communications between group members
- Post group polls
The website and associated app also allow users to contact meetup group members through a messaging platform and comments left on individual event listings. After each event, an email is shared that allows users to click "Good to see you" and establish further connections with group members.
History:
The site's co-founder Scott Heiferman stated publicly in 2011 that the way people in New York City came together in the aftermath of the September 11 attacks in 2001 inspired him to use the Internet to make it easier for people to connect with strangers in their community.
Launching on June 12, 2002, it quickly became an organizing tool for a variety of common interests including fan groups, outdoor enthusiasts, community activists, support groups, and more.
The Howard Dean presidential campaign incorporated Internet-based grassroots organizing after learning that Meetup members were outpacing traditional organizing methods. Meetup changed the political landscape and is still used for political campaigns today.
On February 27 and March 1, 2014, a denial-of-service attack forced Meetup's website offline.
On July 10, 2015, Meetup announced a new pricing plan update. Smaller Meetups pay a little less and larger Meetups pay a little more.
As of August 2015 the company claimed to have 22.77 million members in 180 countries and 210,240 groups, although these figures may include inactive members and groups.
Other meeting exchange networks:
Podcasts, including a List of Podcasting Companies
YouTube Video: Podcasting 101: How to Make a Podcast
Pictured: Illustration featuring the Podcasting Company "How Stuff Works": click here to visit website
Click here for a List of Podcasting Companies.
A podcast is an episodic series of digital media files which a user can set up so that new episodes are automatically downloaded via web syndication to the user's own local computer or portable media player.
The word arose as a portmanteau of "iPod" (a brand of media player) and "broadcast".
The files distributed are typically in audio or video formats, but may sometimes include other file formats such as PDF or ePub.
The distributor of a podcast maintains a central list of the files on a server as a web feed that can be accessed through the Internet. The listener or viewer uses special client application software on a computer or media player, known as a podcatcher, which accesses this web feed, checks it for updates, and downloads any new files in the series.
This process can be automated so that new files are downloaded automatically, which may seem to the user as though new episodes are broadcast or "pushed" to them. Files are stored locally on the user's device, ready for offline use.
Podcasting contrasts with webcasting or streaming which do not allow for offline listening, although most podcasts may also be streamed on demand as an alternative to download. Many podcast players (apps as well as dedicated devices) allow listeners to adjust the playback speed.
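To make the download mechanism concrete, here is a minimal podcatcher sketch: it reads an RSS 2.0 web feed, looks at each item's enclosure, and downloads any episode not already on disk. The feed URL is hypothetical, and a real podcatcher would add scheduling, error handling, and support for feed-format variations.

```python
import os
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast.xml"   # hypothetical feed address
DOWNLOAD_DIR = "episodes"

def fetch_new_episodes(feed_url: str, download_dir: str) -> None:
    """Check the feed for updates and download any new enclosure files."""
    os.makedirs(download_dir, exist_ok=True)
    with urllib.request.urlopen(feed_url) as response:
        root = ET.parse(response).getroot()
    # RSS 2.0 layout: <rss><channel><item><enclosure url="..."/></item>...</channel></rss>
    for item in root.iterfind("./channel/item"):
        enclosure = item.find("enclosure")
        if enclosure is None:
            continue
        url = enclosure.get("url")
        filename = os.path.join(download_dir, os.path.basename(url.split("?")[0]))
        if not os.path.exists(filename):       # only fetch episodes not already stored
            urllib.request.urlretrieve(url, filename)
            print("downloaded", filename)

fetch_new_episodes(FEED_URL, DOWNLOAD_DIR)
```

Run on a schedule, this is the "automatic download via web syndication" behavior described above; files saved to the local directory remain available for offline listening.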
Some have labeled podcasting as a converged medium bringing together audio, the web, and portable media players, as well as a disruptive technology that has caused some people in the radio business to reconsider established practices and preconceptions about audiences, consumption, production, and distribution.
Podcasts are usually free of charge to listeners and can often be created for little to no cost, which sets them apart from the traditional model of "gate-kept" media and production tools. It is very much a horizontal media form: producers are consumers, consumers may become producers, and both can engage in conversations with each other.
Click on any of the following blue hyperlinks for additional information about Podcasts:
C/Net including its Website
YouTube Video of Top 5 Most Anticipated Products Presented by C/Net
Pictured: C/Net Logo
CNET (stylized as c|net) is an American media website that publishes reviews, news, articles, blogs, podcasts and videos on technology and consumer electronics globally.
Founded in 1994 by Halsey Minor and Shelby Bonnie, it was the flagship brand of CNET Networks and became a brand of CBS Interactive through CNET Networks' acquisition in 2008.
CNET originally produced content for radio and television in addition to its website and now uses new media distribution methods through its Internet television network, CNET Video, and its podcast and blog networks.
In addition, CNET currently has region-specific and language-specific editions, including editions for the United Kingdom, Australia, China, and Japan, as well as French-, German-, Korean- and Spanish-language editions.
According to third-party web analytics providers, Alexa and SimilarWeb, CNET is the highest-read technology news source on the Web, with over 200 million readers per month, being among the 200 most visited websites globally, as of 2015.
Click on any of the following blue hyperlinks for further information about C/Net:
- History
- Malware Infection in Downloads
- Dispute with Snap Technologies
- Hopper controversy
- Sections
- See also:
Online Shopping Websites
YouTube Video: Tips for safe and simple online shopping
Online shopping is a form of electronic commerce which allows consumers to directly buy goods or services from a seller over the Internet using a web browser.
Consumers find a product of interest by visiting the website of the retailer directly or by searching among alternative vendors using a shopping search engine, which displays the same product's availability and pricing at different e-retailers. As of 2016, customers can shop online using a range of different computers and devices, including desktop computers, laptops, tablet computers and smartphones.
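As a toy sketch of what a shopping search engine does with the listings it gathers (the retailers and prices here are invented), comparing a product's availability and pricing across e-retailers reduces to aggregating records and picking the best in-stock offer:

```python
# Hypothetical listings collected from several e-retailers for one product.
listings = [
    {"retailer": "ShopA", "product": "USB-C cable", "price": 9.99, "in_stock": True},
    {"retailer": "ShopB", "product": "USB-C cable", "price": 7.49, "in_stock": False},
    {"retailer": "ShopC", "product": "USB-C cable", "price": 8.25, "in_stock": True},
]

# Show every offer, cheapest first, then highlight the best one that can actually ship.
for offer in sorted(listings, key=lambda o: o["price"]):
    status = "in stock" if offer["in_stock"] else "out of stock"
    print(f'{offer["retailer"]}: ${offer["price"]:.2f} ({status})')

best = min((o for o in listings if o["in_stock"]), key=lambda o: o["price"])
print("best available offer:", best["retailer"])
```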
An online shop evokes the physical analogy of buying products or services at a regular "bricks-and-mortar" retailer or shopping center; the process is called business-to-consumer (B2C) online shopping. When an online store is set up to enable businesses to buy from other businesses, the process is called business-to-business (B2B) online shopping. A typical online store enables the customer to browse the firm's range of products and services and to view photos or images of the products, along with information about the product specifications, features and prices.
Online stores typically enable shoppers to use "search" features to find specific models, brands or items. Online customers must have access to the Internet and a valid method of payment in order to complete a transaction, such as a credit card, an Interac-enabled debit card, or a service such as PayPal.
For physical products (e.g., paperback books or clothes), the e-tailer ships the products to the customer; for digital products, such as digital audio files of songs or software, the e-tailer typically sends the file to the customer over the Internet.
The largest of these online retailing corporations are Alibaba, Amazon.com, and eBay.
Terminology:
Alternative names for the activity are "e-tailing", a shortened form of "electronic retail" or "e-shopping", a shortened form of "electronic shopping".
An online store may also be called an e-web-store, e-shop, e-store, Internet shop, web-shop, web-store, online store, online storefront and virtual store.
Mobile commerce (or m-commerce) describes purchasing from an online retailer's mobile device-optimized website or software application ("app"). These websites or apps are designed to enable customers to browse through a company's products and services on tablet computers and smartphones.
Click on any of the following blue hyperlinks for additional information about Online Shopping:
- History
- International statistics
- Customers
- Product selection
- Payment
- Product delivery
- Shopping cart systems
- Design
- Market share
- Advantages
- Disadvantages
- Product suitability
- Aggregation
- Impact of reviews on consumer behaviour
- See also:
- Bricks and clicks business model
- Comparison of free software e-commerce web application frameworks
- Dark store
- Direct imports
- Digital distribution
- Electronic business
- Online auction business model
- Online music store
- Online pharmacy
- Online shopping malls
- Online shopping rewards
- Open catalogue
- Personal shopper
- Retail therapy
- Types of retail outlets
- Tourist trap
HowStuffWorks (Website)*
*-- Go to Website HowStuffWorks
YouTube Video about "How Stuff Works"
HowStuffWorks is an American commercial educational website founded by Marshall Brain to provide its target audience an insight into the way many things work. The site uses various media to explain complex concepts, terminology, and mechanisms—including photographs, diagrams, videos, animations, and articles.
A documentary television series with the same name also premiered in November 2008 on the Discovery Channel.
Click on any of the following blue hyperlinks for more about the website "HowStuffWorks":
Social and Cultural Phenomena Specific to the Internet, including a List
YouTube Video: Global Internet Phenomena Facts
Pictured: Telecommunications fraud is an oft-discussed but little-understood issue facing communications service providers (CSPs). Telecommunications fraud, by its industry definition, is the use of voice, data, or other telecommunication services by a subscriber with no intention of paying for that usage. Industry studies have estimated that fraud costs operators billions of dollars each year, with the cost of that usage absorbed by the CSP and passed down to both residential and commercial subscribers.
An Internet Phenomenon is an activity, concept, catchphrase or piece of media which spreads, often as mimicry, from person to person via the Internet. Some examples include posting a photo of people lying down in public places (called "planking") and uploading a short video of people dancing to the Harlem Shake.
A meme is "an idea, behavior, or style that spreads from person to person within a culture".
An Internet meme may take the form of an image (typically an image macro), hyperlink, video, website, or hashtag. It may be just a word or phrase, including an intentional misspelling.
These small movements tend to spread from person to person via social networks, blogs, direct email, or news sources. They may relate to various existing Internet cultures or subcultures, often created or spread on various websites, or by Usenet boards and other such early-internet communications facilities. Fads and sensations tend to grow rapidly on the Internet, because the instant communication facilitates word-of-mouth transmission.
The word meme was coined by Richard Dawkins in his 1976 book The Selfish Gene, as an attempt to explain the way cultural information spreads; Internet memes are a subset of this general meme concept specific to the culture and environment of the Internet.
The concept of the Internet meme was first proposed by Mike Godwin in the June 1993 issue of Wired. In 2013 Dawkins characterized an Internet meme as being a meme deliberately altered by human creativity—distinguished from biological genes and Dawkins' pre-Internet concept of a meme which involved mutation by random change and spreading through accurate replication as in Darwinian selection.
Dawkins explained that Internet memes are thus a "hijacking of the original idea", the very idea of a meme having mutated and evolved in this new direction. Further, Internet memes carry an additional property that ordinary memes do not—Internet memes leave a footprint in the media through which they propagate (for example, social networks) that renders them traceable and analyzable.
Internet memes are a subset that Susan Blackmore called temes—memes which live in technological artifacts instead of the human mind.
Image macros are often confused with Internet memes and are often miscited as such, usually by their creators. However, there is a key distinction between the two, which lies primarily in the subject's recognizability in Internet pop culture. While such an image may display an existing meme, or a macro may itself eventually become a meme, it does not qualify as one until it reaches roughly the same level of mass recognition required for a person to be considered a celebrity.
Click on any of the below blue hyperlinks for examples of Internet Phenomena:
- Advertising and products
- Animation and comics
- Challenges
- Dance
- Film
- Gaming
- Images
- Music
- Politics
- Videos
- Other phenomena
Click on any of the following blue hyperlinks to learn more about Internet Phenomena:
How the Internet is Governed
YouTube Video: How It Works: Internet of Things
Internet governance is the development and application of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet.
This article describes how the Internet was and is currently governed, some of the controversies that occurred along the way, and the ongoing debates about how the Internet should or should not be governed in the future.
Internet governance should not be confused with E-Governance, which refers to governments' use of technology to carry out their governing duties.
Background:
No one person, company, organization or government runs the Internet. It is a globally distributed network comprising many voluntarily interconnected autonomous networks. It operates without a central governing body; each constituent network sets and enforces its own policies.
The Internet's governance is conducted by a decentralized and international multi-stakeholder network of interconnected autonomous groups drawing from civil society, the private sector, governments, the academic and research communities and national and international organizations. They work cooperatively from their respective roles to create shared policies and standards that maintain the Internet's global interoperability for the public good.
However, to help ensure interoperability, several key technical and policy aspects of the underlying core infrastructure and the principal namespaces are administered by the Internet Corporation for Assigned Names and Numbers (ICANN), which is headquartered in Los Angeles, California. ICANN oversees the assignment of globally unique identifiers on the Internet, including domain names, Internet protocol addresses, application port numbers in the transport protocols, and many other parameters. This seeks to create a globally unified namespace to ensure the global reach of the Internet.
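As a small illustration of the globally unique namespaces mentioned above, the Python sketch below resolves a domain name to an IP address and looks up a registered port number. It assumes a working Internet connection and a standard system services database; example.com is used purely as a sample hostname.

    import socket

    # Domain names and IP addresses are two of the namespaces ICANN coordinates;
    # resolving one to the other relies on the globally unified DNS.
    ip = socket.gethostbyname("example.com")
    print("example.com resolves to", ip)

    # Well-known port numbers are another coordinated namespace: this looks up
    # the port registered for HTTPS over TCP (443 on most systems).
    print("HTTPS uses port", socket.getservbyname("https", "tcp"))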
ICANN is governed by an international board of directors drawn from across the Internet's technical, business, academic, and other non-commercial communities. However, the National Telecommunications and Information Administration, an agency of the U.S. Department of Commerce, continues to have final approval over changes to the DNS root zone. This authority over the root zone file makes ICANN one of a few bodies with global, centralized influence over the otherwise distributed Internet.
In the 30 September 2009 Affirmation of Commitments by the Department of Commerce and ICANN, the Department of Commerce finally affirmed that a "private coordinating process…is best able to flexibly meet the changing needs of the Internet and of Internet users" (para. 4).
While ICANN itself interpreted this as a declaration of its independence, scholars still point out that this is not yet the case. Considering that the U.S. Department of Commerce can unilaterally terminate the Affirmation of Commitments with ICANN, the authority of DNS administration is likewise seen as revocable and derived from a single State, namely the United States.
The technical underpinning and standardization of the Internet's core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
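To make the difference between the two core protocols concrete, the short Python sketch below parses one documentation address of each kind using the standard-library ipaddress module; the specific addresses are arbitrary examples.

    import ipaddress

    # An IPv4 address is 32 bits wide; an IPv6 address is 128 bits wide.
    v4 = ipaddress.ip_address("192.0.2.1")    # IPv4 documentation address
    v6 = ipaddress.ip_address("2001:db8::1")  # IPv6 documentation address

    print("IPv" + str(v4.version), "address space:", v4.max_prefixlen, "bits")  # 32 bits
    print("IPv" + str(v6.version), "address space:", v6.max_prefixlen, "bits")  # 128 bits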
On 16 November 2005, the United Nations-sponsored World Summit on the Information Society (WSIS), held in Tunis, established the Internet Governance Forum (IGF) to open an ongoing, non-binding conversation among multiple stakeholders about the future of Internet governance. Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.
Definition of Internet Governance follows:
The definition of Internet governance has been contested by differing groups across political and ideological lines. One of the main debates concerns the authority and participation of certain actors, such as national governments, corporate entities and civil society, to play a role in the Internet's governance.
A working group established after a UN-initiated World Summit on the Information Society (WSIS) proposed the following definition of Internet governance as part of its June 2005 report: Internet governance is the development and application by Governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet.
Law professor Yochai Benkler developed a conceptualization of Internet governance by the idea of three "layers" of governance:
- Physical infrastructure layer (through which information travels)
- Code or logical layer (controls the infrastructure)
- Content layer (contains the information signaled through the network)
Professors Jovan Kurbalija and Laura DeNardis also offer comprehensive definitions of "Internet Governance". According to Kurbalija, the broad approach to Internet Governance goes "beyond Internet infrastructural aspects and address other legal, economic, developmental, and sociocultural issues"; along similar lines, DeNardis argues that "Internet Governance generally refers to policy and technical coordination issues related to the exchange of information over the Internet".
One of the more policy-relevant questions today is whether regulatory responses are appropriate for policing the content delivered over the Internet: this includes important rules for improving Internet safety and for dealing with threats such as cyber-bullying, copyright infringement, data protection and other illegal or disruptive activities.
Click on any of the following blue hyperlinks for more about Internet Governance:
Electronic governance (or E-governance)
YouTube Video 1: The eGovernment Revolution
YouTube Video 2: Jennifer Pahlka* (TED Talks**): "Coding a Better Government"
* -- Jennifer Pahlka
**-- TED Talks
Electronic governance or e-governance is the application of information and communication technology (ICT) for delivering the following:
- government services,
- exchange of information,
- communication transactions,
- integration of various stand-alone systems and services between government-to-customer (G2C),
- government-to-business (G2B),
- government-to-government (G2G)
- as well as back office processes and interactions within the entire government framework.
Through e-governance, government services will be made available to citizens in a convenient, efficient and transparent manner. The three main target groups that can be distinguished in governance concepts are government, citizens and businesses/interest groups. In e-governance there are no distinct boundaries.
Generally four basic models are available:
- government-to-citizen (customer),
- government-to-employees,
- government-to-government
- and government-to-business.
Distinction from E-Government:
Both terms are often treated as the same; however, there is a difference between the two. "E-government" is the use of ICTs in public administration – combined with organizational change and new skills – to improve public services and democratic processes and to strengthen support to the public.
The problem with treating this as a definition of e-governance is that it makes no provision for the governance of ICTs themselves. In fact, the governance of ICTs most probably requires a substantial increase in regulation and policy-making capability, along with the expertise and opinion-shaping processes of the various social stakeholders concerned.
The perspective of e-governance is therefore "the use of the technologies that both help governing and have to be governed". Public–private partnership (PPP)-based e-governance projects have been hugely successful in India.
Many countries look to e-governance as a route toward corruption-free government. E-government is a one-way communication protocol, whereas e-governance is a two-way communication protocol.
The essence of e-governance is to reach the beneficiary and to ensure that services intended for a given individual actually reach that individual. There should be an auto-response capability to support this, whereby the government can gauge the efficacy of its governance: e-governance is by the governed, for the governed and of the governed.
Establishing the identity of the end beneficiary is a challenge in all citizen-centric services. Statistical information published by governments and world bodies does not always reveal the facts.
The best form of e-governance cuts down on unwanted interference of too many layers while delivering governmental services. It depends on good infrastructural setup with the support of local processes and parameters for governments to reach their citizens or end beneficiaries. Budget for planning, development and growth can be derived from well laid out e-governance systems.
Government to Citizen:
The goal of government-to-customer (G2C) e-governance is to offer a variety of ICT services to citizens in an efficient and economical manner, and to strengthen the relationship between government and citizens using technology.
There are several methods of government-to-customer e-governance. Two-way communication allows citizens to instant message directly with public administrators, to cast remote electronic votes (electronic voting) and to take part in instant opinion polling. Transactions such as paying for services, for example city utilities, can be completed online or over the phone.
Mundane services such as name or address changes, applying for services or grants, or transferring existing services are more convenient and no longer have to be completed face to face.
The Federal Government of the United States has a broad framework of G2C technology to enhance citizen access to government information and services. Benefits.gov is an official US government website that informs citizens of the benefits they are eligible for and explains how to apply for assistance.
US State Governments also engage in G2C interaction through the following:
- Department of Transportation,
- Department of Public Safety,
- United States Department of Health and Human Services,
- United States Department of Education,
- and others.
As with e-Governance on the global level, G2C services vary from state to state.
The Digital States Survey ranks states on social measures, digital democracy, e-commerce, taxation, and revenue. The 2012 report shows Michigan and Utah in the lead and Florida and Idaho with the lowest scores.
Municipal governments in the United States also use government-to-customer technology to complete transactions and inform the public.
Much like states, cities are awarded for innovative technology. Government Technology's "Best of the Web 2012" named Louisville, KY, Arvada, CO, Raleigh, NC, Riverside, CA, and Austin, TX the top five G2C city portals.
Click on any of the following blue hyperlinks for further information about E-Governance:
- Concerns
- Government to employees
- Government to government
- Government to business
- Challenges – international position
- See also:
Internet Pioneers as recognized by The Internet Hall of Fame (governed by the Internet Society), along with a List of Internet Pioneers; and The Webby Awards
YouTube Video: Tim Berners-Lee (TED 2009*) and "The Next Web"
* -- TED 2009.
Pictured: LEFT: Internet Pioneers Vint Cerf and Robert Kahn (both considered as “Fathers of the Internet”) being awarded the Presidential Medal Of Freedom by President George W. Bush; RIGHT: Tim Berners-Lee is recognized as the inventor of the World Wide Web
The Internet Society (ISOC) is an American, non-profit organization founded in 1992 to provide leadership in Internet-related standards, education, access, and policy. It states that its mission is "to promote the open development, evolution and use of the Internet for the benefit of all people throughout the world".
The Internet Society has its headquarters in Reston, Virginia, United States (near Washington, D.C.), and offices in Geneva, Switzerland. It has a membership base of more than 140 organizations and more than 80,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are over 110 chapters around the world.
The Internet Hall of Fame is an honorary lifetime achievement award administered by the Internet Society (ISOC) in recognition of individuals who have made significant contributions to the development and advancement of the Internet.
Click here for a List of Internet Pioneers.
___________________________________________________________________________
A Webby Award is an award for excellence on the Internet presented annually by The International Academy of Digital Arts and Sciences, a judging body composed of over one thousand industry experts and technology innovators.
Categories include websites; advertising and media; online film and video; mobile sites and apps; and social.
Two winners are selected in each category, one by members of The International Academy of Digital Arts and Sciences, and one by the public who cast their votes during Webby People’s Voice voting. Each winner presents a five-word acceptance speech, a trademark of the annual awards show.
Hailed as the "Internet’s highest honor," the award is one of the older Internet-oriented awards, and is associated with the phrase "The Oscars of the Internet."
Click here for a List of Webby Award Winners.
Internet censorship in the United States
YouTube Video: How Internet Censorship Works*
* -- Berkman Klein Center
Picture: America's internet is incredibly free compared to most countries
Internet censorship in the United States is the suppression of information published or viewed on the Internet in the United States. The U.S. possesses protection of freedom of speech and expression against federal, state, and local government censorship; a right protected by the First Amendment of the United States Constitution.
These protections extend to the Internet; however, the U.S. government has censored sites in the past, and such actions continue to this day. In 2014, the United States was added to Reporters Without Borders (RWB)'s list of "Enemies of the Internet", a group of countries with the highest level of Internet censorship and surveillance.
RWB stated that the U.S. "… has undermined confidence in the Internet and its own standards of security" and that "U.S. surveillance practices and decryption activities are a direct threat to investigative journalists, especially those who work with sensitive sources for whom confidentiality is paramount and who are already under pressure."
In Freedom House's 2016 "Freedom on the Net" report, the United States was rated as having the 4th freest Internet in its "65 Country Score Comparison".
Click on any of the following blue hyperlinks for more about Internet Censorship in the United States:
- Overview
- Federal laws
- Proposed federal legislation that has not become law
- Censorship by institutions
- See also:
- Internet censorship and surveillance by country
- Communications Assistance for Law Enforcement Act (CALEA)
- Mass surveillance in the United States
- Global Integrity: Internet Censorship, A Comparative Study; puts US online censorship in cross-country context.
Net Neutrality: Google and Facebook Join Net Neutrality Day to Protest FCC’s Proposed Rollback (by NBC News July 12, 2017)
Click on NBC Video: "FCC Chairman Announces Push to Target Net Neutrality Rules"
Picture Courtesy of Geneva Internet Platform
Net neutrality is the principle that Internet service providers and governments regulating the Internet should treat all data on the Internet the same, not discriminating or charging differentially by user, content, website, platform, application, type of attached equipment, or mode of communication.
The term was coined by Columbia University media law professor Tim Wu in 2003, as an extension of the longstanding concept of a common carrier, which was used to describe the role of telephone systems.
A widely cited example of a violation of net neutrality principles occurred when the Internet service provider Comcast secretly slowed ("throttled") uploads from peer-to-peer file sharing (P2P) applications by using forged packets. Comcast did not stop interfering with protocols such as BitTorrent until the FCC ordered it to do so.
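One way such differential treatment of traffic can be observed is by comparing download throughput from different services. The Python sketch below is only a simplified illustration: the URLs are hypothetical placeholders, and a meaningful measurement would require many repeated samples under controlled conditions.

    import time
    import urllib.request

    def throughput(url, limit=1_000_000):
        # Download up to `limit` bytes and return the approximate rate in bytes per second.
        start = time.time()
        with urllib.request.urlopen(url) as response:
            data = response.read(limit)
        return len(data) / max(time.time() - start, 1e-6)

    # Hypothetical endpoints: e.g., a provider-affiliated service vs. an unaffiliated one.
    rate_a = throughput("https://files.example.net/sample.bin")
    rate_b = throughput("https://video.example.org/sample.bin")
    print(f"A: {rate_a:,.0f} B/s   B: {rate_b:,.0f} B/s")
    if rate_b < rate_a / 2:
        print("Endpoint B is much slower; suggestive of throttling, but not proof on its own.")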
In 2004, the Madison River Communications company was fined $15,000 by the FCC for restricting its customers' access to Vonage, which rivaled its own services. AT&T was also caught limiting access to FaceTime so that only users who paid for its new shared data plans could use the application.
In April 2017, the newly appointed FCC chairman, Ajit Varadaraj Pai, began considering a rollback of net neutrality protections in the United States. On May 16, 2017, a process began to roll back the Open Internet rules in place since 2015. This rule-making process includes a public comment period lasting sixty days: thirty days for public comment and thirty days for the FCC to respond.
Research suggests that a combination of policy instruments will help realize the range of valued political and economic objectives central to the network neutrality debate. Combined with strong public opinion, this has led some governments to regulate broadband Internet services as a public utility, similar to the way electricity, gas and water supply is regulated, along with limiting providers and regulating the options those providers can offer.
NBC News July 12, 2017 Article: by Ben Popken
Tech companies are banding together to make a final push to stop a fait accompli.
Google, Facebook, Netflix, Twitter, and 80,000 other top websites and organizations have joined together for a "Day of Action" to protest a retreat from the concept of "net neutrality." They are angry that the Trump administration wants to roll back regulations requiring internet service providers to treat all data and customers equally online, and the companies are encouraging consumers to give their feedback on the Federal Communications Commission's website.
Banners, pop-ups, push notifications and videos will be seen across participating websites, urging visitors to tell the FCC what they think. Don't be surprised if you see your friends changing their social media avatars or profile pictures, too.
Google declined to share specifics about its involvement, but users searching "what is net neutrality" or "net neutrality day of action" will get a call-out box with information at the top of their results.
A discreet text banner on the top of Netflix.com directed users to a call-to-action page created by the Internet Association, an industry trade group.
Smaller websites published stories featured on their front page, or added an extra pop-up when users tried to submit standard comments.
"In true internet fashion, every site is participating in its own way," Evan Greer, campaign director of Fight for the Future, told NBC News. "Most are using our widgets that allow visitors to easily submit comments to the FCC and Congress without ever leaving the page that they're on. Many are getting creative and writing their own code or displaying their own banners in support of net neutrality that point to action tools."
What's It All About? The critical issue is whether the internet should be an all-you-can-eat buffet of information, videos, and LOLCAT memes, or an à la carte menu. Should internet providers be allowed to strike deals to deliver some kinds of content, such as their own or those partners have paid them for, at a faster speed? Should the internet be more like cable, where you subscribe to a package of sports, entertainment, and news websites?
"No," said millions of consumers the last time the FCC took public comment. Over 4 million consumers lodged their comments.
Internet service providers and cable companies argue that Obama-era regulations enacted in 2015 intended to protect net neutrality are the wrong approach, and that a "light touch" is preferred.
"The internet has succeeded up until this point because it has been free to grow, innovate, and change largely free from government oversight," wrote the NCTA, the Internet & Television Association, the primary broadband and cable industry trade group, in its statement on net neutrality.
It proposes scrapping the regulations, which treat broadband like a "common carrier" required to transport "passengers" of data at the same rate, and replacing them with a different set of rules. The NCTA said its proposal "empowers the internet industry to continue to innovate without putting handcuffs on its most pioneering companies."
A poll from Morning Consult and the NCTA found that 61 percent of consumers either strongly or somewhat support net neutrality rules.
The view of President Donald J. Trump's appointed FCC chair Ajit Pai is in line with that of the ISPs. He vowed earlier this year to roll back the new rules in order to protect consumers.
“It’s basic economics. The more heavily you regulate something, the less of it you’re likely to get,” he said in a speech at the Newseum in Washington in April.
FCC spokesman Mark Wigfield declined an NBC News request for comment on today's planned action.
As part of the day of protest, Silicon Valley companies and website operators are raising concerns over the rule change, fearful it will favor Goliaths over Davids.
With the FCC chair in favor of revising the rules, committee votes on his side, a thumbs-up from President Donald Trump, and a Republican-controlled Congress, today's actions aren't likely to sway the commission.
Instead the number of consumer comments flooding the FCC today will become a data point used by net neutrality proponents if the rule changes end up in court, as activists have vowed.
"Consumers hate buffering and slow-loading and will abandon videos and services if they're not getting a good viewing experience," said Michael Chea, general counsel for Vimeo, a video-sharing site favored by independent artists.
"[Companies] can throttle some websites over others, favor their own content," said Chea. "This is another thing that will reduce choice, increase costs, and reduce innovation."
Click on any of the following blue hyperlinks for more about Net Neutrality:
- Definition and related principles
- By issue
- Legal aspects
- By country: United States
- Arguments in favor
- Arguments against
- Related issues
- Concentration of media ownership
- Digital rights
- Economic rent
- Industrial information economy
- Killswitch (film)
- Municipal broadband
- Search neutrality
- Switzerland (software)
- Wikipedia Zero
- Day of Action to Save Net Neutrality
- Technological Neutrality and Conceptual Singularity
- Why Consumers Should Be Worried About Net Neutrality
- The FCC on Net Neutrality: Be Careful What You Wish For
- Financial backers of pro neutrality groups
- Killswitch - film advocating in favor of Net Neutrality
- Battle for the Net - website advocating net neutrality by Fight for the Future
- Don't Break The Net - website advocating against net neutrality by TechFreedom with monetary support from telcos (see answer to corresponding question on website's "About TechFreedom" section)
- La Quadrature du Net – complex dossier and links about net neutrality
- Net Neutrality – What it is and why you should care. – comic explaining net neutrality.
- Check Your Internet
Virtual Private Network including a List of United States mobile virtual network operators
YouTube Video: How a VPN Works and What It Does for You
Pictured: VPN connectivity overview
Click here for a List of United States mobile virtual network operators.
A virtual private network (VPN) extends a private network across a public network, and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. ("In the simplest terms, it creates a secure, encrypted connection, which can be thought of as a tunnel, between your computer and a server operated by the VPN service.")
Applications running across the VPN may therefore benefit from the functionality, security, and management of the private network.
VPNs may allow employees to securely access a corporate intranet while located outside the office. They are used to securely connect geographically separated offices of an organization, creating one cohesive network. Individual Internet users may secure their wireless transactions with a VPN, to circumvent geo-restrictions and censorship, or to connect to proxy servers for the purpose of protecting personal identity and location.
However, some Internet sites block access to known VPN technology to prevent the circumvention of their geo-restrictions.
A VPN is created by establishing a virtual point-to-point connection through the use of dedicated connections, virtual tunneling protocols, or traffic encryption. A VPN available from the public Internet can provide some of the benefits of a wide area network (WAN).
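The "traffic encryption" part of that description can be sketched very simply. The Python fragment below uses the third-party cryptography package's Fernet recipe to show how a payload is sealed before it crosses the public network and recovered at the far end; it is a conceptual sketch only, not a working VPN (there is no tunneling, routing or key exchange).

    from cryptography.fernet import Fernet  # third-party package: pip install cryptography

    # In a real VPN the key would be negotiated between the endpoints
    # (for example via IKE or TLS); here it is simply generated for the sketch.
    key = Fernet.generate_key()
    channel = Fernet(key)

    packet = b"GET /intranet/report HTTP/1.1\r\nHost: internal.example\r\n\r\n"
    sealed = channel.encrypt(packet)    # what an observer on the public network would see
    restored = channel.decrypt(sealed)  # what the receiving VPN endpoint recovers

    assert restored == packet
    print(len(packet), "plaintext bytes ->", len(sealed), "encrypted bytes")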
From a user perspective, the resources available within the private network can be accessed remotely.
Traditional VPNs are characterized by a point-to-point topology, and they do not tend to support or connect broadcast domains, so services such as Microsoft Windows NetBIOS may not be fully supported or work as they would on a local area network (LAN). Designers have developed VPN variants, such as Virtual Private LAN Service (VPLS), and layer-2 tunneling protocols, to overcome this limitation.
Early data networks allowed VPN-style remote connectivity through dial-up modems or through leased-line connections utilizing Frame Relay and Asynchronous Transfer Mode (ATM) virtual circuits, provisioned through a network owned and operated by telecommunication carriers.
These networks are not considered true VPNs because they passively secure the data being transmitted by the creation of logical data streams. They have been replaced by VPNs based on IP and IP/Multiprotocol Label Switching (MPLS) networks, due to significant cost reductions and increased bandwidth provided by new technologies such as Digital Subscriber Line (DSL) and fiber-optic networks.
VPNs can be either remote-access (connecting a computer to a network) or site-to-site (connecting two networks). In a corporate setting, remote-access VPNs allow employees to access their company's intranet from home or while travelling outside the office, and site-to-site VPNs allow employees in geographically disparate offices to share one cohesive virtual network.
A VPN can also be used to interconnect two similar networks over a dissimilar middle network; for example, two IPv6 networks over an IPv4 network.
VPN systems may be classified by:
- The protocols used to tunnel the traffic
- The tunnel's termination point location, e.g., on the customer edge or network-provider edge
- The type of topology of connections, such as site-to-site or network-to-network
- The levels of security provided
- The OSI layer they present to the connecting network, such as Layer 2 circuits or Layer 3 network connectivity
- The number of simultaneous connections
Click here for a List of the 20 Biggest Hacking Attacks of All Time (Courtesy of VPN Mentor)
Click on any of the following blue hyperlinks for more about Virtual Private Networks:
- Security mechanisms
- Routing
- User-visible PPVPN services
- Trusted delivery networks
- VPNs in mobile environments
- VPN on routers
- Networking limitations
- See also:
WatchMojo.com
YouTube Video: Top 10 WatchMojo Top 10s of 2013
Pictured: WatchMojo.com Logo courtesy of Wikipedia.
WatchMojo.com is a Canadian-based privately held video content producer, publisher, and syndicator. With over 7.9 billion all-time video views and 14 million subscribers (making it the 37th most-subscribed channel), WatchMojo has one of the largest channels on YouTube. 60% of its viewers and subscribers are male, whereas 40% are female, and 50% hail from English-speaking countries.
WatchMojo.com was founded in June 2005 by Ashkan Karbasfrooshan, Raphael Daigneault, and Christine Voulieris. Other early key employees include Kevin Havill and Derek Allen.
The WatchMojo.com website was launched on 14 June 2005 and its YouTube channel was launched on 25 January 2007. WatchMojo is an independent channel; it is neither a multi-channel network (MCN) nor part of one.
According to CEO Karbasfrooshan, WatchMojo employed 23 full-time staff and a team of 100-plus freelance writers and video editors by October 2014. By March 2017, the full-time head count had grown to 50-plus.
The videos it produces are typically based on suggestions supplied by visitors to the site through its suggestion tool or its YouTube, Facebook, and Twitter pages. It hit 1 million subscribers on 30 October 2013 and then 5 million subscribers on 29 August 2014.
In December 2014, on the day its YouTube channel surpassed 6 million subscribers, it announced a representation deal with talent agency William Morris Endeavor. It surpassed 10 million subscribers on 5 December 2015.
During the 2016–17 regular season of the NHL, WatchMojo sponsored the NY Islanders.
In October 2016, Karbasfrooshan published The 10-Year Overnight Success: An Entrepreneur's Manifesto - How WatchMojo Built the Most Successful Media Brand on YouTube on the company's new publishing imprint, as it ventured into digital books and guides.
Content:
WatchMojo.com does not feature user-generated content nor does it allow a mechanism for users to upload videos onto its site. The website produces daily "Top Ten" videos as well as videos summarizing the history of specific niche topics.
These topics can be one of the following categories:
- automotive,
- business,
- comedy,
- education,
- fashion,
- film,
- anime,
- Hentai,
- health and fitness,
- lifestyle,
- music,
- parenting,
- politics and economy,
- space and science,
- sports,
- technology,
- travel,
- and video games.
Each day it publishes more than five videos, amounting to 60–75 minutes of original content. In February 2016, it launched the MsMojo channel to better serve female viewers and fans. It also launched multiple non-English channels for the Spanish, French, German, Turkish and Polish markets.
On April 15, 2017, WatchMojo debuted The Lineup, a game show that combined ranking top 10 lists with elements of fantasy draft and sports talk radio banter. It won a Telly Award for Best Series in the Web Series category.
On May 31, 2017, WatchMojo live-streamed its first ever live show, called WatchMojo Live At YouTube Space at Chelsea Market. The show consisted of an afternoon industry track covering online media, advertising, and VR. It was then followed by an evening show featuring DJ Killa Jewel, DJ Dan Deacon, Puddles Pity Party and Caveman.
On July 12, 2017, it followed up with WatchMojo Live at YouTube Space in London at King's Cross Station, featuring musical acts by Llew Eyre, Bluey Robinson and Leif Erikson. Speakers at the industry track included Hussain Manawer, Ben Jones and Kim Snow.
Business Model:
WatchMojo.com lost money the first six years of operations, broke even in 2012, and has generated a profit since 2013.
Due to the 2007–2009 recession, WatchMojo.com de-emphasized its ad-supported model in favor of licensing fees paid by other media companies to access and use its content. Beet.TV later featured WatchMojo.com alongside Magnify.net as examples of companies that successfully switched from ad-based to licensing-fee-based revenue models.
In 2012, it shifted its focus to YouTube and, as a result of its growth in subscribers and views, became profitable.
Rotten Tomatoes
Rotten Tomatoes Video of the Movie Trailer for "War for the Planet of the Apes"
Pictured: part of the home page for the RottenTomatoes Website as of 7-16-17
Rotten Tomatoes is an American review aggregator website for film and television. The company was launched in August 1998 by Senh Duong and since January 2010 has been owned by Flixster, which was, in turn, acquired in 2011 by Warner Bros.
In February 2016, Rotten Tomatoes and its parent site Flixster were sold to Comcast's Fandango. Warner Bros. retained a minority stake in the merged entities, including Fandango. Since 2007, the website's editor-in-chief has been Matt Atchity. The name Rotten Tomatoes derives from the practice of audiences throwing rotten tomatoes to show disapproval of a poor stage performance.
From early 2008 to September 2010, Current Television aired the weekly The Rotten Tomatoes Show, featuring hosts and material from the website. A shorter segment was incorporated into the weekly show, InfoMania, which ended in 2011. In September 2013, the website introduced "TV Zone", a section for reviewing scripted TV shows.
Rotten Tomatoes was launched on August 12, 1998, as a spare-time project by Senh Duong. His goal in creating Rotten Tomatoes was "to create a site where people can get access to reviews from a variety of critics in the U.S." As a fan of Jackie Chan, Duong was inspired to create the website after collecting all the reviews of Chan's movies as they were published in the United States.
The first movie whose reviews were featured on Rotten Tomatoes was Your Friends & Neighbors (1998). The website was an immediate success, receiving mentions by Netscape, Yahoo!, and USA Today within the first week of its launch; it attracted "600–1000 daily unique visitors" as a result.
Duong teamed up with University of California, Berkeley classmates Patrick Y. Lee and Stephen Wang, his former partners at the Berkeley, California–based web design firm Design Reactor, to pursue Rotten Tomatoes on a full-time basis. They officially launched it on April 1, 2000.
In June 2004, IGN Entertainment acquired rottentomatoes.com for an undisclosed sum. In September 2005, IGN was bought by News Corp's Fox Interactive Media.
In January 2010, IGN sold the website to Flixster. The combined reach of both companies is 30 million unique visitors a month across all different platforms, according to the companies.
In May 2011, Flixster was acquired by Warner Bros.
In early 2009, Current Television launched the televised version of the web review site, The Rotten Tomatoes Show. It was hosted by Brett Erlich and Ellen Fox and written by Mark Ganek. The show aired every Thursday at 10:30 EST on the Current TV network. The last episode aired on September 16, 2010. It returned as a much shorter segment of InfoMania, a satirical news show that ended in 2011.
By late 2009, the website was designed to enable Rotten Tomatoes users to create and join groups to discuss various aspects of film. One group, "The Golden Oyster Awards", accepted votes of members for various awards, spoofing the better-known Oscars or Golden Globes. When Flixster bought the company, they disbanded the groups, announcing: "The Groups area has been discontinued to pave the way for new community features coming soon. In the meantime, please use the Forums to continue your conversations about your favorite movie topics."
As of February 2011, new community features have been added and others removed. For example, users can no longer sort films to separate Fresh ratings from Rotten ratings. On September 17, 2013, a section devoted to scripted television series, called "TV Zone", was created as a subsection of the website.
In February 2016, Rotten Tomatoes and its parent site Flixster were sold to Comcast's Fandango. Warner Bros retained a minority stake in the merged entities, including Fandango.
Click on any of the following blue hyperlinks for more about the Rotten Tomatoes Website:
- Website
- Hollywood reaction
- Criticism
- See also:
Metacritic
YouTube Video: Bombshell for PC Review (click on movie trailer)
Pictured: Part of the home page for the Metacritic Website as of 7-16-17
Metacritic is a website that aggregates reviews of media products: music albums, video games, films, TV shows, and formerly, books. For each product, the scores from individual reviews are combined into a weighted average.
Metacritic was created by Jason Dietz, Marc Doyle, and Julie Doyle Roberts in 1999. The site provides an excerpt from each review and hyperlinks to its source. A color code of green, yellow, or red summarizes the critics' recommendations. It has been described as the video game industry's "premier" review aggregator.
Metacritic's scoring converts each review into a percentage, either mathematically from the mark given or, for purely qualitative reviews, by a score the site assigns subjectively. Before being averaged, the scores are weighted according to the critic's fame, stature, and volume of reviews.
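To make the weighting concrete, here is a minimal Python sketch of a weighted average of this kind; the weights are hypothetical, since Metacritic does not publish how much influence each critic carries.

```python
# A minimal sketch of a weighted Metascore-style average. The weights here are
# hypothetical -- Metacritic does not publish how much influence each critic gets.
def weighted_metascore(reviews):
    """reviews: list of (score_0_to_100, weight) pairs."""
    total_weight = sum(w for _, w in reviews)
    return round(sum(score * w for score, w in reviews) / total_weight)

reviews = [
    (90, 1.5),  # prominent outlet, higher (assumed) weight
    (70, 1.0),
    (60, 0.5),  # smaller outlet, lower (assumed) weight
]
print(weighted_metascore(reviews))  # 78
```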
Metacritic was launched in July 1999 by Marc Doyle, his sister Julie Doyle Roberts, and a classmate from the University of Southern California law school, Jason Dietz. Rotten Tomatoes was already compiling movie reviews, but Doyle, Roberts, and Dietz saw an opportunity to cover a broader range of media. They sold Metacritic to CNET in 2005.
CNET and Metacritic are now owned by the CBS Corporation.
Nick Wingfield of The Wall Street Journal wrote in September 2004: "Mr. Doyle, 36, is now a senior product manager at CNET but he also acts as games editor of Metacritic".
Speaking of video games, Doyle said: "A site like ours helps people cut through...unobjective promotional language". "By giving consumers, and web users specifically, early information on the objective quality of a game, not only are they more educated about their choices, but it forces publishers to demand more from their developers, license owners to demand more from their licensees, and eventually, hopefully, the games get better".
He added that the review process was not taken as seriously when unconnected magazines and websites provided reviews in isolation.
In August 2010, the website's appearance was revamped; reaction from users was overwhelmingly negative.
Click on any of the following blue hyperlinks for more about Metacritic:
Cybernetics
YouTube Video: What is Cybernetics?
Pictured: ASIMO uses sensors and sophisticated algorithms to avoid obstacles and navigate stairs.
Cybernetics is a transdisciplinary approach for exploring regulatory systems, their structures, constraints, and possibilities. In the 21st century, the term is often used in a rather loose way to imply "control of any system using technology;" this has blunted its meaning to such an extent that many writers avoid using it.
Cybernetics is relevant to the study of systems, such as mechanical, physical, biological, cognitive, and social systems. Cybernetics is applicable when a system being analyzed incorporates a closed signaling loop; that is, where action by the system generates some change in its environment and that change is reflected in that system in some manner (feedback) that triggers a system change, originally referred to as a "circular causal" relationship.
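A toy example of such a closed signaling loop is a thermostat: it measures the room (feedback), compares the measurement with a goal, and acts on the environment, which in turn changes the next measurement. The short Python sketch below simulates that loop; all numbers are invented for illustration.

```python
# A toy illustration of the "circular causal" feedback loop described above:
# a thermostat measures room temperature (feedback), compares it with a goal,
# and its action changes the environment, which changes the next measurement.
# All numbers are invented for illustration.
goal = 21.0          # desired temperature (deg C)
temperature = 15.0   # current temperature
gain = 0.3           # how aggressively the controller responds

for step in range(10):
    error = goal - temperature     # feedback: difference between goal and observation
    heating = gain * error         # control action based on the feedback
    temperature += heating - 0.05  # environment responds (with a small heat loss)
    print(f"step {step}: temperature = {temperature:.2f}")
```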
System dynamics, a related field, originated with applications of electrical engineering control theory to other kinds of simulation models (especially business systems) by Jay Forrester at MIT in the 1950s.
Concepts studied by cyberneticists include, but are not limited to:
- learning,
- cognition,
- adaptation,
- social control,
- emergence,
- communication,
- efficiency,
- efficacy,
- and connectivity.
These concepts are studied by other subjects such as engineering and biology, but in cybernetics these are abstracted from the context of the individual organism or device.
Norbert Wiener defined cybernetics in 1948 as "the scientific study of control and communication in the animal and the machine." The word cybernetics comes from the Greek κυβερνητική (kybernetike), meaning "governance", i.e., all that is pertinent to κυβερνάω (kybernao), "to steer, navigate or govern"; hence κυβέρνησις (kybernesis) means "government" and κυβερνήτης (kybernetes) means "governor" or "captain".
Contemporary cybernetics began as an interdisciplinary study connecting the fields of control systems, electrical network theory, mechanical engineering, logic modeling, evolutionary biology, neuroscience, anthropology, and psychology in the 1940s, often attributed to the Macy Conferences.
During the second half of the 20th century cybernetics evolved in ways that distinguish first-order cybernetics (about observed systems) from second-order cybernetics (about observing systems).
More recently, there has been talk of a third-order cybernetics, which embraces both first- and second-order perspectives.
Fields of study which have influenced or been influenced by cybernetics include:
- game theory,
- system theory (a mathematical counterpart to cybernetics),
- perceptual control theory,
- sociology,
- psychology (especially neuropsychology, behavioral psychology, cognitive psychology),
- philosophy,
- architecture,
- and organizational theory.
Click on any of the following blue hyperlinks for further amplification:
- Definitions
- Etymology
- History:
- Subdivisions of the field:
- Related fields:
- See also:
- Artificial life
- Automation
- Autonomous Agency Theory
- Brain–computer interface
- Chaos theory
- Connectionism
- Decision theory
- Gaia hypothesis
- Industrial ecology
- Intelligence amplification
- Management science
- Principia Cybernetica
- Semiotics
- Superorganisms
- Synergetics (Haken)
- Variety (cybernetics)
- Viable System Theory
- Viable systems approach
Video on Demand including websites that offer VoD
YouTube Video: What is VIDEO ON DEMAND? What does VIDEO ON DEMAND mean?
Video on demand (VOD) systems allow users to select and watch or listen to video or audio content, such as movies and TV shows, whenever they choose, rather than at a specific broadcast time, which was the prevalent approach with over-the-air broadcasting during much of the 20th century. IPTV technology is often used to bring video on demand to televisions and personal computers.
Television VOD systems can either "stream" content through a set-top box, a computer or other device, allowing viewing in real time, or download it to a device such as a computer, digital video recorder (also called a personal video recorder) or portable media player for viewing at any time.
The majority of cable- and telephone-company-based television providers offer both VOD streaming, including pay-per-view and free content, whereby a user buys or selects a movie or television program and it begins to play on the television set almost instantaneously, and VOD downloading to a digital video recorder (DVR) rented or purchased from the provider, or to a PC or portable device, for viewing in the future.
Internet television, using the Internet, is an increasingly popular form of video on demand. VOD can also be accessed via desktop client applications such as the Apple iTunes online content store.
Some airlines offer VOD as in-flight entertainment to passengers through individually controlled video screens embedded in seatbacks or armrests, or offered via portable media players. Some video on demand services, such as Netflix, use a subscription model that requires users to pay a monthly fee to access a bundled set of content, which is mainly movies and TV shows. Other services use an advertising-based model, where access is free for Internet users, and the platforms rely on selling advertisements as their main revenue stream.
Functionality:
Downloading and streaming video on demand systems provide the user with all of the features of Portable media players and DVD players. Some VOD systems that store and stream programs from hard disk drives use a memory buffer to allow the user to fast forward and rewind digital videos. It is possible to put video servers on local area networks, in which case they can provide very rapid response to users.
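As a rough illustration of the memory-buffer idea, the Python sketch below keeps only the most recent stream segments in a bounded buffer, so a viewer could pause or skip back a short distance without re-fetching data from the server; the class and segment format are invented for illustration.

```python
# A minimal sketch of the memory-buffer idea: keep the most recent stream
# segments in a bounded buffer so the viewer can pause or jump back a short
# distance without re-requesting data from the server. Purely illustrative.
from collections import deque

class StreamBuffer:
    def __init__(self, capacity_segments: int):
        self.segments = deque(maxlen=capacity_segments)  # old segments fall off automatically

    def push(self, segment: bytes) -> None:
        self.segments.append(segment)

    def rewind(self, n: int) -> list:
        """Return the last n buffered segments, oldest first."""
        return list(self.segments)[-n:]

buf = StreamBuffer(capacity_segments=5)
for i in range(8):                       # simulate 8 incoming segments
    buf.push(f"segment-{i}".encode())
print(buf.rewind(3))                     # the three most recent segments still in memory
```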
Streaming video servers can also serve a wider community via a WAN, in which case the responsiveness may be reduced. Download VOD services are practical for homes equipped with cable modems or DSL connections. Servers for traditional cable and telco VOD services are usually placed at the cable head-end serving a particular market, as well as at cable hubs in larger markets. In the telco world, they are placed in either the central office or a newly created location called a Video Head-End Office (VHO).
Types of Video on Demand:
Transactional video on demand (TVOD) is a distribution method by which customers pay for each individual piece of video on demand content. For example, a customer would pay a fee for each individual movie or TV show that they watch. TVOD has two sub-categories: electronic sell-through (EST), by which customers can permanently access a piece of content once purchased via Internet; and download to rent (DTR), by which customers can access the content for a limited time upon renting. Examples of TVOD services include Apple's iTunes online store and Google's Google Play service.
Catch Up TV:
A growing number of TV stations offer "catch-up TV" as a way for viewers to watch TV shows through their VOD service hours or even days after the original television broadcast.
This enables viewers to watch a program when they have free time, even if this is not when the program was originally aired. Some studies show that catch-up TV is starting to represent a large share of views and hours watched, and that users tend to watch catch-up TV programs for longer when compared to live TV (e.g., regularly scheduled broadcast TV).
Subscription VOD (SVOD) services use a subscription business model, where subscribers are charged a monthly fee to access unlimited programs. These services include the following:
Near video on demand (NVOD) is a pay-per-view consumer video technique used by multi-channel broadcasters using high-bandwidth distribution mechanisms such as satellite and cable television. Multiple copies of a program are broadcast at short time intervals (typically 10–20 minutes), providing convenience for viewers, who can watch the program without needing to tune in at a single scheduled point in time.
A viewer may only have to wait a few minutes before the next showing of a movie begins. This form is very bandwidth-intensive and is generally provided only by large operators with a great deal of redundant capacity; it has declined in popularity as true video on demand has been implemented.
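The arithmetic behind this staggered-start scheme is straightforward, as the hypothetical example below shows: the stagger interval determines both how many channels one film occupies and the longest a viewer has to wait.

```python
# Back-of-envelope arithmetic for near video on demand (NVOD): with staggered
# start times, the number of channels one film occupies and the worst-case wait
# follow directly from the film length and the stagger interval. Example numbers
# are hypothetical.
import math

film_minutes = 120
interval_minutes = 15                                          # a new start every 15 minutes

channels_needed = math.ceil(film_minutes / interval_minutes)   # 8 simultaneous copies
max_wait = interval_minutes                                    # worst case: just missed a start
average_wait = interval_minutes / 2                            # on average, half an interval

print(channels_needed, max_wait, average_wait)   # 8 15 7.5
```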
Only the satellite services Dish Network and DirecTV continue to provide NVOD experiences. These satellite services provide NVOD because many of their customers have no access to the services' broadband VOD services.
Before the rise of video on demand, the pay-per-view provider In Demand provided up to 40 channels in 2002, with several films receiving up to four channels on the staggered schedule to provide the NVOD experience for viewers.
As of 2014, only four channels (two in high definition, two in standard definition) are provided by the service to facilitate live and event coverage, along with existing league out-of-market sports coverage channels (which vary by provider). In Australia, pay TV broadcaster Foxtel offers NVOD for new release movies.
Push video on demand is so named because the provider "pushes" the content out to the viewer's set-top box without the viewer having requested the content. This technique is used by a number of broadcasters on systems that lack the connectivity and bandwidth to provide true "streaming" video on demand.
Push VOD is also used by broadcasters who want to optimize their video streaming infrastructure by pre-loading the most popular content (e.g., that week's top ten films or shows) onto the consumers' set-top devices. In this way, the most popular content is already stored on a consumer's set-top DVR, so if the consumer requests one of those films, it is ready to play.
A push VOD system uses a personal video recorder (PVR) to store a selection of content, often transmitted in spare capacity overnight or all day long at low bandwidth. Users can watch the downloaded content at the time they desire, immediately and without any buffering issues. Push VOD depends on the viewer recording content, so choices can be limited.
As content occupies space on the PVR hard drive, downloaded content is usually deleted after a week to make way for newer programs or movies. The limited space on a PVR hard drive means that the selection of programs is usually restricted to the most popular content. A newer generation of push VOD solutions, using efficient error-correction mechanisms, can free up a significant amount of bandwidth and deliver more than video, e.g., digital versions of magazines and interactive applications.
Advertising video on demand (AVOD) is a VOD model that uses an advertising-based revenue model. This allows companies that advertise on broadcast and cable channels to reach people who watch shows using VOD. This model also allows people to watch programs without paying subscription fees.
Hulu has been one of the major AVOD companies, though the company ended free service in August 2016. Ads still run on the subscription service.
Yahoo View continues to offer a free AVOD model. Advertisers may find that people watching on VOD services do not want the same ads to appear multiple times.
Crackle has introduced the concept of a series of ads for the same company that tie in to what is being watched.
Click on any of the following blue hyperlinks for more about Video On Demand:
Internet Movie Database (IMDB)
YouTube Video: IMDB Top 250 in 2 1/2 Minutes
The Internet Movie Database (abbreviated IMDb) is an online database of information related to films, television programs and video games, including cast, production crew, fictional characters, biographies, plot summaries, trivia and reviews, operated by IMDb.com, Inc., a subsidiary of Amazon. As of June 2017, IMDb has approximately 4.4 million titles (including episodes) and 8 million personalities in its database, as well as 75 million registered users.
The site enables registered users to submit new material and edits to existing entries. Although all data is checked before going live, the system has been open to abuse and occasional errors are acknowledged. Users are also invited to rate any film on a scale of 1 to 10, and the totals are converted into a weighted mean-rating that is displayed beside each title, with online filters employed to deter ballot-stuffing.
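As an illustration of how such a weighted mean can blunt ballot-stuffing, here is a short Python sketch of a Bayesian-style weighted rating of the kind IMDb has historically described for its Top 250 chart; the parameter values (minimum votes, site-wide mean) are assumptions for illustration, since IMDb does not disclose its exact current weighting.

```python
# A sketch of a Bayesian-style weighted mean of the kind IMDb has historically
# described for its Top 250 chart: titles with few votes are pulled toward the
# overall site average, which also blunts ballot-stuffing. The parameter values
# below are illustrative only.
def weighted_rating(avg_rating, num_votes, min_votes=25000, site_mean=7.0):
    v, m = num_votes, min_votes
    return (v / (v + m)) * avg_rating + (m / (v + m)) * site_mean

print(round(weighted_rating(9.2, 2_000_000), 2))  # many votes: stays close to 9.2
print(round(weighted_rating(9.8, 500), 2))        # few votes: pulled toward the site mean
```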
The site also featured message boards for authenticated users, which IMDb shut down permanently on February 20, 2017.
Anyone with an Internet connection can view the movie and talent pages of IMDb. A registration process is necessary, however, to contribute information to the site. A registered user chooses a site name and is given a profile page.
This profile page shows how long a registered user has been a member, as well as personal movie ratings (should the user decide to display them). Registered users are also awarded badges representing how many contributions they have submitted.
These badges range from total contributions made, to independent categories such as photos, trivia, bios, etc. If a registered user or visitor happens to be in the entertainment industry, and has an IMDb page, that user/visitor can add photos to that page by enrolling in IMDbPRO.
Click on any of the following blue hyperlinks for more about IMDb:
- History
- IMDbPRO
- Television episodes
- Characters' filmography
- Instant viewing
- Content and format
- Ancillary features
- Litigation
- See also:
- AllMovie
- AllMusic – a similar database, but for music
- All Media Network – a commercial database launched by the Rovi Corporation that compiles information from the former services AllMovie and AllMusic
- Animator.ru
- Big Cartoon DataBase
- DBCult Film Institute
- Discogs
- Filmweb
- FindAnyFilm.com
- Flickchart
- Goodreads
- Internet Adult Film Database
- Internet Movie Cars Database (IMCDb)
- Internet Movie Firearms Database (IMFDb)
- Internet Book Database (IBookDb)
- Internet Broadway Database (IBDb)
- Internet Off-Broadway Database (IOBDb)
- Internet Speculative Fiction Database (ISFDb)
- Internet Theatre Database (ITDb)
- Letterboxd
- List of films considered the best
- List of films considered the worst
- TheTVDB
Travel Websites, including a List
YouTube Video: How to Book Your Own Flight
Click here for a List of Online Travel Websites.
A travel website is a website on the World Wide Web that is dedicated to travel. The site may be focused on travel reviews, trip fares, or a combination of both. Approximately seventy million consumers researched travel plans online in July 2006. Travel bookings are the single largest component of e-commerce, according to Forrester Research.
Many travel websites are online travelogues or travel journals, usually created by individual travelers and hosted by companies that generally provide their information to consumers for free. These companies generate revenue through advertising or by providing services to other businesses. This medium produces a wide variety of styles, often incorporating graphics, photography, maps, and other unique content.
Some examples of websites that use a combination of travel reviews and the booking of travel are TripAdvisor, Priceline, Liberty Holidays, and Expedia.
Service Providers:
Individual airlines, hotels, bed and breakfasts, cruise lines, automobile rental companies, and other travel-related service providers often maintain their own web sites providing retail sales. Many with complex offerings include some sort of search engine technology to look for bookings within a certain time frame, service class, geographic location, or price range.
Online Travel Agencies:
An online travel agency (OTA) specializes in offering planning sources and booking capabilities. Major OTAs include:
- Voyages-sncf.com – revenue €2.23 billion (2008)
- Expedia, Inc., including:
- Expedia.com,
- Hotels.com,
- Hotwire.com,
- Travelocity and others – revenue US$2.937 billion (2008),
- later expanded to include Orbitz Worldwide, Inc., including:
- Orbitz,
- CheapTickets,
- ebookers,
- and others –
- revenue US$870 million (2008)
- Sabre Holdings, including lastminute.com and others – revenue US$2.9 billion (2008)
- Opodo – revenue €1.3 billion (2008)
- The Priceline Group, including:
- Priceline.com,
- Booking.com,
- Agoda.com,
- Kayak.com,
- OpenTable
- and others
- revenue US$1.9 billion (2008)
- Travelgenio – revenue €344 million (2014)
- Wotif.com – revenue A$145 million (2012)
- Webjet – revenue A$59.3 million (2012)
Fare aggregators and metasearch engines:
The average consumer visits 3.6 sites when shopping for an airline ticket online, according to PhoCusWright, a Sherman, CT-based travel technology firm.
Yahoo claims 76% of all online travel purchases are preceded by some sort of search function, according to Malcolmson, director of product development for Yahoo Travel.
The 2004 Travel Consumer Survey published by Jupiter Research reported that "nearly two in five online travel consumers say they believe that no one site has the lowest rates or fares."
Thus a niche has existed for aggregate travel search engines that find the lowest rates across multiple travel sites, obviating the need for consumers to cross-shop from site to site; such travel searches occur quite frequently.
Metasearch engines are so named as they conduct searches across multiple independent search engines. Metasearch engines often make use of "screen scraping" to get live availability of flights. Screen scraping is a way of crawling through the airline websites, getting content from those sites by extracting data from the same HTML feed used by consumers for browsing (rather than using a Semantic Web or database feed designed to be machine-readable).
Metasearch engines usually process incoming data to eliminate duplicate entries, but may not expose "advanced search" options in the underlying databases (because not all databases support the same options).
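As a rough sketch of this pattern, the Python example below extracts fares from the kind of HTML a site serves to ordinary browsers and then merges and de-duplicates the results from two sources. The markup, site names, and prices are all invented for illustration; real sites differ, and scraping may be restricted by their terms of use.

```python
# A minimal sketch of the metasearch pattern described above: extract fares from
# the HTML that each source serves to ordinary browsers, then merge the results
# and drop duplicate entries. The HTML snippets and markup here are invented for
# illustration; real sites differ and scraping may be restricted by their terms.
import re

def scrape_fares(html: str, source: str):
    """Pull (airline, price) pairs out of a simple hypothetical markup."""
    pattern = re.compile(r'<li class="fare">(?P<airline>[^<]+) - \$(?P<price>\d+)</li>')
    return [(m["airline"], int(m["price"]), source) for m in pattern.finditer(html)]

site_a = '<ul><li class="fare">AirOne - $210</li><li class="fare">BlueJet - $185</li></ul>'
site_b = '<ul><li class="fare">BlueJet - $185</li><li class="fare">AirOne - $199</li></ul>'

results = scrape_fares(site_a, "site_a") + scrape_fares(site_b, "site_b")

# Deduplicate: keep one entry per (airline, price) pair, as a metasearch engine might.
seen, merged = set(), []
for airline, price, source in results:
    if (airline, price) not in seen:
        seen.add((airline, price))
        merged.append((airline, price, source))

for row in sorted(merged, key=lambda r: r[1]):   # cheapest fares first
    print(row)
```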
Fare aggregators redirect the users to an airline, cruise, hotel, or car rental site or Online Travel Agent for the final purchase of a ticket. Aggregators' business models include getting feeds from major OTAs, then displaying to the users all of the results on one screen. The OTA then fulfills the ticket. Aggregators generate revenues through advertising and charging OTAs for referring clients.
Examples of aggregate sites include:
- Bravofly,
- Cheapflights,
- Priceline,
- Expedia,
- Reservations.com,
- Kayak.com,
- Momondo,
- LowEndTicket,
- FareBuzz,
- and CheapOair.
Kayak.com is unusual in linking to online travel agencies and hotel web sites alike, allowing the customer to choose whether to book directly on the hotel web site or through an online travel agency. Google Hotel Finder is an experiment that allows users to find hotel prices through Google; however, it does not offer hotel booking, merely rate comparison.
The difference between a "fare aggregator" and "metasearch engine" is unclear, though different terms may imply different levels of cooperation between the companies involved.
In 2008, Ryanair threatened to cancel all bookings for Ryanair flights made through metasearch engines, but later allowed the sites to operate as long as they did not resell tickets or overload Ryanair's servers.
In 2015, Lufthansa Group (including Lufthansa, Austrian Airlines, Brussels Airlines and Swiss) announced that it would add a surcharge for flights booked through outside booking sites.
Bargain Websites:
Travel bargain websites collect and publish bargain rates by advising consumers where to find them online (sometimes but not always through a direct link). Rather than providing detailed search tools, these sites generally focus on offering advertised specials, such as last-minute sales from travel suppliers eager to deplete unused inventory; therefore, these sites often work best for consumers who are flexible about destinations and other key itinerary components.
Travel and tourism guides:
Many websites take the form of a digital version of a traditional guide book, aiming to provide advice on which destinations, attractions, accommodations, and so on, are worth a visit and providing information on how to access them.
Most states, provinces and countries have their own convention and visitor bureaus, which usually sponsor a website dedicated to promoting tourism in their respective regions. Cities that rely on tourism also operate websites promoting their destinations, such as VEGAS.com for Las Vegas, Nevada.
Student travel agencies:
Some travel websites cater specifically to the college student audience and list exclusive airfare deals and travel products. Significant sites in this area include StudentUniverse and STA Travel.
Social travel website:
A social travel website is a type of travel website that looks at where the user is going and pairs them with other places they may want to go, based on where other people have gone. This can help travelers gain insight into a destination's people and culture before the trip and become aware of places they may wish to visit.
Copyleft travel websites:
There are two travel websites where the rationale of crowdsourcing is clear to the contributor, as all edits are released under a copyleft license (CC BY-SA): the ad-free Wikivoyage, operated by the Wikimedia Foundation, and Wikitravel, run by a for-profit entity.
See also:
Employment Websites, including a List
YouTube Video: Is Applying for Jobs Online an Effective Way to Find Work?
by PBS News Hour
Click here for an alphabetical List of Employment Websites.
An employment website is a website that deals specifically with employment or careers.
Many employment websites are designed to allow employers to post job requirements for a position to be filled and are commonly known as job boards. Other employment sites offer employer reviews, career and job-search advice, and describe different job descriptions or employers. Through a job website a prospective employee can locate and fill out a job application or submit resumes over the Internet for the advertised position.
The Online Career Center was developed as a non-profit organization backed by forty major corporations to allow job hunters to post their resumes and for recruiters to post job openings.
In 1994 Robert J. McGovern began NetStart Inc. as software sold to companies for listing job openings on their websites and managing the incoming e-mails those listings generated. After an influx of two million dollars in investment capital, he moved this software to its own web address, at first listing the job openings from the companies that used the software.
NetStart Inc. changed its name in 1998 to operate under the name of their software, CareerBuilder. The company received a further influx of seven million dollars from investment firms such as New Enterprise Associates to expand their operations.
Six major newspapers joined forces in 1995 to list their classified sections online. The service was called CareerPath.com and featured help-wanted listings from the Los Angeles Times, the Boston Globe, Chicago Tribune, the New York Times, San Jose Mercury News and the Washington Post.
The industry attempted to reach a broader, less tech-savvy base in 1998 when Hotjobs.com attempted to buy a Super Bowl spot, but Fox rejected the ad for being in poor taste. The ad featured a janitor at a zoo sweeping out the elephant cage, completely unbeknownst to the animal. The elephant sits down briefly and, when it stands back up, the janitor has disappeared. The ad was meant to illustrate the plight of those stuck in jobs they hate and to offer a solution through the company's web site.
In 1999, Monster.com ran three 30-second Super Bowl ads for four million dollars. One ad, which featured children speaking like adults and drolly intoning their dreams of working at various dead-end jobs to humorous effect, was far more popular than rival Hotjobs.com's ad about a security guard who transitions from a low-paying security job to the same job at a fancier building.
Soon thereafter, Monster.com was elevated to the top spot of online employment sites. Hotjobs.com's ad wasn't as successful, but it gave the company enough of a boost for its IPO in August.
After being purchased in a joint venture by Knight Ridder and Tribune Company in July, CareerBuilder absorbed competitor boards CareerPath.com and then Headhunter.net, which had already acquired CareerMosaic.
Even with these aggressive mergers CareerBuilder still trailed behind the number one employment site Jobsonline.com, number two Monster.com and number three Hotjobs.com.
Monster.com made a move in 2001 to purchase Hotjobs.com for $374 million in stock, but was unsuccessful due to Yahoo's unsolicited cash-and-stock bid of $430 million late in the year. Yahoo had previously announced plans to enter the job board business, but decided to jump-start that venture by purchasing the established brand.
In February 2010, Monster acquired HotJobs from Yahoo for $225 million.
Features and Types:
The success of job search engines in bridging the gap between job-seekers and employers has spawned thousands of job sites, many of which list job opportunities in a specific sector, such as education, health care, hospital management, academia, and the non-governmental sector. These sites range from broad, all-purpose generalist job boards to niche sites that serve particular audiences, geographies, and industries. Many industry experts encourage job-seekers to concentrate on industry-specific sites.
Job Postings:
A job board is a website that facilitates job hunting; job boards range from large-scale generalist sites to niche boards for job categories such as engineering, legal, insurance, social work, teaching and mobile app development, as well as cross-sector categories such as green jobs, ethical jobs and seasonal jobs. Users can typically deposit their résumés and submit them to potential employers and recruiters for review, while employers and recruiters can post job ads and search for potential employees.
The term job search engine might refer to a job board with a search engine style interface, or to a web site that actually indexes and searches other web sites.
Niche job boards are starting to play a bigger role in providing more targeted vacancies to candidates and more targeted candidates to employers. Job boards such as airport jobs and federal jobs, among others, offer a focused way of reducing the time it takes to find and apply to the most appropriate role. USAJobs.gov is the United States' official website for federal jobs; it gathers job listings from over 500 federal agencies.
Metasearch and vertical search engines:
Some web sites are simply search engines that collect results from multiple independent job boards. This is an example of both metasearch (since these are search engines which search other search engines) and vertical search (since the searches are limited to a specific topic - job listings).
Some of these new search engines primarily index traditional job boards. These sites aim to provide a "one-stop shop" for job-seekers who don't need to search the underlying job boards.
In 2006, tensions developed between the job boards and several scraper sites, with Craigslist banning scrapers from its job classifieds and Monster.com specifically banning scrapers through its adoption of a robots exclusion standard on all its pages while others have embraced them.
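The robots exclusion standard mentioned above is a plain-text file (robots.txt) that a site publishes to tell crawlers which pages they may fetch. A polite scraper can check it before indexing a job board. The sketch below uses Python's standard urllib.robotparser with a placeholder domain; it does not reproduce any real site's actual robots.txt rules.

    from urllib import robotparser

    # Placeholder site; real job boards publish their own robots.txt rules.
    parser = robotparser.RobotFileParser()
    parser.set_url("https://www.example.com/robots.txt")
    parser.read()  # fetches and parses the robots.txt file

    # A crawler should skip any URL the site disallows for its user agent.
    if parser.can_fetch("JobScraperBot", "https://www.example.com/listings/12345"):
        print("Allowed to fetch this listing")
    else:
        print("Disallowed by robots.txt; the scraper must not index this page")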
Indeed.com, a "job aggregator", collects job postings from employer websites, job boards, online classifieds, and association websites. Simply Hired is another large aggregator collecting job postings from many sources.
LinkUp is a job search engine ("job aggregator") that indexes pages only from employers' websites, choosing to bypass traditional job boards entirely. These vertical search engines allow jobseekers to find new positions that may not be advertised on the traditional job boards.
Industry specific posting boards are also appearing. These consolidate all the vacancies in a very specific industry. The largest "niche" job board is Dice.com which focuses on the IT industry. Many industry and professional associations offer members a job posting capability on the association website.
Employer review website:
An employer review website is a type of employment website where past and current employees post comments about their experiences working for a company or organization. An employer review website usually takes the form of an internet forum.
Typical comments are about management, working conditions, and pay. Although employer review websites may produce links to potential employers, they do not necessarily list vacancies.
Pay For Performance (PFP):
The most recent, second generation of employment websites, often referred to as pay for performance (PFP), involves charging job seekers for membership services rendered.
Websites providing information and advice for employees, employers and job seekers:
Although many sites that provide access to job advertisements include pages with advice about writing resumes and CVs, performing well in interviews, and other topics of interest to job seekers, there are sites that specialize in providing information of this kind rather than job opportunities.
One such is Working in Canada. It does provide links to the Canadian Job Bank. However, most of its content is information about local labor markets (in Canada), requirements for working in various occupations, information about relevant laws and regulations, government services and grants, and so on. Most items could be of interest to people in various roles and conditions including those considering career options, job seekers, employers and employees.
Risks:
Many job search engines and job boards encourage users to post their resume and contact details. While this is attractive for the site operators (who sell access to the resume bank to headhunters and recruiters), job-seekers should exercise caution in uploading personal information, since they have no control over where their resume will eventually be seen.
Their resume may be viewed by a current employer or, worse, by criminals who may use information from it to amass and sell personal contact information, or even perpetrate identity theft.
See Also:
Websites, including a List
YouTube Video: How To Make a WordPress Website - 2017 - Create Almost Any Website!
Pictured: NASA Website Home Page, courtesy of NASA, Page Editor: Jim Wilson, NASA Official: Brian Dunbar - /, Public Domain
Click Here for a List of Websites by Type, Subject, and Other.
A website, or simply site, is a collection of related web pages, including multimedia content, typically identified with a common domain name, and published on at least one web server. A website may be accessible via a public Internet Protocol (IP) network, such as the Internet, or a private local area network (LAN), by referencing a uniform resource locator (URL) that identifies the site.
Websites have many functions and can be used in various fashions; a website can be a personal website, a commercial website for a company, a government website or a non-profit organization website. Websites are typically dedicated to a particular topic or purpose, ranging from entertainment and social networking to providing news and education.
All publicly accessible websites collectively constitute the World Wide Web, while private websites, such as a company's website for its employees, are typically a part of an intranet.
Web pages, which are the building blocks of websites, are documents, typically composed in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). They may incorporate elements from other websites with suitable markup anchors.
Web pages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS) to provide security and privacy for the user. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal.
Hyperlinking between web pages conveys to the reader the site structure and guides the navigation of the site, which often starts with a home page containing a directory of the site web content.
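To make these mechanics concrete, the short Python sketch below fetches a page over HTTPS and lists the hyperlinks it contains, using only the standard library. The address is a placeholder; any publicly reachable web page works the same way.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collect the href attribute of every anchor (<a>) tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    # Placeholder address; the request is made over HTTP(S).
    with urlopen("https://www.example.com/") as response:
        html = response.read().decode("utf-8", errors="replace")

    collector = LinkCollector()
    collector.feed(html)
    print(collector.links)  # the hyperlinks that guide navigation from this page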
Some websites require user registration or subscription to access content. Examples of subscription websites include many business sites, news websites, academic journal websites, gaming websites, file-sharing websites, message boards, web-based email, social networking websites, websites providing real-time stock market data, as well as sites providing various other services. As of 2016 end users can access websites on a range of devices, including desktop and laptop computers, tablet computers, smartphones and smart TVs.
History: Main article: History of the World Wide Web
The World Wide Web (WWW) was created in 1990 by the British CERN physicist Tim Berners-Lee. On 30 April 1993, CERN announced that the World Wide Web would be free to use for anyone. Before the introduction of HTML and HTTP, other protocols such as File Transfer Protocol and the gopher protocol were used to retrieve individual files from a server. These protocols offer a simple directory structure which the user navigates and chooses files to download. Documents were most often presented as plain text files without formatting, or were encoded in word processor formats.
Overview:
Websites have many functions and can be used in various fashions; a website can be a personal website, a commercial website, a government website or a non-profit organization website. Websites can be the work of an individual, a business or other organization, and are typically dedicated to a particular topic or purpose.
Any website can contain a hyperlink to any other website, so the distinction between individual sites, as perceived by the user, can be blurred. Websites are written in, or converted to, HTML (Hyper Text Markup Language) and are accessed using a software interface classified as a user agent.
Web pages can be viewed or otherwise accessed from a range of computer-based and Internet-enabled devices of various sizes, including desktop computers, laptops, PDAs and cell phones. A website is hosted on a computer system known as a web server, also called an HTTP (Hyper Text Transfer Protocol) server.
These terms can also refer to the software that runs on these systems which retrieves and delivers the web pages in response to requests from the website's users. Apache is the most commonly used web server software (according to Netcraft statistics) and Microsoft's IIS is also commonly used. Some alternatives, such as Nginx, Lighttpd, Hiawatha or Cherokee, are fully functional and lightweight.
Static Website: Main article: Static web page
A static website is one that has web pages stored on the server in the format that is sent to a client web browser. It is primarily coded in Hypertext Markup Language (HTML); Cascading Style Sheets (CSS) are used to control appearance beyond basic HTML. Images are commonly used to effect the desired appearance and as part of the main content. Audio or video might also be considered "static" content if it plays automatically or is generally non-interactive.
This type of website usually displays the same information to all visitors. Similar to handing out a printed brochure to customers or clients, a static website will generally provide consistent, standard information for an extended period of time. Although the website owner may make updates periodically, it is a manual process to edit the text, photos and other content and may require basic website design skills and software.
Simple or marketing-oriented websites, such as a classic five-page website or a brochure website, are often static websites, because they present pre-defined, static information to the user. This may include information about a company and its products and services through text, photos, animations, audio/video, and navigation menus.
Static websites can be edited using four broad categories of software:
- Text editors, such as Notepad or TextEdit, where content and HTML markup are manipulated directly within the editor program
- WYSIWYG offline editors, such as Microsoft FrontPage and Adobe Dreamweaver (previously Macromedia Dreamweaver), with which the site is edited using a GUI and the final HTML markup is generated automatically by the editor software
- WYSIWYG online editors, which create media-rich online presentations such as web pages, widgets, intros, blogs, and other documents.
- Template-based editors such as iWeb allow users to create and upload web pages to a web server without detailed HTML knowledge, as they pick a suitable template from a palette and add pictures and text to it in a desktop publishing fashion without direct manipulation of HTML code.
Static websites may still use server side includes (SSI) as an editing convenience, such as sharing a common menu bar across many pages. As the site's behaviour to the reader is still static, this is not considered a dynamic site.
Dynamic Website: Main articles: Dynamic web page and Web application
A dynamic website is one that changes or customizes itself frequently and automatically. Server-side dynamic pages are generated "on the fly" by computer code that produces the HTML (the CSS files that control appearance remain static).
There are a wide range of software systems, such as CGI, Java Servlets and Java Server Pages (JSP), Active Server Pages and ColdFusion (CFML) that are available to generate dynamic web systems and dynamic sites. Various web application frameworks and web template systems are available for general-use programming languages like Perl, PHP, Python and Ruby to make it faster and easier to create complex dynamic websites.
A site can display the current state of a dialogue between users, monitor a changing situation, or provide information in some way personalized to the requirements of the individual user. For example, when the front page of a news site is requested, the code running on the web server might combine stored HTML fragments with news stories retrieved from a database or another website via RSS to produce a page that includes the latest information.
Dynamic sites can be interactive by using HTML forms, storing and reading back browser cookies, or by creating a series of pages that reflect the previous history of clicks. Another example of dynamic content is when a retail website with a database of media products allows a user to input a search request, e.g. for the keyword Beatles.
In response, the content of the web page changes from how it looked before and displays a list of Beatles products such as CDs, DVDs and books. Dynamic HTML uses JavaScript code to instruct the web browser how to interactively modify the page contents. One way to simulate a certain type of dynamic website, while avoiding the performance loss of initiating the dynamic engine on a per-user or per-connection basis, is to periodically and automatically regenerate a large series of static pages.
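A toy illustration of the "generated on the fly" idea: the minimal Python WSGI application below builds its HTML per request, filtering a small in-memory catalogue by the q query parameter (for example /?q=beatles). It is a sketch under invented data, not how any particular retail site is implemented.

    from urllib.parse import parse_qs
    from wsgiref.simple_server import make_server

    # Hypothetical product catalogue standing in for a database of media products.
    CATALOGUE = ["Beatles - Abbey Road (CD)", "Beatles Anthology (DVD)",
                 "Rolling Stones - Let It Bleed (CD)"]

    def app(environ, start_response):
        # Read the search keyword from the query string, e.g. /?q=beatles
        query = parse_qs(environ.get("QUERY_STRING", "")).get("q", [""])[0].lower()
        hits = [item for item in CATALOGUE if query and query in item.lower()]
        body = "<html><body><h1>Results</h1><ul>"
        body += "".join("<li>%s</li>" % item for item in hits)
        body += "</ul></body></html>"
        start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
        return [body.encode("utf-8")]

    if __name__ == "__main__":
        # Serve the dynamically generated page locally on port 8000.
        make_server("localhost", 8000, app).serve_forever()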
Click on any of the following blue hyperlinks for more about a Website:
- Multimedia and interactive content
- Spelling
- Types
- See also:
- Link rot
- Nanosite, a mini website
- Site map
- Web content management system
- Web design
- Web development
- Web development tools
- Web hosting service
- Web template
- Website governance
- Website monetization
- World Wide Web Consortium (Web standards)
- Internet Corporation For Assigned Names and Numbers (ICANN)
- World Wide Web Consortium (W3C)
- The Internet Society (ISOC)
Real Estate Websites including Online real estate databases
YouTube Video: Find Homes For SALE - Find EVERY Open House - RedFin, Zillow, Realtor.com - Real Estate Investing
Click here for a List of Online Real Estate Databases.
An electronic version of the real estate industry, internet real estate is the concept of publishing properties for sale or rent online, for consumers seeking to buy or rent a property. Often, internet real estate listings are posted by the landlords themselves.
However, there are a few exceptions in which an online real estate agent exists, still dealing via the web and often charging a flat fee rather than a commission based on a percentage of the sale price. Internet real estate surfaced around 1999 as technology advanced; statistics show that more than 1 million homes were sold by their owners in the United States alone in 2000. Some of the prime internet real estate platforms include:
- Zillow,
- Trulia,
- Yahoo! Real Estate,
- Redfin
- and Realtor.com.
According to Realtor, 90% of home buyers searched online while seeking a property, and the share of consumers searching Google for real estate information has increased by 253% over the last four years. In the UK, the share of house sales carried out over the internet rose from essentially 0% to 5.5% over the last decade, and figures suggest it will reach 50% by 2018 and 70% by 2020, with only a third of the UK population seeking help through traditional real estate agents.
The process usually begins with owners listing their homes, with a quoted price, on online platforms such as Trulia, Yahoo! Real Estate, Cyberhomes, The New York Times and even eBay. The more platforms on which owners list their properties, the greater the diffusion of information.
For buyers seeking a property, search engines are usually the first stop. "69% of home shoppers who take action on a real estate brand website begin their research with a local term, e.g. 'Houston homes for sale', on a search engine," reports Realtor.
Once a potential buyer contacts the seller, the two go through the details of the property – size, amenities, condition, and pricing, if not already stated. After that, an appointment to view the property is usually scheduled, and in some cases potential buyers may request the refurbishment of certain amenities or parts of the property.
If terms and conditions are met by both parties, the buyer will usually negotiate for the best offer, and a deposit may be requested by the owner. Finally, both parties agree on a date for full payment, the signing of the official paperwork, and the handover of keys to the property.
Click on any of the following blue hyperlinks for more about Online Real Estate:
- Design
- Advantages
- Convenience
- Information load and reviews
- Direct communications and transactions
- Disadvantages
- Insufficient and inaccurate information
- Copyright
- Target audience
- Human interaction
- Sustainability
- Impacts
Peer-to-Peer Lending Websites including a List of Lending Companies
YouTube Video #1: Lending Club Review - Is it a Good Investment? (by GoodFinancialCents.com)
YouTube Video #2: Is peer to peer lending safe?
Pictured: Comparing Peer-to-Peer Lending Companies by Dyer News
Click here for a List of Peer-to-Peer Lending Companies.
Peer-to-peer lending, sometimes abbreviated P2P lending, is the practice of lending money to individuals or businesses through online services that match lenders with borrowers. Since peer-to-peer lending companies offering these services generally operate online, they can run with lower overhead and provide the service more cheaply than traditional financial institutions.
As a result, lenders can earn higher returns compared to savings and investment products offered by banks, while borrowers can borrow money at lower interest rates, even after the P2P lending company has taken a fee for providing the match-making platform and credit checking the borrower.
There is the risk of the borrower defaulting on the loans taken out from peer-lending websites.
Also known as crowdlending, many peer-to-peer loans are unsecured personal loans, though some of the largest amounts are lent to businesses. Secured loans are sometimes offered by using luxury assets such as jewelry, watches, vintage cars, fine art, buildings, aircraft and other business assets as collateral. They are made to an individual, company or charity. Other forms of peer-to-peer lending include student loans, commercial and real estate loans, payday loans, as well as secured business loans, leasing, and factoring.
The interest rates can be set by lenders who compete for the lowest rate on the reverse auction model or fixed by the intermediary company on the basis of an analysis of the borrower's credit.
The lender's investment in the loan is not normally protected by any government guarantee.
On some services, lenders mitigate the risk of bad debt by choosing which borrowers to lend to, and mitigate total risk by diversifying their investments among different borrowers. Other models involve the P2P lending company maintaining a separate, ringfenced fund, such as RateSetter's Provision Fund, which pays lenders back in the event the borrower defaults, but the value of such provision funds for lenders is subject to debate.
The lending intermediaries are for-profit businesses; they generate revenue by collecting a one-time fee on funded loans from borrowers and by assessing a loan servicing fee to investors (a structure that is tax-disadvantaged in the UK compared with charging borrowers) or to borrowers (either a fixed amount annually or a percentage of the loan amount). Compared to stock markets, peer-to-peer lending tends to have both less volatility and less liquidity.
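As a rough, hypothetical illustration of how interest, servicing fees, and defaults interact for a lender (the numbers below are invented, not taken from any platform), a simplified one-period expectation can be sketched in Python:

    def expected_lender_return(rate, servicing_fee, default_rate, recovery=0.0):
        """Very simplified one-period expectation for an unsecured P2P loan.

        rate          -- gross interest rate charged to the borrower (e.g. 0.08)
        servicing_fee -- annual fee the platform charges the lender (e.g. 0.01)
        default_rate  -- probability the borrower defaults during the period
        recovery      -- fraction of principal recovered after a default
        """
        performing = (1 - default_rate) * (1 + rate - servicing_fee)
        defaulted = default_rate * recovery
        return performing + defaulted - 1  # net return on one unit of principal

    # Spreading 1,000 across 100 borrowers instead of one reduces the variance of
    # the outcome; the expected return per unit lent stays the same.
    print(round(expected_lender_return(rate=0.08, servicing_fee=0.01, default_rate=0.03), 4))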
Click on any of the following blue hyperlinks for more about Peer-to-Peer Lending:
Spotify.Com
YouTube Video: How to Use Spotify by C/Net
Logo Pictured Below
Spotify is a music, podcast, and video streaming service, officially launched on 7 October 2008. It is developed by startup Spotify AB in Stockholm, Sweden. It provides digital rights management-protected content from record labels and media companies.
Spotify is a freemium service, meaning that basic features are free with advertisements, while additional features, including improved streaming quality and offline music downloads, are offered via paid subscriptions.
Spotify is available in most of Europe, most of the Americas, Australia, New Zealand and parts of Asia.
Spotify is available for most modern devices, including Windows, macOS, and Linux computers, as well as iOS and Android smartphones and tablets.
Music can be browsed or searched for via various parameters, such as artist, album, genre, playlist, or record label. Users can create, edit and share playlists, share tracks on social media, and make playlists with other users. Spotify provides access to over 30 million songs. As of June 2017, it has over 140 million monthly active users, and as of July 2017, it has over 60 million paying subscribers.
Unlike physical or download sales, which pay artists a fixed price per song or album sold, Spotify pays royalties based on the number of artists' streams as a proportion of total songs streamed on the service. They distribute approximately 70% of total revenue to rights holders, who then pay artists based on their individual agreements.
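A simplified sketch of that pro-rata arithmetic in Python (all figures hypothetical; real payouts depend on the individual agreements mentioned above):

    def pro_rata_payouts(total_revenue, streams_by_rights_holder, payout_share=0.70):
        """Split roughly 70% of revenue among rights holders by share of total streams."""
        pool = total_revenue * payout_share
        total_streams = sum(streams_by_rights_holder.values())
        return {holder: pool * streams / total_streams
                for holder, streams in streams_by_rights_holder.items()}

    # Hypothetical month: $1,000,000 of revenue and three rights holders.
    print(pro_rata_payouts(1000000, {"Label A": 6000000,
                                     "Label B": 3000000,
                                     "Indie C": 1000000}))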
Spotify has faced criticism from artists and producers, including Taylor Swift and Radiohead singer Thom Yorke, who feel it does not fairly compensate music creators as music sales decline and streaming increases. In April 2017, as part of its efforts to negotiate new license deals with record labels amid a reported interest in going public, Spotify announced that artists on Universal Music Group and the Merlin Network would be able to make their new album releases exclusively available on the service's Premium tier for a maximum of two weeks.
For more about Spotify, click on any of the following blue hyperlinks:
- Business model
- History
- 2009–2010
- 2011–2012
- 2013–2016
- 2017
- Accounts and subscriptions
- Monetization
- Funding
- Advertisements
- Downloads
- Spotify for Artists
- Platforms
- Features
- Playlists
- Listening limitations
- Technical information
- Geographic availability
- History of expansion
- Early development
- Criticism
- D. A. Wallach
- Artist withdrawals
- Content withdrawals and delays
- Other criticism
Yelp(.com)
YouTube Video: Yelp CEO on site’s popularity and pitfalls
Yelp is an American multinational corporation headquartered in San Francisco, California. It develops, hosts and markets Yelp.com and the Yelp mobile app, which publish crowd-sourced reviews about local businesses, as well as the online reservation service Yelp Reservations and online food-delivery service Eat24.
The company also trains small businesses in how to respond to reviews, hosts social events for reviewers, and provides data about businesses, including health inspection scores.
Yelp was founded in 2004 by former PayPal employees Russel Simmons and Jeremy Stoppelman. Yelp grew quickly and raised several rounds of funding.
By 2010 it had $30 million in revenues and the website had published more than 4.5 million crowd-sourced reviews. From 2009 to 2012, Yelp expanded throughout Europe and Asia.
In 2009 Yelp entered several negotiations with Google for a potential acquisition. Yelp became a public company in March 2012 and became profitable for the first time two years later. As of 2016, Yelp.com has 135 million monthly visitors and 95 million reviews. The company's revenues come from businesses advertising.
According to BusinessWeek, Yelp has a complicated relationship with small businesses. Criticism of Yelp focuses on the legitimacy of reviews, public statements of Yelp manipulating and blocking reviews in order to increase ad spending, as well as concerns regarding the privacy of reviewers.
Click on any of the following blue hyperlinks for more about Yelp:
- Company history (2004–2016)
- Origins (2004–2009)
- Private company (2009–2012)
- Public entity (2012–present)
- Features
- Relationship with businesses
- Community
- See also:
Angie's List
YouTube Video: Introducing Angi | Your Home For Everything Home
Pictured: "Newly Free Angie’s List Will Increase Appeal of Small Biz Listings" by Small Business Trends
Angie's List is an American home services website. Founded in 1995, it is an online directory that allows users to read and publish crowd-sourced reviews of local businesses and contractors. Formerly a subscription-only service, Angie's List added a free membership tier in July 2016.
For the quarter ending on June 30, 2016, Angie's List reported total revenue of US$83,000,000 and a net income of US$4,797,000.
On May 1, 2017, the Wall Street Journal reported that IAC planned to buy Angie's List. The new publicly traded company would be called ANGI Homeservices Inc.
Click on any of the following blue hyperlinks for more about "Angie's List":
Pandora Radio
YouTube Video: How to Use Pandora Radio to Find the Best Music
Pictured: Pandora Website
Pandora Internet Radio (also known as Pandora Radio or simply Pandora) is a music streaming and automated music recommendation service powered by the Music Genome Project.
As of 1 August 2017, the service, operated by Pandora Media, Inc., is available only in the United States.
The service plays songs that have similar musical traits. The user then provides positive or negative feedback (as "thumbs up" or "thumbs down") for songs chosen by the service, and the feedback is taken into account in the subsequent selection of other songs to play.
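To make the feedback loop concrete, the following is a minimal, hypothetical Python sketch of trait-based song selection adjusted by thumbs-up/thumbs-down feedback. It is not Pandora's actual algorithm; the song names, traits, and weights are invented purely for illustration.

# Hypothetical sketch of feedback-weighted song selection (not Pandora's algorithm).
SONGS = {
    "Song A": {"tempo": 0.8, "acoustic": 0.2, "vocals": 0.9},
    "Song B": {"tempo": 0.7, "acoustic": 0.3, "vocals": 0.8},
    "Song C": {"tempo": 0.2, "acoustic": 0.9, "vocals": 0.4},
}

def similarity(a, b):
    """Trait similarity: 1 minus the mean absolute difference across traits."""
    return 1 - sum(abs(a[k] - b[k]) for k in a) / len(a)

def next_song(seed, feedback):
    """Pick the candidate most similar to the seed song, shifted by feedback.

    feedback maps a song title to +1 (thumbs up) or -1 (thumbs down).
    """
    def score(title):
        return similarity(SONGS[seed], SONGS[title]) + feedback.get(title, 0)
    return max((t for t in SONGS if t != seed), key=score)

# A thumbs-down on "Song B" steers the next pick toward "Song C".
print(next_song("Song A", feedback={"Song B": -1}))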
The service can be accessed either through a web browser or by downloading and installing application software on the user's device such as a personal computer or mobile phone.
Click on any of the following blue hyperlinks for more about Pandora Radio:
- History
- Features
- Streaming
Limitations
Mobile devices
- Technical information
- Business model
- Royalties
- Reception
- Advertising
- Revenue
Pitch to advertisers
Methods of advertising
Market segments
- Internet radio competitors
- Owned and operated stations
- See also:
- List of Internet radio stations
- List of online music databases
- Official website
- Pandora featured in Fast Company
- The Flux podcast interview with Tim Westergren, founder of Pandora
- Pandora feature on WNBC-TV
- Closing Pandora's Box: The End of Internet Radio?, May 3, 2007 interview with Tim Westergren
- Pandora adds classical music
- Interview with Tim Westergren about the Music Genome Project and Pandora
- Dave Dederer & nuTsie Challenge Pandora
- Inc. Magazine profile of Tim Westergren
- New York Times article on Tim Westergren and Pandora
- Pink Floyd: Pandora's Internet radio royalty ripoff USA TODAY, 2013
iHeartRadio
YouTube Video about iHeartRadio: Unlimited Music & Free Radio in One App
iHeartRadio is a radio network and Internet radio platform owned by iHeartMedia, Inc.
Founded in April 2008 as the website iheartmusic.com, as of 2015 iHeartRadio functions both as a music recommender system and as a radio network that aggregates audio content from over 800 local iHeartMedia radio stations across the United States, as well as from hundreds of other stations and from various other media (with companies such as Cumulus Media, Cox Radio and Beasley Broadcast Group also utilizing this service).
iHeartRadio, created by Mary Beth Fitzgerald, is available online, via mobile devices, and on select video-game consoles.
iHeartRadio was ranked No. 4 on AdAge's Entertainment A-List in 2010.
Since 2011, it has held the annual iHeartRadio Music Festival.
In 2014, iHeartRadio launched an awards show, the iHeartRadio Music Awards, and regularly produces concerts in Los Angeles and New York through its iHeartRadio Theater locations.
Click on any of the following blue hyperlinks for more about iHeartRadio:
- History
- Availability and supported devices
- Mobile
Home
Automotive
Wearables
- Functionality and rating system
- Limitations
- See also:
Internet Privacy and the Right to Privacy including Repeal of the FCC Privacy Rules (as reported by the Washington Post April 4, 2017)
YouTube Video about the Right to Privacy on the Internet*
* -- As reported by CBS News, 24-year-old Ashley Payne was forced to resign from her position as a public high school teacher after a student allegedly complained about a Facebook photo of Payne holding alcoholic beverages, claiming it promoted drinking. 48 Hours' Erin Moriarty investigates our ever-changing rights to privacy.
Trump has signed repeal of the FCC privacy rules. Here’s what happens next. (by Brian Fung of The Washington Post April 4, 2017)
"President Trump signed congressional legislation Monday night that repeals the Federal Communications Commission's privacy protections for Internet users, rolling back a landmark policy from the Obama era and enabling Internet providers to compete with Google and Facebook in the online ad market.The Obama-backed rules — which would have taken effect later this year — would have banned Internet providers from collecting, storing, sharing and selling certain types of customer information without user consent.
Data such as a consumer's Web browsing history, app usage history, location details and more would have required a customer's explicit permission before companies such as Verizon and Comcast could mine the information for advertising purposes.
Evan Greer, campaign director for the Internet activism group Fight for the Future, condemned the move, saying it was “deeply ironic” for Trump to sign the legislation while complaining about the privacy of his own communications in connection with the FBI's probe into his campaign's possible links with Russia.
“The only people in the United States who want less Internet privacy are CEOs and lobbyists for giant telecom companies, who want to rake in money by spying on all of us and selling the private details of our lives to marketing companies,” said Greer.
Trump signed the legislation with little fanfare Monday evening, a contrast to other major executive actions he has taken from the Oval Office. The move prohibits the FCC from passing similar privacy regulations in the future. And it paves the way for Internet providers to compete in the $83 billion market for digital advertising.
By watching where their customers go online, providers may understand more about their users' Internet habits and present those findings to third parties. While companies such as Comcast have pledged not to sell the data of individual customers, those commitments are voluntary and as a result of Trump's signature, not backed by federal regulation.
Trump's FCC chairman, Ajit Pai, said the Federal Trade Commission, not the FCC, should regulate Internet providers' data-mining practices. “American consumers’ privacy deserves to be protected regardless of who handles their personal information,” he said in a statement Monday evening.
The FTC currently has guidelines for how companies such as Google and Facebook may use customers' information. Those websites are among the world's biggest online advertisers, and Internet providers are eager to gain a slice of their market share. But critics of the FCC privacy rules argued that the regulations placed stricter requirements on broadband companies than on tech firms, creating an imbalance that could only be resolved by rolling back the FCC rules and designing something new.
The FTC is empowered to bring lawsuits against companies that violate its privacy guidelines, but it has no authority to create new rules for industry. It also currently cannot enforce its own guidelines against Internet providers due to a government rule that places those types of companies squarely within the jurisdiction of the FCC and out of the reach of the FTC.
As a result, Internet providers now exist in a “policy gap” in which the only privacy regulators for the industry operate at the state, not federal, level, analysts say. They add that policymakers are likely to focus next on how to resolve that contradiction as well as look for ways to undo net neutrality, another Obama-era initiative that bans Internet providers from discriminating against websites."
END of Washington Post Article.
___________________________________________________________________________
Internet privacy involves the right or mandate of personal privacy concerning the storing, re-purposing, provision to third parties, and displaying of information pertaining to oneself via the Internet.
Internet privacy is a subset of data privacy. Privacy concerns have been articulated from the beginnings of large scale computer sharing.
Privacy can entail either Personally Identifying Information (PII) or non-PII information such as a site visitor's behavior on a website. PII refers to any information that can be used to identify an individual. For example, age and physical address alone could identify who an individual is without explicitly disclosing their name, as these two factors are unique enough to typically identify a specific person.
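As a rough illustration of how a few non-name attributes can function as PII, the Python sketch below counts how many records in a toy data set share the same combination of age and postal code; a combination that occurs only once singles out one individual. The data and field names are invented for illustration.

from collections import Counter

# Toy records: no names, yet (age, zip_code) together may still identify someone.
records = [
    {"age": 34, "zip_code": "46220"},
    {"age": 34, "zip_code": "46220"},
    {"age": 52, "zip_code": "90210"},  # unique combination -> re-identifiable
    {"age": 29, "zip_code": "10001"},  # unique combination -> re-identifiable
]

combos = Counter((r["age"], r["zip_code"]) for r in records)
unique = [combo for combo, count in combos.items() if count == 1]
print(f"{len(unique)} of {len(combos)} attribute combinations match exactly one record")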
Some experts, such as Steve Rambam, a private investigator specializing in Internet privacy cases, believe that privacy no longer exists, saying, "Privacy is dead – get over it". In fact, it has been suggested that the "appeal of online services is to broadcast personal information on purpose."
On the other hand, in his essay The Value of Privacy, security expert Bruce Schneier says, "Privacy protects us from abuses by those in power, even if we're doing nothing wrong at the time of surveillance."
Levels of Privacy:
Internet and digital privacy are viewed differently from traditional expectations of privacy. Internet privacy is primarily concerned with protecting user information.
Law professor Jerry Kang explains that the term privacy expresses space, decision, and information. In terms of space, individuals have an expectation that their physical spaces (e.g., homes, cars) not be intruded upon. Privacy within the realm of decision is best illustrated by the landmark case Roe v. Wade. Lastly, information privacy concerns the collection of user information from a variety of sources, which generates considerable discussion.
The 1997 Information Infrastructure Task Force (IITF) created under President Clinton defined information privacy as "an individual's claim to control the terms under which personal information--information identifiable to the individual--is acquired, disclosed, and used."
At the end of the 1990s, with the rise of the Internet, it became clear that the internet and companies would need to abide by new rules to protect individuals' privacy. With the rise of the internet and mobile networks, internet privacy has become a daily concern for users.
People with only a casual concern for Internet privacy need not achieve total anonymity. Internet users may protect their privacy through controlled disclosure of personal information.
The revelation of IP addresses, non-personally-identifiable profiling, and similar information might become acceptable trade-offs for the convenience that users could otherwise lose using the workarounds needed to suppress such details rigorously.
On the other hand, some people desire much stronger privacy. In that case, they may try to achieve Internet anonymity to ensure privacy — use of the Internet without giving any third parties the ability to link the Internet activities to personally-identifiable information of the Internet user. In order to keep their information private, people need to be careful with what they submit to and look at online.
Information submitted when filling out forms or buying merchandise can be tracked, and because it is not kept private, some companies now send Internet users spam and advertising for similar products.
There are also several governmental organizations that protect individuals' privacy and anonymity on the Internet, to a point. In an article presented by the FTC in October 2011, a number of pointers were brought to attention that help an individual internet user avoid possible identity theft and other cyber-attacks.
Preventing or limiting the usage of Social Security numbers online, being wary and respectful of emails including spam messages, being mindful of personal financial details, creating and managing strong passwords, and intelligent web-browsing behaviours are recommended, among others.
Posting things on the Internet can be harmful or expose people to malicious attack. Some information posted on the Internet is permanent, depending on the terms of service and privacy policies of particular services offered online.
This can include comments written on blogs, pictures, and Internet sites such as Facebook and Twitter. Once something is posted, it is absorbed into cyberspace and anyone can potentially find and access it. Some employers may research a potential employee by searching online for the details of their online behaviors, possibly affecting the candidate's chances of success.
Risks to Internet Privacy:
Companies are hired to watch what internet sites people visit, and then use the information, for instance by sending advertising based on one's browsing history. There are many ways in which people can divulge their personal information, for instance by use of "social media" and by sending bank and credit card information to various websites.
Moreover, directly observed behavior, such as browsing logs, search queries, or contents of the Facebook profile can be automatically processed to infer potentially more intrusive details about an individual, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality.
Those concerned about Internet privacy often cite a number of privacy risks — events that can compromise privacy — which may be encountered through Internet use. These range from the gathering of statistics on users to more malicious acts such as the spreading of spyware and the exploitation of various forms of bugs (software faults).
Several social networking sites try to protect the personal information of their subscribers. On Facebook, for example, privacy settings are available to all registered users: they can block certain individuals from seeing their profile, they can choose their "friends", and they can limit who has access to one's pictures and videos. Privacy settings are also available on other social networking sites such as Google Plus and Twitter. The user can apply such settings when providing personal information on the internet.
In late 2007, Facebook launched the Beacon program, through which users' rental records were released to the public for friends to see. Many people were enraged by this breach of privacy, and the Lane v. Facebook, Inc. case ensued.
Children and adolescents often use the Internet (including social media) in ways which risk their privacy: a cause for growing concern among parents. Young people also may not realise that all their information and browsing can and may be tracked while visiting a particular site, and that it is up to them to protect their own privacy. They must be informed about all these risks.
For example, on Twitter, threats include shortened links that lead one to potentially harmful places. In their e-mail inbox, threats include email scams and attachments that get them to install malware and disclose personal information.
On Torrent sites, threats include malware hiding in video, music, and software downloads. Even when using a smartphone, threats include geolocation, meaning that one's phone can detect where they are and post it online for all to see. Users can protect themselves by updating virus protection, using security settings, downloading patches, installing a firewall, screening e-mail, shutting down spyware, controlling cookies, using encryption, fending off browser hijackers, and blocking pop-ups.
However, most people have little idea how to go about doing many of these things. How can the average user with no training be expected to know how to run their own network security, especially as things are getting more complicated all the time? Many businesses hire professionals to take care of these issues, but most individuals can only do their best to learn about all this.
In 1998, the Federal Trade Commission in the USA considered the lack of privacy for children on the Internet, which led to the Children's Online Privacy Protection Act (COPPA). COPPA limits the options which gather information from children and created warning labels if potentially harmful information or content was presented.
In 2000, the Children's Internet Protection Act (CIPA) was developed to implement safe Internet policies such as rules and filter software. These laws, awareness campaigns, parental and adult supervision strategies, and Internet filters can all help to make the Internet safer for children around the world.
Click on any of the following blue hyperlinks for more about Internet Privacy:
- HTTP cookies
- Flash cookies
- Evercookies
- Anti-fraud uses
Advertising uses
Criticism
- Device fingerprinting
- Search engines
- Public views including Concerns of Internet privacy and real life implications
- Laws and regulations
- Legal threats
- See also:
- Anonymity
- Anonymous blogging
Anonymous P2P
Anonymous post
Anonymous remailer
Anonymous web browsing
- Index of Articles Relating to Terms of Service and Privacy Policies
- Internet censorship including Internet censorship circumvention
- Internet vigilantism
- Privacy-enhancing technologies
- PRISM
- Privacy law
- Surveillance
- Unauthorized access in online social networks
- PrivacyTools.io - Provides knowledge and tools to protect your privacy against global mass surveillance
- Activist Net Privacy - Curated list of interviews of individuals at the forefront of the privacy debate
- Electronic Frontier Foundation - an organization devoted to privacy and intellectual freedom advocacy
- Expectation of privacy for company email not deemed objectively reasonable – Bourke v. Nissan
- Internet Privacy: The Views of the FTC, the FCC, and NTIA: Joint Hearing before the Subcommittee on Commerce, Manufacturing, and Trade and the Subcommittee on Communications and Technology of the Committee on Energy and Commerce, House of Representatives, One Hundred Twelfth Congress, First Session, July 14, 2011
The right to privacy is an element of various legal traditions to restrain government and private actions that threaten the privacy of individuals. Over 150 national constitutions mention the right to privacy.
Since the global surveillance disclosures of 2013, the inalienable human right to privacy has been a subject of international debate.
In combating worldwide terrorism, government agencies, such as the NSA, CIA, R&AW, and GCHQ have engaged in mass, global surveillance, perhaps undermining the right to privacy.
There is now a question as to whether the right to privacy can co-exist with the current capabilities of intelligence agencies to access and analyse virtually every detail of an individual's life. A major question is whether or not the right to privacy needs to be forfeited as part of the social contract to bolster defense against supposed terrorist threats.
Click on any of the following blue hyperlinks for more about the Right to Privacy:
- Background
- Definitions
An individual right
A collective value and a human right
Universal Declaration of Human Rights
- United States
- Journalism
- Mass surveillance and privacy
- Support
- Opposition
- See also:
- Bank Secrecy Act, a US law requiring banks to disclose details of financial transactions
- Right to be forgotten
- Moore, Adam D. Privacy Rights: Moral and Legal Foundations (Pennsylvania State University Press, August 2010). ISBN 978-0-271-03686-1.
- "The Privacy Torts" (December 19, 2000). Privacilla.org, a "web-based think tank", devoted to privacy issues, edited by Jim Harper ("About Privacilla")
WebMD
YouTube Video: What Is a Spleen and What Does it Do?
YouTube Video: Advances in Liver Transplant Surgery
Pictured: Giada De Laurentiis, WebMD Editor-in-Chief Kristy Hammam, WebMD CEO David Schlanger, Robin Roberts and WebMD President Steve Zatz at WebMD's first-ever Digital Content NewFront presentation in New York City.
WebMD is an American corporation known primarily as an online publisher of news and information pertaining to human health and well-being. It was founded in 1996 by James H. Clark and Pavan Nigam as Healthscape, later Healtheon, and then it acquired WebMD in 1999 to form Healtheon/WebMD. The name was later shortened to WebMD.
Website Traffic:
WebMD is best known as a health information services website, which publishes content regarding health and health care topics, including a symptom checklist, pharmacy information, drugs information, blogs of physicians with specific topics, and providing a place to store personal medical information.
During 2015, WebMD’s network of websites reached more unique visitors each month than any other leading private or government healthcare website, making it the leading health publisher in the United States. In the fourth quarter of 2016, WebMD recorded an average of 179.5 million unique users per month, and 3.63 billion page views per quarter.
Accreditation:
URAC, the Utilization Review Accreditation Commission, has accredited WebMD’s operations continuously since 2001 regarding everything from proper disclosures and health content to security and privacy.
Click on any of the following blue hyperlinks for more about the website WebMD:
- Revenues
- Business model
- Criticism
- See also:
- WebMD (corporate website)
- WebMD Health (consumer website)
- Medscape (physician website)
- MedicineNet (MedicineNet website)
- RxList (drugs and medications website)
- eMedicineHealth (consumer first aid and health information website)
- Boots WebMD (UK consumer website)
- WebMD Health Services (private portal website)
Crowdsourcing*
* - [Note that while Crowdsourcing does not apply to just the Internet, it is the ability to use Internet-based Crowdsourcing technology that has caught on so well, per the examples below.]
YouTube Video: Mindsharing, the art of crowdsourcing everything | TED Talks
Pictured: Example of Crowdsourcing Process in Graphic Design
Crowdsourcing is a specific sourcing model in which individuals or organizations use contributions from Internet users to obtain needed services or ideas.
Crowdsourcing was coined in 2005 as a portmanteau of crowd and outsourcing. This mode of sourcing, which is to divide work between participants to achieve a cumulative result, was already successful prior to the digital age (i.e., "offline").
Crowdsourcing is distinguished from outsourcing in that the work can come from an undefined public (instead of being commissioned from a specific, named group) and in that crowdsourcing includes a mix of bottom-up and top-down processes.
Advantages of using crowdsourcing may include improved costs, speed, quality, flexibility, scalability, or diversity. Crowdsourcing in the form of idea competitions or innovation contests provides a way for organizations to learn beyond what their "base of minds" of employees provides (e.g., LEGO Ideas).
Crowdsourcing can also involve rather tedious "microtasks" that are performed in parallel by large, paid crowds (e.g., Amazon Mechanical Turk). Crowdsourcing has also been used for noncommercial work and to develop common goods (e.g., Wikipedia). The effect of user communication and the platform presentation should be taken into account when evaluating the performance of ideas in crowdsourcing contexts.
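A common pattern in paid microtask crowdsourcing is redundancy: the same small task is given to several independent workers and their answers are aggregated, for example by majority vote. The Python sketch below illustrates only that aggregation step; the labels are made up and no specific platform's API is shown.

from collections import Counter

# Hypothetical answers from five workers labeling the same image.
worker_answers = ["cat", "cat", "dog", "cat", "dog"]

def majority_vote(answers):
    """Aggregate redundant crowd answers; return the winning label and its share."""
    label, votes = Counter(answers).most_common(1)[0]
    return label, votes / len(answers)

label, agreement = majority_vote(worker_answers)
print(f"consensus: {label} ({agreement:.0%} agreement)")  # consensus: cat (60% agreement)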
The term "crowdsourcing" was coined in 2005 by Jeff Howe and Mark Robinson, editors at Wired, to describe how businesses were using the Internet to "outsource work to the crowd", which quickly led to the portmanteau "crowdsourcing."
Howe first published a definition for the term crowdsourcing in a companion blog post to his June 2006 Wired article, "The Rise of Crowdsourcing", which came out in print just days later:
"Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers."
In a February 1, 2008, article, Daren C. Brabham, "the first [person] to publish scholarly research using the word crowdsourcing" and writer of the 2013 book, Crowdsourcing, defined it as an "online, distributed problem-solving and production model."
Kristen L. Guth and Brabham found that the performance of ideas offered in crowdsourcing platforms are affected not only by their quality, but also by the communication among users about the ideas, and presentation in the platform itself.
After studying more than 40 definitions of crowdsourcing in the scientific and popular literature, Enrique Estellés-Arolas and Fernando González Ladrón-de-Guevara, researchers at the Technical University of Valencia, developed a new integrating definition:
"Crowdsourcing is a type of participatory online activity in which an individual, an institution, a nonprofit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task. The undertaking of the task; of variable complexity and modularity, and; in which the crowd should participate, bringing their work, money, knowledge **[and/or]** experience, always entails mutual benefit. The user will receive the satisfaction of a given type of need, be it economic, social recognition, self-esteem, or the development of individual skills, while the crowdsourcer will obtain and use to their advantage that which the user has brought to the venture, whose form will depend on the type of activity undertaken".
As mentioned by the definitions of Brabham and Estellés-Arolas and Ladrón-de-Guevara above, crowdsourcing in the modern conception is an IT-mediated phenomenon, meaning that a form of IT is always used to create and access crowds of people. In this respect, crowdsourcing has been considered to encompass three separate, but stable techniques:
- competition crowdsourcing,
- virtual labor market crowdsourcing,
- and open collaboration crowdsourcing.
Henk van Ess, a college lecturer in online communications, emphasizes the need to "give back" the crowdsourced results to the public on ethical grounds. His nonscientific, noncommercial definition is widely cited in the popular press: "Crowdsourcing is channeling the experts’ desire to solve a problem and then freely sharing the answer with everyone."
Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public and an open call for contributions to help solve the problem. Members of the public submit solutions that are then owned by the entity that originally broadcast the problem.
In some cases, the contributor of the solution is compensated monetarily or with prizes or recognition. In other cases, the only rewards may be kudos or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time, or from experts or small businesses previously unknown to the initiating organization.
Another consequence of the multiple definitions is the controversy surrounding what kinds of activities may be considered crowdsourcing.
Click on any of the following blue hyperlinks for more about Crowdsourcing:
- Historical examples
- Modern methods
- Examples
- Crowdvoting
- Crowdsourcing creative work
- Crowdsourcing language-related data collection
- Crowdsolving
- Crowdsearching
- Crowdfunding
- Mobile crowdsourcing
- Macrowork
- Microwork
- Simple projects
- Complex projects
- Inducement prize contests
- Implicit crowdsourcing
- Health-care crowdsourcing
- Crowdsourcing in agriculture
- Crowdsourcing in cheating in bridge
- Crowdsourcers
- Limitations and controversies
- See also:
- Citizen science
- Clickworkers
- Collaborative innovation network
- Collective consciousness
- Collective intelligence
- Collective problem solving
- Commons-based peer production
- Crowd computing
- Crowdcasting
- Crowdfixing
- Crowdsourcing software development
- Distributed thinking
- Distributed Proofreaders
- Flash mob
- Gamification
- Government crowdsourcing
- List of crowdsourcing projects
- Microcredit
- Participatory democracy
- Participatory monitoring
- Smart mob
- Social collaboration
- "Stone Soup"
- TrueCaller
- Virtual Collective Consciousness
- Virtual volunteering
- Wisdom of the crowd
Online Encyclopedias including a List of Online Encyclopedias
YouTube Video: Is Wikipedia a Credible Source?
Pictured (L-R): Wikipedia and Fortune Online Encyclopedia of Economics
An online encyclopedia is an encyclopedia accessible through the internet, such as the English Wikipedia. The idea to build a free encyclopedia using the Internet can be traced at least to the 1994 Interpedia proposal; it was planned as an encyclopedia on the Internet to which everyone could contribute materials. The project never left the planning stage and was overtaken by a key branch of old printed encyclopedias.
Digitization of old content:
In January 1995, Project Gutenberg started to publish the ASCII text of the Encyclopædia Britannica, 11th edition (1911), but disagreement about the method halted the work after the first volume.
For trademark reasons this has been published as the Gutenberg Encyclopedia. In 2002, ASCII text of the Encyclopædia Britannica Eleventh Edition was published by another source; a copyright claim was added to the materials included.
Project Gutenberg has restarted work on digitizing and proofreading this encyclopedia; as of June 2005 it had not yet been published. Meanwhile, in the face of competition from rivals such as Encarta, the latest Britannica was digitized by its publishers, and sold first as a CD-ROM and later as an online service.
Other digitization projects have made progress with other titles. One example is Easton's Bible Dictionary (1897), digitized by the Christian Classics Ethereal Library. Probably the most important and successful digitization of an encyclopedia was the Bartleby Project's online adaptation of the Columbia Encyclopedia, Sixth Edition, launched in early 2000 and updated periodically.
Creation of new content:
Another related branch of activity is the creation of new, free contents on a volunteer basis. In 1991, the participants of the Usenet newsgroup alt.fan.douglas-adams started a project to produce a real version of The Hitchhiker's Guide to the Galaxy, a fictional encyclopedia used in the works of Douglas Adams.
It became known as Project Galactic Guide. Although it originally aimed to contain only real, factual articles, policy was changed to allow and encourage semi-real and unreal articles as well. Project Galactic Guide contains over 1700 articles, but no new articles have been added since 2000; this is probably partly due to the founding of h2g2, a more official project along similar lines.
See Also: ___________________________________________________________________________
List of Online Encyclopedias:
Click on any of the following blue hyperlinks for a List of Online Encyclopedias by category:
- General reference
- Biography
- Antiquities, arts, and literature
- Regional interest
- Pop culture and fiction
- Mathematics
- Music
- Philosophy
- Politics and history
- Religion and theology
- Science and technology
- See also:
HTTP cookies and How They Work
YouTube Video: How Cookies Work in the Google Chrome Browser
Pictured: How Cookies Work (by Google) (Below: GATC Request Process)
(for description of items "1" through "6", see below picture)
How the Tracking Code Works (see the above illustration):
In general, the Google Analytics Tracking Code (GATC) retrieves web page data through the numbered request process shown in the illustration above.
An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie) is a small piece of data sent from a website and stored on the user's computer by the user's web browser while the user is browsing.
Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember arbitrary pieces of information that the user previously entered into form fields such as names, addresses, passwords, and credit card numbers.
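In protocol terms, this state-keeping comes down to two HTTP headers: the server sends a Set-Cookie header in its response, and the browser returns the stored value in a Cookie header on subsequent requests. The following minimal sketch uses only Python's standard library; the cookie name ("visits") and the port are arbitrary choices for illustration.

from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie

class VisitCounter(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse whatever cookie the browser sent back from a previous visit.
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        visits = int(cookies["visits"].value) if "visits" in cookies else 0

        self.send_response(200)
        # Ask the browser to remember the updated count on its next request.
        self.send_header("Set-Cookie", f"visits={visits + 1}; Path=/")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"You have visited {visits + 1} time(s).\n".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), VisitCounter).serve_forever()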
Other kinds of cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with. Without such a mechanism, the site would not know whether to send a page containing sensitive information, or require the user to authenticate themselves by logging in.
The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by a hacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples).
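One widely used defense against the tampering risks described above is to make authentication cookies tamper-evident by signing their value with a secret key held only by the server. The Python sketch below shows the general idea using an HMAC; it is not any particular site's or framework's scheme, and the secret and cookie layout are placeholders.

import hashlib
import hmac

SECRET = b"replace-with-a-real-server-side-secret"  # placeholder value

def sign_session(user_id):
    """Produce a cookie value such as 'alice.3f2a...' that the server can later verify."""
    mac = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{mac}"

def verify_session(cookie_value):
    """Return the user id if the signature checks out, otherwise None."""
    user_id, _, mac = cookie_value.rpartition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(mac, expected) else None

cookie = sign_session("alice")
print(verify_session(cookie))               # alice
print(verify_session(cookie + "tampered"))  # None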
Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories – a potential privacy concern that prompted European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting European Union member states gain "informed consent" from users before storing non-essential cookies on their devices.
Origin of the nameThe term "cookie" was coined by web browser programmer Lou Montulli. It was derived from the term "magic cookie", which is a packet of data a program receives and sends back unchanged, used by Unix programmers.
History
Magic cookies were already used in computing when computer programmer Lou Montulli had the idea of using them in web communications in June 1994. At the time, he was an employee of Netscape Communications, which was developing an e-commerce application for MCI.
Vint Cerf and John Klensin represented MCI in technical discussions with Netscape Communications. MCI did not want its servers to have to retain partial transaction states, which led them to ask Netscape to find a way to store that state in each user's computer instead. Cookies provided a solution to the problem of reliably implementing a virtual shopping cart.
Together with John Giannandrea, Montulli wrote the initial Netscape cookie specification the same year. Version 0.9beta of Mosaic Netscape, released on October 13, 1994, supported cookies.
The first use of cookies (out of the labs) was checking whether visitors to the Netscape website had already visited the site. Montulli applied for a patent for the cookie technology in 1995, and US 5774670 was granted in 1998. Support for cookies was integrated in Internet Explorer in version 2, released in October 1995.
The introduction of cookies was not widely known to the public at the time. In particular, cookies were accepted by default, and users were not notified of their presence. The general public learned about cookies after the Financial Times published an article about them on February 12, 1996. In the same year, cookies received a lot of media attention, especially because of potential privacy implications. Cookies were discussed in two U.S. Federal Trade Commission hearings in 1996 and 1997.
The development of the formal cookie specifications was already ongoing. In particular, the first discussions about a formal specification started in April 1995 on the www-talk mailing list. A special working group within the Internet Engineering Task Force (IETF) was formed.
Two alternative proposals for introducing state in HTTP transactions had been proposed by Brian Behlendorf and David Kristol respectively. But the group, headed by Kristol himself and Lou Montulli, soon decided to use the Netscape specification as a starting point.
In February 1996, the working group identified third-party cookies as a considerable privacy threat. The specification produced by the group was eventually published as RFC 2109 in February 1997. It specifies that third-party cookies were either not allowed at all, or at least not enabled by default.
At this time, advertising companies were already using third-party cookies. The recommendation about third-party cookies of RFC 2109 was not followed by Netscape and Internet Explorer. RFC 2109 was superseded by RFC 2965 in October 2000. RFC 2965 added a Set-Cookie2 header, which informally came to be called "RFC 2965-style cookies" as opposed to the original Set-Cookie header which was called "Netscape-style cookies". Set-Cookie2 was seldom used however, and was deprecated in RFC 6265 in April 2011 which was written as a definitive specification for cookies as used in the real world.
Types of Cookies:
Session cookie:
A session cookie, also known as an in-memory cookie or transient cookie, exists only in temporary memory while the user navigates the website. Web browsers normally delete session cookies when the user closes the browser. Unlike other cookies, session cookies do not have an expiration date assigned to them, which is how the browser knows to treat them as session cookies.
Persistent cookie;
Instead of expiring when the web browser is closed as session cookies do, a persistent cookie expires at a specific date or after a specific length of time. This means that, for the cookie's entire lifespan (which can be as long or as short as its creators want), its information will be transmitted to the server every time the user visits the website that it belongs to, or every time the user views a resource belonging to that website from another website (such as an advertisement).
For this reason, persistent cookies are sometimes referred to as tracking cookies because they can be used by advertisers to record information about a user's web browsing habits over an extended period of time. However, they are also used for "legitimate" reasons (such as keeping users logged into their accounts on websites, to avoid re-entering login credentials at every visit).
These cookies are however reset if the expiration time is reached or the user manually deletes the cookie.
Secure cookie:
A secure cookie can only be transmitted over an encrypted connection (i.e. HTTPS). They cannot be transmitted over unencrypted connections (i.e. HTTP). This makes the cookie less likely to be exposed to cookie theft via eavesdropping. A cookie is made secure by adding the Secure flag to the cookie.
HttpOnly cookie:
An HttpOnly cookie cannot be accessed by client-side APIs, such as JavaScript. This restriction eliminates the threat of cookie theft via cross-site scripting (XSS). However, the cookie remains vulnerable to cross-site tracing (XST) and cross-site request forgery (XSRF) attacks. A cookie is given this characteristic by adding the HttpOnly flag to the cookie.
SameSite cookie:
Google Chrome 51 recently introduced a new kind of cookie which can only be sent in requests originating from the same origin as the target domain. This restriction mitigates attacks such as cross-site request forgery (XSRF). A cookie is given this characteristic by setting the SameSite flag to Strict or Lax.
Third-party cookie:
Normally, a cookie's domain attribute will match the domain that is shown in the web browser's address bar. This is called a first-party cookie. A third-party cookie, however, belongs to a domain different from the one shown in the address bar. This sort of cookie typically appears when web pages feature content from external websites, such as banner advertisements. This opens up the potential for tracking the user's browsing history, and is often used by advertisers in an effort to serve relevant advertisements to each user.
As an example, suppose a user visits www.example.org. This web site contains an advertisement from ad.foxytracking.com, which, when downloaded, sets a cookie belonging to the advertisement's domain (ad.foxytracking.com).
Then, the user visits another website, www.foo.com, which also contains an advertisement from ad.foxytracking.com, and which also sets a cookie belonging to that domain (ad.foxytracking.com). Eventually, both of these cookies will be sent to the advertiser when loading their advertisements or visiting their website. The advertiser can then use these cookies to build up a browsing history of the user across all the websites that have ads from this advertiser.
As of 2014, some websites were setting cookies readable for over 100 third-party domains. On average, a single website was setting 10 cookies, with a maximum number of cookies (first- and third-party) reaching over 800.
Most modern web browsers contain privacy settings that can block third-party cookies.
Supercookie:
A supercookie is a cookie with an origin of a top-level domain (such as .com) or a public suffix (such as .co.uk). Ordinary cookies, by contrast, have an origin of a specific domain name, such as example.com.
Supercookies can be a potential security concern and are therefore often blocked by web browsers. If unblocked by the browser, an attacker in control of a malicious website could set a supercookie and potentially disrupt or impersonate legitimate user requests to another website that shares the same top-level domain or public suffix as the malicious website.
For example, a supercookie with an origin of .com, could maliciously affect a request made to example.com, even if the cookie did not originate from example.com. This can be used to fake logins or change user information.
The Public Suffix List helps to mitigate the risk that supercookies pose. The Public Suffix List is a cross-vendor initiative that aims to provide an accurate and up-to-date list of domain name suffixes. Older versions of browsers may not have an up-to-date list, and will therefore be vulnerable to supercookies from certain domains.
Other usesThe term "supercookie" is sometimes used for tracking technologies that do not rely on HTTP cookies. Two such "supercookie" mechanisms were found on Microsoft websites in August 2011: cookie syncing that respawned MUID (machine unique identifier) cookies, and ETag cookies. Due to media attention, Microsoft later disabled this code.
Zombie cookie:
Main articles: Zombie cookie and Evercookie
A zombie cookie is a cookie that is automatically recreated after being deleted. This is accomplished by storing the cookie's content in multiple locations, such as Flash Local shared object, HTML5 Web storage, and other client-side and even server-side locations. When the cookie's absence is detected, the cookie is recreated using the data stored in these locations.
Cookie Structure:
A cookie consists of the following components:
Cookie Uses:
Session management:
Cookies were originally introduced to provide a way for users to record items they want to purchase as they navigate throughout a website (a virtual "shopping cart" or "shopping basket").
Today, however, the contents of a user's shopping cart are usually stored in a database on the server, rather than in a cookie on the client. To keep track of which user is assigned to which shopping cart, the server sends a cookie to the client that contains a unique session identifier (typically, a long string of random letters and numbers).
Because cookies are sent to the server with every request the client makes, that session identifier will be sent back to the server every time the user visits a new page on the website, which lets the server know which shopping cart to display to the user.
Another popular use of cookies is for logging into websites. When the user visits a website's login page, the web server typically sends the client a cookie containing a unique session identifier. When the user successfully logs in, the server remembers that that particular session identifier has been authenticated, and grants the user access to its services.
Because session cookies only contain a unique session identifier, this makes the amount of personal information that a website can save about each user virtually limitless—the website is not limited to restrictions concerning how large a cookie can be. Session cookies also help to improve page load times, since the amount of information in a session cookie is small and requires little bandwidth.
Personalization:
Cookies can be used to remember information about the user in order to show relevant content to that user over time. For example, a web server might send a cookie containing the username last used to log into a website so that it may be filled in automatically the next time the user logs in.
Many websites use cookies for personalization based on the user's preferences. Users select their preferences by entering them in a web form and submitting the form to the server. The server encodes the preferences in a cookie and sends the cookie back to the browser. This way, every time the user accesses a page on the website, the server can personalize the page according to the user's preferences.
For example, the Google search engine once used cookies to allow users (even non-registered ones) to decide how many search results per page they wanted to see.
Also, DuckDuckGo uses cookies to allow users to set the viewing preferences like colors of the web page.
Tracking:
See also: Web visitor tracking
Tracking cookies are used to track users' web browsing habits. This can also be done to some extent by using the IP address of the computer requesting the page or the referer field of the HTTP request header, but cookies allow for greater precision. This can be demonstrated as follows:
By analyzing this log file, it is then possible to find out which pages the user has visited, in what sequence, and for how long.
Corporations exploit users' web habits by tracking cookies to collect information about buying habits. The Wall Street Journal found that America's top fifty websites installed an average of sixty-four pieces of tracking technology onto computers resulting in a total of 3,180 tracking files. The data can then be collected and sold to bidding corporations.
Implemention:
Cookies are arbitrary pieces of data, usually chosen and first sent by the web server, and stored on the client computer by the web browser. The browser then sends them back to the server with every request, introducing states (memory of previous events) into otherwise stateless HTTP transactions.
Without cookies, each retrieval of a web page or component of a web page would be an isolated event, largely unrelated to all other page views made by the user on the website.
Although cookies are usually set by the web server, they can also be set by the client using a scripting language such as JavaScript (unless the cookie's HttpOnly flag is set, in which case the cookie cannot be modified by scripting languages).
The cookie specifications require that browsers meet the following requirements in order to support cookies:
See also:
Browser Settings:
Most modern browsers support cookies and allow the user to disable them. The following are common options:
By default, Internet Explorer allows third-party cookies only if they are accompanied by a P3P "CP" (Compact Policy) field.
Add-on tools for managing cookie permissions also exist.
Privacy and third-party cookies:
See also: Do Not Track and Web analytics § Problems with cookies
Cookies have some important implications on the privacy and anonymity of web users. While cookies are sent only to the server setting them or a server in the same Internet domain, a web page may contain images or other components stored on servers in other domains.
Cookies that are set during retrieval of these components are called third-party cookies. The older standards for cookies, RFC 2109 and RFC 2965, specify that browsers should protect user privacy and not allow sharing of cookies between servers by default. However, the newer standard, RFC 6265, explicitly allows user agents to implement whichever third-party cookie policy they wish.
Most browsers, such as Mozilla Firefox, Internet Explorer, Opera and Google Chrome do allow third-party cookies by default, as long as the third-party website has Compact Privacy Policy published.
Newer versions of Safari block third-party cookies, and this is planned for Mozilla Firefox as well (initially planned for version 22 but was postponed indefinitely).
Advertising companies use third-party cookies to track a user across multiple sites. In particular, an advertising company can track a user across all pages where it has placed advertising images or web bugs.
Knowledge of the pages visited by a user allows the advertising company to target advertisements to the user's presumed preferences.
Website operators who do not disclose third-party cookie use to consumers run the risk of harming consumer trust if cookie use is discovered. Having clear disclosure (such as in a privacy policy) tends to eliminate any negative effects of such cookie discovery.
The possibility of building a profile of users is a privacy threat, especially when tracking is done across multiple domains using third-party cookies. For this reason, some countries have legislation about cookies.
The United States government has set strict rules on setting cookies in 2000 after it was disclosed that the White House drug policy office used cookies to track computer users viewing its online anti-drug advertising. In 2002, privacy activist Daniel Brandt found that the CIA had been leaving persistent cookies on computers which had visited its website.
When notified it was violating policy, CIA stated that these cookies were not intentionally set and stopped setting them. On December 25, 2005, Brandt discovered that the National Security Agency (NSA) had been leaving two persistent cookies on visitors' computers due to a software upgrade. After being informed, the NSA immediately disabled the cookies.
Cookie theft and session hijacking:
Most websites use cookies as the only identifiers for user sessions, because other methods of identifying web users have limitations and vulnerabilities. If a website uses cookies as session identifiers, attackers can impersonate users' requests by stealing a full set of victims' cookies.
From the web server's point of view, a request from an attacker then has the same authentication as the victim's requests; thus the request is performed on behalf of the victim's session.
Listed here are various scenarios of cookie theft and user session hijacking (even without stealing user cookies) which work with websites which rely solely on HTTP cookies for user identification.
See Also:
Drawbacks of cookies:
Besides privacy concerns, cookies also have some technical drawbacks. In particular, they do not always accurately identify users, they can be used for security attacks, and they are often at odds with the Representational State Transfer (REST) software architectural style.
Inaccurate identification:
If more than one browser is used on a computer, each usually has a separate storage area for cookies. Hence cookies do not identify a person, but a combination of a user account, a computer, and a web browser. Thus, anyone who uses multiple accounts, computers, or browsers has multiple sets of cookies.
Likewise, cookies do not differentiate between multiple users who share the same user account, computer, and browser.
Inconsistent state on client and server:
The use of cookies may generate an inconsistency between the state of the client and the state as stored in the cookie. If the user acquires a cookie and then clicks the "Back" button of the browser, the state on the browser is generally not the same as before that acquisition.
As an example, if the shopping cart of an online shop is built using cookies, the content of the cart may not change when the user goes back in the browser's history: if the user presses a button to add an item in the shopping cart and then clicks on the "Back" button, the item remains in the shopping cart.
This might not be the intention of the user, who possibly wanted to undo the addition of the item. This can lead to unreliability, confusion, and bugs. Web developers should therefore be aware of this issue and implement measures to handle such situations.
Alternatives to cookies:
Some of the operations that can be done using cookies can also be done using other mechanisms.
JSON Web Tokens:
JSON Web Token (JWT) is a self-contained packet of information that can be used to store user identity and authenticity information. This allows them to be used in place of session cookies. Unlike cookies, which are automatically attached to each HTTP request by the browser, JWTs must be explicitly attached to each HTTP request by the web application.
HTTP authentication:
The HTTP protocol includes the basic access authentication and the digest access authentication protocols, which allow access to a web page only when the user has provided the correct username and password. If the server requires such credentials for granting access to a web page, the browser requests them from the user and, once obtained, the browser stores and sends them in every subsequent page request. This information can be used to track the user.
IP address:
Some users may be tracked based on the IP address of the computer requesting the page. The server knows the IP address of the computer running the browser (or the proxy, if any is used) and could theoretically link a user's session to this IP address.
However, IP addresses are generally not a reliable way to track a session or identify a user. Many computers designed to be used by a single user, such as office PCs or home PCs, are behind a network address translator (NAT).
This means that several PCs will share a public IP address. Furthermore, some systems, such as Tor, are designed to retain Internet anonymity, rendering tracking by IP address impractical, impossible, or a security risk.
URL (query string):
A more precise technique is based on embedding information into URLs. The query string part of the URL is the part that is typically used for this purpose, but other parts can be used as well. The Java Servlet and PHP session mechanisms both use this method if cookies are not enabled.
This method consists of the web server appending query strings containing a unique session identifier to all the links inside of a web page. When the user follows a link, the browser sends the query string to the server, allowing the server to identify the user and maintain state.
These kinds of query strings are very similar to cookies in that both contain arbitrary pieces of information chosen by the server and both are sent back to the server on every request.
However, there are some differences. Since a query string is part of a URL, if that URL is later reused, the same attached piece of information will be sent to the server, which could lead to confusion. For example, if the preferences of a user are encoded in the query string of a URL and the user sends this URL to another user by e-mail, those preferences will be used for that other user as well.
Moreover, if the same user accesses the same page multiple times from different sources, there is no guarantee that the same query string will be used each time. For example, if a user visits a page by coming from a page internal to the site the first time, and then visits the same page by coming from an external search engine the second time, the query strings would likely be different. If cookies were used in this situation, the cookies would be the same.
Other drawbacks of query strings are related to security. Storing data that identifies a session in a query string enables session fixation attacks, referer logging attacks and other security exploits. Transferring session identifiers as HTTP cookies is more secure.
Hidden form fields:
Another form of session tracking is to use web forms with hidden fields. This technique is very similar to using URL query strings to hold the information and has many of the same advantages and drawbacks.
In fact, if the form is handled with the HTTP GET method, then this technique is similar to using URL query strings, since the GET method adds the form fields to the URL as a query string. But most forms are handled with HTTP POST, which causes the form information, including the hidden fields, to be sent in the HTTP request body, which is neither part of the URL, nor of a cookie.
This approach presents two advantages from the point of view of the tracker:
"window.name" DOM property:
All current web browsers can store a fairly large amount of data (2–32 MB) via JavaScript using the DOM property window.name. This data can be used instead of session cookies and is also cross-domain. The technique can be coupled with JSON/JavaScript objects to store complex sets of session variables on the client side.
The downside is that every separate window or tab will initially have an empty window.name property when opened. Furthermore, the property can be used for tracking visitors across different websites, making it of concern for Internet privacy.
In some respects, this can be more secure than cookies due to the fact that its contents are not automatically sent to the server on every request like cookies are, so it is not vulnerable to network cookie sniffing attacks. However, if special measures are not taken to protect the data, it is vulnerable to other attacks because the data is available across different websites opened in the same window or tab.
Identifier for advertisers:
Apple uses a tracking technique called "identifier for advertisers" (IDFA). This technique assigns a unique identifier to every user that buys an Apple iOS device (such as an iPhone or iPad). This identifier is then used by Apple's advertising network, iAd, to determine the ads that individuals are viewing and responding to.
ETagMain:
Article: HTTP ETag § Tracking using ETags
Because ETags are cached by the browser, and returned with subsequent requests for the same resource, a tracking server can simply repeat any ETag received from the browser to ensure an assigned ETag persists indefinitely (in a similar way to persistent cookies). Additional caching headers can also enhance the preservation of ETag data.
ETags can be flushed in some browsers by clearing the browser cache.
Web storage:
Main article: Web storage
Some web browsers support persistence mechanisms which allow the page to store the information locally for later use.
The HTML5 standard (which most modern web browsers support to some extent) includes a JavaScript API called Web storage that allows two types of storage: local storage and session storage.
Local storage behaves similarly to persistent cookies while session storage behaves similarly to session cookies, except that session storage is tied to an individual tab/window's lifetime (AKA a page session), not to a whole browser session like session cookies.
Internet Explorer supports persistent information in the browser's history, in the browser's favorites, in an XML store ("user data"), or directly within a web page saved to disk.
Some web browser plugins include persistence mechanisms as well. For example, Adobe Flash has Local shared object and Microsoft Silverlight has Isolated storage.
Browser cache:
Main article: Web cache
The browser cache can also be used to store information that can be used to track individual users. This technique takes advantage of the fact that the web browser will use resources stored within the cache instead of downloading them from the website when it determines that the cache already has the most up-to-date version of the resource.
For example, a website could serve a JavaScript file that contains code which sets a unique identifier for the user (for example, var userId = 3243242;). After the user's initial visit, every time the user accesses the page, this file will be loaded from the cache instead of downloaded from the server. Thus, its content will never change.
Browser fingerprint:
Main article: Device fingerprint
A browser fingerprint is information collected about a browser's configuration, such as version number, screen resolution, and operating system, for the purpose of identification. Fingerprints can be used to fully or partially identify individual users or devices even when cookies are turned off.
Basic web browser configuration information has long been collected by web analytics services in an effort to accurately measure real human web traffic and discount various forms of click fraud.
With the assistance of client-side scripting languages, collection of much more esoteric parameters is possible. Assimilation of such information into a single string comprises a device fingerprint.
In 2010, EFF measured at least 18.1 bits of entropy possible from browser fingerprinting. Canvas fingerprinting, a more recent technique, claims to add another 5.7 bits.
See also:
In general, the Google Analytics Tracking Code (GATC) retrieves web page data as follows:
- A browser requests a web page that contains the tracking code.
- A JavaScript Array named _gaq is created and tracking commands are pushed onto the array.
- A <script> element is created and enabled for asynchronous loading (loading in the background).
- The ga.js tracking code is fetched, with the appropriate protocol automatically detected. Once the code is fetched and loaded, the commands on the _gaq array are executed and the array is transformed into a tracking object. Subsequent tracking calls are made directly to Google Analytics.
- The script element is loaded into the DOM.
- After the tracking code collects data, the GIF request is sent to the Analytics database for logging and post-processing.
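For reference, the classic asynchronous ga.js loading snippet follows this pattern. This is a sketch of the widely published form, with "UA-XXXXX-X" standing in for a real property ID:

var _gaq = _gaq || [];                          // step 2: the command queue
_gaq.push(['_setAccount', 'UA-XXXXX-X']);       // placeholder property ID
_gaq.push(['_trackPageview']);
(function() {
  var ga = document.createElement('script');    // step 3: async script element
  ga.type = 'text/javascript';
  ga.async = true;
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';   // step 4: fetch ga.js
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(ga, s);              // step 5: add it to the DOM
})();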
An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie) is a small piece of data sent from a website and stored on the user's computer by the user's web browser while the user is browsing.
Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember arbitrary pieces of information that the user previously entered into form fields such as names, addresses, passwords, and credit card numbers.
Other kinds of cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with. Without such a mechanism, the site would not know whether to send a page containing sensitive information, or require the user to authenticate themselves by logging in.
The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by a hacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples).
Tracking cookies, and especially third-party tracking cookies, are commonly used to compile long-term records of individuals' browsing histories – a potential privacy concern that prompted European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting European Union member states gain "informed consent" from users before storing non-essential cookies on their device.
Origin of the name:
The term "cookie" was coined by web browser programmer Lou Montulli. It was derived from the term "magic cookie", a term used by Unix programmers for a packet of data a program receives and sends back unchanged.
History:
Magic cookies were already used in computing when computer programmer Lou Montulli had the idea of using them in web communications in June 1994. At the time, he was an employee of Netscape Communications, which was developing an e-commerce application for MCI.
Vint Cerf and John Klensin represented MCI in technical discussions with Netscape Communications. MCI did not want its servers to have to retain partial transaction states, which led them to ask Netscape to find a way to store that state in each user's computer instead. Cookies provided a solution to the problem of reliably implementing a virtual shopping cart.
Together with John Giannandrea, Montulli wrote the initial Netscape cookie specification the same year. Version 0.9beta of Mosaic Netscape, released on October 13, 1994, supported cookies.
The first use of cookies (out of the labs) was checking whether visitors to the Netscape website had already visited the site. Montulli applied for a patent for the cookie technology in 1995, and US 5774670 was granted in 1998. Support for cookies was integrated in Internet Explorer in version 2, released in October 1995.
The introduction of cookies was not widely known to the public at the time. In particular, cookies were accepted by default, and users were not notified of their presence. The general public learned about cookies after the Financial Times published an article about them on February 12, 1996. In the same year, cookies received a lot of media attention, especially because of potential privacy implications. Cookies were discussed in two U.S. Federal Trade Commission hearings in 1996 and 1997.
The development of the formal cookie specifications was already ongoing. In particular, the first discussions about a formal specification started in April 1995 on the www-talk mailing list. A special working group within the Internet Engineering Task Force (IETF) was formed.
Two alternative proposals for introducing state in HTTP transactions had been proposed by Brian Behlendorf and David Kristol respectively. But the group, headed by Kristol himself and Lou Montulli, soon decided to use the Netscape specification as a starting point.
In February 1996, the working group identified third-party cookies as a considerable privacy threat. The specification produced by the group was eventually published as RFC 2109 in February 1997. It specifies that third-party cookies were either not allowed at all, or at least not enabled by default.
At this time, advertising companies were already using third-party cookies. RFC 2109's recommendation about third-party cookies was not followed by Netscape and Internet Explorer. RFC 2109 was superseded by RFC 2965 in October 2000. RFC 2965 added a Set-Cookie2 header, which informally came to be called "RFC 2965-style cookies", as opposed to the original Set-Cookie header, which was called "Netscape-style cookies". Set-Cookie2 was seldom used, however, and was deprecated by RFC 6265 in April 2011, which was written as a definitive specification for cookies as used in the real world.
Types of Cookies:
Session cookie:
A session cookie, also known as an in-memory cookie or transient cookie, exists only in temporary memory while the user navigates the website. Web browsers normally delete session cookies when the user closes the browser. Unlike other cookies, session cookies do not have an expiration date assigned to them, which is how the browser knows to treat them as session cookies.
Persistent cookie:
Instead of expiring when the web browser is closed as session cookies do, a persistent cookie expires at a specific date or after a specific length of time. This means that, for the cookie's entire lifespan (which can be as long or as short as its creators want), its information will be transmitted to the server every time the user visits the website that it belongs to, or every time the user views a resource belonging to that website from another website (such as an advertisement).
For this reason, persistent cookies are sometimes referred to as tracking cookies because they can be used by advertisers to record information about a user's web browsing habits over an extended period of time. However, they are also used for "legitimate" reasons (such as keeping users logged into their accounts on websites, to avoid re-entering login credentials at every visit).
These cookies are however reset if the expiration time is reached or the user manually deletes the cookie.
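As an illustration (a minimal sketch; the cookie name "prefs" and the 30-day lifetime are arbitrary), a persistent cookie can be created from client-side JavaScript by giving it an explicit lifetime:

// Set a cookie that survives browser restarts for roughly 30 days.
var thirtyDays = 60 * 60 * 24 * 30;   // lifetime in seconds
document.cookie = 'prefs=dark-theme; Max-Age=' + thirtyDays + '; Path=/';
// A server can achieve the same with a response header such as:
//   Set-Cookie: prefs=dark-theme; Expires=Wed, 01 Jan 2020 00:00:00 GMT; Path=/
// Until the expiry is reached or the user deletes it, the browser resends
// "prefs=dark-theme" with every request to this site.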
Secure cookie:
A secure cookie can only be transmitted over an encrypted connection (i.e. HTTPS); it cannot be transmitted over an unencrypted connection (i.e. HTTP). This makes the cookie less likely to be exposed to cookie theft via eavesdropping. A cookie is made secure by adding the Secure flag to the cookie.
HttpOnly cookie:
An HttpOnly cookie cannot be accessed by client-side APIs, such as JavaScript. This restriction eliminates the threat of cookie theft via cross-site scripting (XSS). However, the cookie remains vulnerable to cross-site tracing (XST) and cross-site request forgery (XSRF) attacks. A cookie is given this characteristic by adding the HttpOnly flag to the cookie.
SameSite cookie:
Google Chrome 51 introduced the SameSite cookie attribute, which restricts a cookie so that it is only sent in requests originating from the same site as the target domain. This restriction mitigates attacks such as cross-site request forgery (XSRF). A cookie is given this characteristic by setting the SameSite flag to Strict or Lax.
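Because HttpOnly cookies cannot be set from client-side scripts, the Secure, HttpOnly and SameSite attributes are normally added on the server. The following is a minimal Node.js sketch (the cookie name and value are placeholders), not a prescribed implementation:

const http = require('http');

http.createServer(function (req, res) {
  // Secure   -> only transmitted over HTTPS
  // HttpOnly -> not readable from client-side JavaScript
  // SameSite -> withheld from cross-site requests (Strict) or sent only on
  //             top-level navigations (Lax)
  res.setHeader('Set-Cookie',
    'sessionid=abc123; Secure; HttpOnly; SameSite=Strict; Path=/');
  res.end('cookie set');
}).listen(8080);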
Third-party cookie:
Normally, a cookie's domain attribute will match the domain that is shown in the web browser's address bar. This is called a first-party cookie. A third-party cookie, however, belongs to a domain different from the one shown in the address bar. This sort of cookie typically appears when web pages feature content from external websites, such as banner advertisements. This opens up the potential for tracking the user's browsing history, and is often used by advertisers in an effort to serve relevant advertisements to each user.
As an example, suppose a user visits www.example.org. This web site contains an advertisement from ad.foxytracking.com, which, when downloaded, sets a cookie belonging to the advertisement's domain (ad.foxytracking.com).
Then, the user visits another website, www.foo.com, which also contains an advertisement from ad.foxytracking.com, and which also sets a cookie belonging to that domain (ad.foxytracking.com). Eventually, both of these cookies will be sent to the advertiser when loading their advertisements or visiting their website. The advertiser can then use these cookies to build up a browsing history of the user across all the websites that have ads from this advertiser.
As of 2014, some websites were setting cookies readable for over 100 third-party domains. On average, a single website was setting 10 cookies, with a maximum number of cookies (first- and third-party) reaching over 800.
Most modern web browsers contain privacy settings that can block third-party cookies.
Supercookie:
A supercookie is a cookie with an origin of a top-level domain (such as .com) or a public suffix (such as .co.uk). Ordinary cookies, by contrast, have an origin of a specific domain name, such as example.com.
Supercookies can be a potential security concern and are therefore often blocked by web browsers. If unblocked by the browser, an attacker in control of a malicious website could set a supercookie and potentially disrupt or impersonate legitimate user requests to another website that shares the same top-level domain or public suffix as the malicious website.
For example, a supercookie with an origin of .com could maliciously affect a request made to example.com, even if the cookie did not originate from example.com. This can be used to fake logins or change user information.
The Public Suffix List helps to mitigate the risk that supercookies pose. The Public Suffix List is a cross-vendor initiative that aims to provide an accurate and up-to-date list of domain name suffixes. Older versions of browsers may not have an up-to-date list, and will therefore be vulnerable to supercookies from certain domains.
Other uses:
The term "supercookie" is sometimes used for tracking technologies that do not rely on HTTP cookies. Two such "supercookie" mechanisms were found on Microsoft websites in August 2011: cookie syncing that respawned MUID (machine unique identifier) cookies, and ETag cookies. Due to media attention, Microsoft later disabled this code.
Zombie cookie:
Main articles: Zombie cookie and Evercookie
A zombie cookie is a cookie that is automatically recreated after being deleted. This is accomplished by storing the cookie's content in multiple locations, such as Flash Local shared object, HTML5 Web storage, and other client-side and even server-side locations. When the cookie's absence is detected, the cookie is recreated using the data stored in these locations.
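A minimal sketch of this recreation logic follows, assuming a hypothetical identifier named "uid" mirrored in Web storage; real evercookie-style scripts replicate the value in many more locations:

function respawnCookie() {
  var hasCookie = /(?:^|;\s*)uid=/.test(document.cookie);
  var backup = localStorage.getItem('uid');
  if (!hasCookie && backup) {
    // The cookie was deleted: recreate it from the backup copy.
    document.cookie = 'uid=' + backup + '; Max-Age=31536000; Path=/';
  } else if (hasCookie && !backup) {
    // Keep the backup copy in sync with the cookie.
    var value = /(?:^|;\s*)uid=([^;]+)/.exec(document.cookie)[1];
    localStorage.setItem('uid', value);
  }
}
respawnCookie();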
Cookie Structure:
A cookie consists of the following components:
- Name
- Value
- Zero or more attributes (name/value pairs). Attributes store information such as the cookie’s expiration, domain, and flags (such as Secure and HttpOnly).
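For example, a (hypothetical) header such as "Set-Cookie: theme=light; Expires=Tue, 15 Jan 2019 21:47:38 GMT; Path=/; Secure" can be split into these components. The small parser below is only an illustration of the structure:

function parseSetCookie(header) {
  var parts = header.split(';').map(function (p) { return p.trim(); });
  var nameValue = parts[0].split('=');
  return {
    name: nameValue[0],                      // "theme"
    value: nameValue.slice(1).join('='),     // "light"
    attributes: parts.slice(1)               // ["Expires=...", "Path=/", "Secure"]
  };
}
parseSetCookie('theme=light; Expires=Tue, 15 Jan 2019 21:47:38 GMT; Path=/; Secure');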
Cookie Uses:
Session management:
Cookies were originally introduced to provide a way for users to record items they want to purchase as they navigate throughout a website (a virtual "shopping cart" or "shopping basket").
Today, however, the contents of a user's shopping cart are usually stored in a database on the server, rather than in a cookie on the client. To keep track of which user is assigned to which shopping cart, the server sends a cookie to the client that contains a unique session identifier (typically, a long string of random letters and numbers).
Because cookies are sent to the server with every request the client makes, that session identifier will be sent back to the server every time the user visits a new page on the website, which lets the server know which shopping cart to display to the user.
Another popular use of cookies is for logging into websites. When the user visits a website's login page, the web server typically sends the client a cookie containing a unique session identifier. When the user successfully logs in, the server remembers that that particular session identifier has been authenticated, and grants the user access to its services.
Because session cookies only contain a unique session identifier, the amount of personal information that a website can save about each user is virtually limitless; the website is not constrained by limits on how large a cookie can be. Session cookies also help to improve page load times, since the amount of information in a session cookie is small and requires little bandwidth.
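A minimal Node.js sketch of this pattern follows. The cookie name "sid" and the in-memory cart store are illustrative assumptions; the essential point is that the cookie carries only a random identifier while the cart itself stays on the server:

const http = require('http');
const crypto = require('crypto');

const carts = new Map();   // session id -> cart contents (server-side state)

http.createServer(function (req, res) {
  const match = /(?:^|;\s*)sid=([^;]+)/.exec(req.headers.cookie || '');
  let sid = match && match[1];
  if (!sid || !carts.has(sid)) {
    sid = crypto.randomBytes(16).toString('hex');   // unique session identifier
    carts.set(sid, []);
    res.setHeader('Set-Cookie', 'sid=' + sid + '; HttpOnly; Path=/');
  }
  res.end('Items in your cart: ' + carts.get(sid).length);
}).listen(8080);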
Personalization:
Cookies can be used to remember information about the user in order to show relevant content to that user over time. For example, a web server might send a cookie containing the username last used to log into a website so that it may be filled in automatically the next time the user logs in.
Many websites use cookies for personalization based on the user's preferences. Users select their preferences by entering them in a web form and submitting the form to the server. The server encodes the preferences in a cookie and sends the cookie back to the browser. This way, every time the user accesses a page on the website, the server can personalize the page according to the user's preferences.
For example, the Google search engine once used cookies to allow users (even non-registered ones) to decide how many search results per page they wanted to see.
DuckDuckGo also uses cookies to allow users to set viewing preferences such as the colors of the web page.
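A client-side sketch of this kind of personalization (the cookie name "bg" and the one-year lifetime are arbitrary choices for illustration):

function savePreference(color) {
  document.cookie = 'bg=' + encodeURIComponent(color) + '; Max-Age=31536000; Path=/';
}
function loadPreference() {
  var match = /(?:^|;\s*)bg=([^;]+)/.exec(document.cookie);
  return match ? decodeURIComponent(match[1]) : 'white';   // fallback default
}
document.body.style.backgroundColor = loadPreference();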
Tracking:
See also: Web visitor tracking
Tracking cookies are used to track users' web browsing habits. This can also be done to some extent by using the IP address of the computer requesting the page or the referer field of the HTTP request header, but cookies allow for greater precision. This can be demonstrated as follows:
- If the user requests a page of the site, but the request contains no cookie, the server presumes that this is the first page visited by the user. So the server creates a unique identifier (typically a string of random letters and numbers) and sends it as a cookie back to the browser together with the requested page.
- From this point on, the cookie will automatically be sent by the browser to the server every time a new page from the site is requested. The server sends the page as usual, but also stores the URL of the requested page, the date/time of the request, and the cookie in a log file.
By analyzing this log file, it is then possible to find out which pages the user has visited, in what sequence, and for how long.
Corporations exploit users' web habits by using tracking cookies to collect information about buying habits. The Wall Street Journal found that America's top fifty websites installed an average of sixty-four pieces of tracking technology onto visitors' computers, resulting in a total of 3,180 tracking files. The data can then be collected and sold to bidding corporations.
Implementation:
Cookies are arbitrary pieces of data, usually chosen and first sent by the web server, and stored on the client computer by the web browser. The browser then sends them back to the server with every request, introducing states (memory of previous events) into otherwise stateless HTTP transactions.
Without cookies, each retrieval of a web page or component of a web page would be an isolated event, largely unrelated to all other page views made by the user on the website.
Although cookies are usually set by the web server, they can also be set by the client using a scripting language such as JavaScript (unless the cookie's HttpOnly flag is set, in which case the cookie cannot be modified by scripting languages).
The cookie specifications require that browsers meet the following requirements in order to support cookies:
- Can support cookies as large as 4,096 bytes in size.
- Can support at least 50 cookies per domain (i.e. per website).
- Can support at least 3,000 cookies in total.
See also:
Browser Settings:
Most modern browsers support cookies and allow the user to disable them. The following are common options:
- To enable or disable cookies completely, so that they are always accepted or always blocked.
- To view and selectively delete cookies using a cookie manager.
- To fully wipe all private data, including cookies.
By default, Internet Explorer allows third-party cookies only if they are accompanied by a P3P "CP" (Compact Policy) field.
Add-on tools for managing cookie permissions also exist.
Privacy and third-party cookies:
See also: Do Not Track and Web analytics § Problems with cookies
Cookies have some important implications for the privacy and anonymity of web users. While cookies are sent only to the server setting them or to a server in the same Internet domain, a web page may contain images or other components stored on servers in other domains.
Cookies that are set during retrieval of these components are called third-party cookies. The older standards for cookies, RFC 2109 and RFC 2965, specify that browsers should protect user privacy and not allow sharing of cookies between servers by default. However, the newer standard, RFC 6265, explicitly allows user agents to implement whichever third-party cookie policy they wish.
Most browsers, such as Mozilla Firefox, Internet Explorer, Opera and Google Chrome, do allow third-party cookies by default, as long as the third-party website has a Compact Privacy Policy published.
Newer versions of Safari block third-party cookies, and this is planned for Mozilla Firefox as well (initially planned for version 22 but was postponed indefinitely).
Advertising companies use third-party cookies to track a user across multiple sites. In particular, an advertising company can track a user across all pages where it has placed advertising images or web bugs.
Knowledge of the pages visited by a user allows the advertising company to target advertisements to the user's presumed preferences.
Website operators who do not disclose third-party cookie use to consumers run the risk of harming consumer trust if cookie use is discovered. Having clear disclosure (such as in a privacy policy) tends to eliminate any negative effects of such cookie discovery.
The possibility of building a profile of users is a privacy threat, especially when tracking is done across multiple domains using third-party cookies. For this reason, some countries have legislation about cookies.
The United States government set strict rules on setting cookies in 2000 after it was disclosed that the White House drug policy office had used cookies to track computer users viewing its online anti-drug advertising. In 2002, privacy activist Daniel Brandt found that the CIA had been leaving persistent cookies on computers that had visited its website.
When notified it was violating policy, the CIA stated that these cookies were not intentionally set and stopped setting them. On December 25, 2005, Brandt discovered that the National Security Agency (NSA) had been leaving two persistent cookies on visitors' computers due to a software upgrade. After being informed, the NSA immediately disabled the cookies.
Cookie theft and session hijacking:
Most websites use cookies as the only identifiers for user sessions, because other methods of identifying web users have limitations and vulnerabilities. If a website uses cookies as session identifiers, attackers can impersonate users' requests by stealing a full set of victims' cookies.
From the web server's point of view, a request from an attacker then has the same authentication as the victim's requests; thus the request is performed on behalf of the victim's session.
Listed below are various scenarios of cookie theft and user session hijacking (even without stealing user cookies) that work against websites relying solely on HTTP cookies for user identification:
- Network eavesdropping
- Publishing false sub-domain: DNS cache poisoning
- Cross-site scripting: cookie theft
- Cross-site scripting: proxy request
- Cross-site request forgery
Drawbacks of cookies:
Besides privacy concerns, cookies also have some technical drawbacks. In particular, they do not always accurately identify users, they can be used for security attacks, and they are often at odds with the Representational State Transfer (REST) software architectural style.
Inaccurate identification:
If more than one browser is used on a computer, each usually has a separate storage area for cookies. Hence cookies do not identify a person, but a combination of a user account, a computer, and a web browser. Thus, anyone who uses multiple accounts, computers, or browsers has multiple sets of cookies.
Likewise, cookies do not differentiate between multiple users who share the same user account, computer, and browser.
Inconsistent state on client and server:
The use of cookies may generate an inconsistency between the state of the client and the state as stored in the cookie. If the user acquires a cookie and then clicks the "Back" button of the browser, the state on the browser is generally not the same as before that acquisition.
As an example, if the shopping cart of an online shop is built using cookies, the content of the cart may not change when the user goes back in the browser's history: if the user presses a button to add an item in the shopping cart and then clicks on the "Back" button, the item remains in the shopping cart.
This might not be the intention of the user, who possibly wanted to undo the addition of the item. This can lead to unreliability, confusion, and bugs. Web developers should therefore be aware of this issue and implement measures to handle such situations.
Alternatives to cookies:
Some of the operations that can be done using cookies can also be done using other mechanisms.
JSON Web Tokens:
A JSON Web Token (JWT) is a self-contained packet of information that can be used to store user identity and authenticity information, which allows JWTs to be used in place of session cookies. Unlike cookies, which are automatically attached to each HTTP request by the browser, JWTs must be explicitly attached to each HTTP request by the web application.
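A sketch of that difference (the API URL and the storage key are hypothetical): the application must attach the token itself, typically in an Authorization header:

const token = localStorage.getItem('jwt');   // obtained earlier, e.g. at login
fetch('https://api.example.com/profile', {
  headers: { 'Authorization': 'Bearer ' + token }   // attached explicitly, not automatically
}).then(function (res) { return res.json(); });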
HTTP authentication:
The HTTP protocol includes the basic access authentication and the digest access authentication protocols, which allow access to a web page only when the user has provided the correct username and password. If the server requires such credentials for granting access to a web page, the browser requests them from the user and, once obtained, the browser stores and sends them in every subsequent page request. This information can be used to track the user.
IP address:
Some users may be tracked based on the IP address of the computer requesting the page. The server knows the IP address of the computer running the browser (or the proxy, if any is used) and could theoretically link a user's session to this IP address.
However, IP addresses are generally not a reliable way to track a session or identify a user. Many computers designed to be used by a single user, such as office PCs or home PCs, are behind a network address translator (NAT).
This means that several PCs will share a public IP address. Furthermore, some systems, such as Tor, are designed to retain Internet anonymity, rendering tracking by IP address impractical, impossible, or a security risk.
URL (query string):
A more precise technique is based on embedding information into URLs. The query string part of the URL is the part that is typically used for this purpose, but other parts can be used as well. The Java Servlet and PHP session mechanisms both use this method if cookies are not enabled.
This method consists of the web server appending query strings containing a unique session identifier to all the links inside of a web page. When the user follows a link, the browser sends the query string to the server, allowing the server to identify the user and maintain state.
These kinds of query strings are very similar to cookies in that both contain arbitrary pieces of information chosen by the server and both are sent back to the server on every request.
However, there are some differences. Since a query string is part of a URL, if that URL is later reused, the same attached piece of information will be sent to the server, which could lead to confusion. For example, if the preferences of a user are encoded in the query string of a URL and the user sends this URL to another user by e-mail, those preferences will be used for that other user as well.
Moreover, if the same user accesses the same page multiple times from different sources, there is no guarantee that the same query string will be used each time. For example, if a user visits a page by coming from a page internal to the site the first time, and then visits the same page by coming from an external search engine the second time, the query strings would likely be different. If cookies were used in this situation, the cookies would be the same.
Other drawbacks of query strings are related to security. Storing data that identifies a session in a query string enables session fixation attacks, referer logging attacks and other security exploits. Transferring session identifiers as HTTP cookies is more secure.
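A simplified client-side sketch of the rewriting step (normally done by the server when it generates the page; the parameter name "sessionid" is just an example):

var sessionId = 'abc123';   // normally generated by the server
document.querySelectorAll('a[href^="/"]').forEach(function (link) {
  var url = new URL(link.href);
  url.searchParams.set('sessionid', sessionId);   // append the identifier
  link.href = url.toString();                     // e.g. /cart?sessionid=abc123
});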
Hidden form fields:
Another form of session tracking is to use web forms with hidden fields. This technique is very similar to using URL query strings to hold the information and has many of the same advantages and drawbacks.
In fact, if the form is handled with the HTTP GET method, then this technique is similar to using URL query strings, since the GET method adds the form fields to the URL as a query string. But most forms are handled with HTTP POST, which causes the form information, including the hidden fields, to be sent in the HTTP request body, which is neither part of the URL, nor of a cookie.
This approach presents two advantages from the point of view of the tracker:
- Having the tracking information placed in the HTTP request body rather than in the URL means it will not be noticed by the average user.
- The session information is not copied when the user copies the URL (to bookmark the page or send it via email, for example).
"window.name" DOM property:
All current web browsers can store a fairly large amount of data (2–32 MB) via JavaScript using the DOM property window.name. This data can be used instead of session cookies and is also cross-domain. The technique can be coupled with JSON/JavaScript objects to store complex sets of session variables on the client side.
The downside is that every separate window or tab will initially have an empty window.name property when opened. Furthermore, the property can be used for tracking visitors across different websites, making it of concern for Internet privacy.
In some respects, this can be more secure than cookies due to the fact that its contents are not automatically sent to the server on every request like cookies are, so it is not vulnerable to network cookie sniffing attacks. However, if special measures are not taken to protect the data, it is vulnerable to other attacks because the data is available across different websites opened in the same window or tab.
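A brief sketch of the technique (the stored object is illustrative): data serialized into window.name survives navigations within the same tab:

window.name = JSON.stringify({ user: 'guest', cart: ['book', 'pen'] });
// ...after navigating to another page in the same tab:
var state = window.name ? JSON.parse(window.name) : {};
console.log(state.cart);   // ['book', 'pen']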
Identifier for advertisers:
Apple uses a tracking technique called "identifier for advertisers" (IDFA). This technique assigns a unique identifier to every user that buys an Apple iOS device (such as an iPhone or iPad). This identifier is then used by Apple's advertising network, iAd, to determine the ads that individuals are viewing and responding to.
ETag:
Main article: HTTP ETag § Tracking using ETags
Because ETags are cached by the browser, and returned with subsequent requests for the same resource, a tracking server can simply repeat any ETag received from the browser to ensure an assigned ETag persists indefinitely (in a similar way to persistent cookies). Additional caching headers can also enhance the preservation of ETag data.
ETags can be flushed in some browsers by clearing the browser cache.
Web storage:
Main article: Web storage
Some web browsers support persistence mechanisms which allow the page to store the information locally for later use.
The HTML5 standard (which most modern web browsers support to some extent) includes a JavaScript API called Web storage that allows two types of storage: local storage and session storage.
Local storage behaves similarly to persistent cookies while session storage behaves similarly to session cookies, except that session storage is tied to an individual tab/window's lifetime (AKA a page session), not to a whole browser session like session cookies.
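A minimal sketch of the two storage types (key names and values are illustrative):

    // Minimal sketch: Web Storage as a cookie substitute (browser environment).
    localStorage.setItem("visitorId", "v-1234");   // persists across restarts, like a persistent cookie
    sessionStorage.setItem("cartStep", "2");       // scoped to this tab's page session

    // Unlike cookies, neither value is sent to the server automatically.
    console.log(localStorage.getItem("visitorId")); // "v-1234"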
Internet Explorer supports persistent information in the browser's history, in the browser's favorites, in an XML store ("user data"), or directly within a web page saved to disk.
Some web browser plugins include persistence mechanisms as well. For example, Adobe Flash has Local shared object and Microsoft Silverlight has Isolated storage.
Browser cache:
Main article: Web cache
The browser cache can also be used to store information that can be used to track individual users. This technique takes advantage of the fact that the web browser will use resources stored within the cache instead of downloading them from the website when it determines that the cache already has the most up-to-date version of the resource.
For example, a website could serve a JavaScript file that contains code which sets a unique identifier for the user (for example, var userId = 3243242;). After the user's initial visit, every time the user accesses the page, this file will be loaded from the cache instead of downloaded from the server. Thus, its content will never change.
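The sketch below fleshes out that example; the script path and cache headers are assumptions chosen to keep the file cached for a long time, not a description of any particular site:

    // Minimal sketch: cache-based tracking around the example above.
    //
    // First visit: the server responds to GET /js/profile.js with
    //   var userId = 3243242;
    // and headers that keep it cached:
    //   Cache-Control: public, max-age=31536000, immutable
    //
    // Later visits: the browser reuses the cached script, so the identifier never changes.
    declare const userId: number;                   // defined by the cached script
    console.log("returning visitor:", userId);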
Browser fingerprint:
Main article: Device fingerprint
A browser fingerprint is information collected about a browser's configuration, such as version number, screen resolution, and operating system, for the purpose of identification. Fingerprints can be used to fully or partially identify individual users or devices even when cookies are turned off.
Basic web browser configuration information has long been collected by web analytics services in an effort to accurately measure real human web traffic and discount various forms of click fraud.
With the assistance of client-side scripting languages, collection of much more esoteric parameters is possible. Assimilation of such information into a single string comprises a device fingerprint.
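As a minimal sketch of that assimilation step (the chosen traits are a small illustrative subset; real fingerprinters also use fonts, plugins, and canvas rendering), browser properties can be concatenated and hashed into a single identifier:

    // Minimal sketch: combining browser traits into a fingerprint (browser environment).
    const traits = [
      navigator.userAgent,
      navigator.language,
      screen.width + "x" + screen.height,
      screen.colorDepth,
      new Date().getTimezoneOffset(),
    ].join("|");

    // Hash the combined string so it can be stored or compared compactly.
    crypto.subtle.digest("SHA-256", new TextEncoder().encode(traits)).then(buf => {
      const hex = Array.from(new Uint8Array(buf))
        .map(b => b.toString(16).padStart(2, "0"))
        .join("");
      console.log("fingerprint:", hex);
    });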
In 2010, EFF measured at least 18.1 bits of entropy possible from browser fingerprinting. Canvas fingerprinting, a more recent technique, claims to add another 5.7 bits.
See also:
- Dynamic HTML
- Enterprise JavaBeans
- Session (computer science)
- Secure cookies
- RFC 6265, the current official specification for HTTP cookies
- HTTP cookies, Mozilla Developer Network
- Using cookies via ECMAScript, Mozilla Developer Network
- How Internet Cookies Work at HowStuffWorks
- Cookies at the Electronic Privacy Information Center (EPIC)
- Mozilla Knowledge-Base: Cookies
- Cookie Domain, explaining in detail how cookie domains are handled in current major browsers
BuzzFeed.com
YouTube Video: $11 Steak Vs. $306 Steak
BuzzFeed Inc is an American Internet media company based in New York City. The firm is a social news and entertainment company with a focus on digital media.
BuzzFeed was founded in 2006 as a viral lab focusing on tracking viral content, by Jonah Peretti and John S. Johnson III. Kenneth Lerer, co-founder and chairman of The Huffington Post, started as a co-founder and investor in BuzzFeed and is now the executive chairman as well.
The company has grown into a global media and technology company providing coverage on a variety of topics including politics, DIY, animals and business. In late 2011, Ben Smith of Politico was hired as editor-in-chief to expand the site into serious journalism, long-form journalism, and reportage.
History:
Founding:
Prior to establishing BuzzFeed, Peretti was director of research and development and the OpenLab at Eyebeam, Johnson's New York City-based art and technology nonprofit, where he experimented with other viral media.
While working at the Huffington Post, Peretti started BuzzFeed as a side project, in 2006, in partnership with his former supervisor John Johnson. In the beginning, BuzzFeed employed no writers or editors, just an "algorithm to cull stories from around the web that were showing stirrings of virality."
The site initially launched an instant messaging client, BuzzBot, which messaged users a link to popular content. The messages were sent based on algorithms which examined the links that were being quickly disseminated, scouring through the feeds of hundreds of blogs that were aggregating them.
Later, the site began spotlighting the most popular links that BuzzBot found. Peretti hired curators to help describe the content that was popular around the web. In 2011, Peretti hired Politico's Ben Smith, who earlier had achieved much attention as a political blogger, to assemble a news operation in addition to the many aggregated "listicles".
Funding:
In August 2014, BuzzFeed raised $50 million from the venture capital firm Andreessen Horowitz, more than doubling previous rounds of funding. The site was reportedly valued at around $850 million by Andreessen Horowitz.
BuzzFeed generates its advertising revenue through native advertising that matches its own editorial content, and does not rely on banner ads. BuzzFeed also uses its familiarity with social media to target conventional advertising through other channels, such as Facebook.
In December 2014, growth equity firm General Atlantic acquired $50M in secondary stock of the company.
In August 2015, NBCUniversal made a $200 million equity investment in BuzzFeed. Along with plans to hire more journalists to build a more prominent "investigative" unit, BuzzFeed is hiring journalists around the world and plans to open outposts in India, Germany, Mexico, and Japan.
In October 2016, BuzzFeed raised $200 million from Comcast’s TV and movie arm NBCUniversal, at a valuation of roughly $1.7 billion.
Acquisitions:
BuzzFeed's first acquisition was in 2012 when the company purchased Kingfish Labs, a startup founded by Rob Fishman, initially focused on optimizing Facebook ads.
On October 28, 2014, BuzzFeed announced its next acquisition, taking hold of Torando Labs. The Torando team was to become BuzzFeed's first data engineering team.
Content:
BuzzFeed produces daily content, in which the work of staff reporters, contributors, syndicated cartoon artists, and its community are featured. Popular formats on the website include lists, videos, and quizzes.
While BuzzFeed initially was focused exclusively on such viral content, according to The New York Times, "it added more traditional content, building a track record for delivering breaking news and deeply reported articles" in the years up to 2014. In that year, BuzzFeed deleted over 4000 early posts, "apparently because, as time passed, they looked stupider and stupider", as observed by The New Yorker.
BuzzFeed consistently ranked at the top of NewsWhip's "Facebook Publisher Rankings" from December 2013 to April 2014, until The Huffington Post entered the position.
Video:
BuzzFeed Video, BuzzFeed Motion Pictures' flagship channel, produces original content. Its production studio and team are based in Los Angeles. Since hiring Ze Frank in 2012, BuzzFeed Video has produced several video series including "The Try Guys".
In August 2014, the company announced a new division, BuzzFeed Motion Pictures, which may produce feature-length films. As of June 27, 2017, BuzzFeed Video's YouTube channel had garnered more than 10.2 billion views and more than 12.6 million subscribers. It was also announced that YouTube signed on for two feature-length series to be created by BuzzFeed Motion Pictures, entitled Broke and Squad Wars.
Community:
On July 17, 2012, humor website McSweeney's Internet Tendency published a satirical piece entitled "Suggested BuzzFeed Articles", prompting BuzzFeed to create many of the suggestions.
BuzzFeed listed McSweeney's as a "Community Contributor." The post subsequently received more than 350,000 page views, prompted BuzzFeed to ask for user submissions, and received media attention.
Subsequently, the website launched the "Community" section in May 2013 to enable users to submit content. Users initially are limited to publishing only one post per day, but may increase their submission capacity by raising their "Cat Power", described on the BuzzFeed website as "an official measure of your rank in BuzzFeed's Community." A user's Cat Power increases as they achieve greater prominence on the site.
Technology and social media:
BuzzFeed receives the majority of its traffic by creating content that is shared on social media websites, and it judges its content by how viral it is likely to become.
The company operates in a "continuous feedback loop" in which all of its articles and videos are used as input for its data operation. The site tests and tracks its custom content with an in-house team of data scientists and an external-facing "social dashboard."
Using an algorithm dubbed "Viral Rank", created by Jonah Peretti and Duncan Watts, the company lets editors, users, and advertisers try many different ideas and maximizes the distribution of those that take off. Staff writers are ranked by views on an internal leaderboard.
In 2014, BuzzFeed received 75% of its views from links on social media outlets such as Pinterest, Twitter, and Facebook.
Tasty:
BuzzFeed's video series on comfort food, Tasty, is made for Facebook, where it has ninety million followers as of November 2017.
The channel has substantially more views than BuzzFeed's dedicated food site. The channel included five spinoff segments: "Tasty Junior", which eventually spun off into its own page, "Tasty Happy Hour" (alcoholic beverages), "Tasty Fresh", "Tasty Vegetarian", and "Tasty Story", in which celebrities make and discuss their own recipes. Tasty has also released a cookbook. The company also operates international versions of Tasty in other languages.
Worth It:
Since 2016, Tasty also sponsors a show named "Worth It" starring Steven Lim, Andrew Ilnyckyj, and Adam Bianchi. In each episode, the trio visit three different food places with three different price points in one food category.
Steven Lim also stars on some of BuzzFeed Blue's "Worth It - Lifestyle" videos. The series is similar in that three items or experiences from different companies, each at a different price point, are compared, but it focuses on material items and experiences such as plane seats, hotel rooms, and haircuts.
BuzzFeed Unsolved:
BuzzFeed Unsolved is the most successful web series on BuzzFeed's BuzzFeedBlue, created by Ryan Bergara. The show features Ryan Bergara, Shane Madej, and occasionally Brent Bennett.
Notable Stories:
Trump dossier:
Main article: Donald Trump–Russia dossier
On January 10, 2017, CNN reported on the existence of classified documents that claimed Russia had compromising personal and financial information about President-elect Donald Trump. Both Trump and President Barack Obama had been briefed on the content of the dossier the previous week. CNN did not publish the dossier, or any specific details of the dossier, as they could not be verified.
Later the same day, BuzzFeed published a 35-page dossier nearly in-full. BuzzFeed said that the dossier was unverified and "includes some clear errors". The dossier had been read widely by political and media figures in Washington, and previously been sent to multiple journalists who had declined to publish it as unsubstantiated.
In response the next day, Trump called the website a "failing pile of garbage" during a news conference. The publication of the dossier was also met with criticism from, among others, CNN reporter Jake Tapper, who called it irresponsible. BuzzFeed editor-in-chief Ben Smith defended the site's decision to publish the dossier.
Aleksej Gubarev, chief of technology company XBT and a figure mentioned in the dossier, sued BuzzFeed on February 3, 2017. The suit, filed in a Broward County, Florida court, centers on the allegations from the dossier that XBT had been "using botnets and porn traffic to transmit viruses, plant bugs, steal data and conduct 'altering operations' against the Democratic Party leadership."
Traingate:
In September 2016, Private Eye revealed that a Guardian story from August 16 on "Traingate" was written by a former Socialist Workers Party member who joined the Labour Party once Jeremy Corbyn became Labour leader.
The journalist also had a conflict of interest with the individual who filmed Corbyn on the floor of an allegedly overcrowded train, something the Guardian did not mention in its reporting. Paul Chadwick, the global readers' editor for the Guardian, later stated that the story was published too quickly, with aspects of the story not corroborated by third-party sources prior to reporting. The story proved to be an embarrassment for Corbyn and the Guardian.
The story originally was submitted to BuzzFeed News, who rejected the article because its author had "attached a load of conditions around the words and he wanted it written his way", according to BuzzFeed UK editor-in-chief Janine Gibson.
Watermelon stunt:
Main article: Exploding watermelon stunt
On April 8, 2016, two BuzzFeed interns created a live stream on Facebook, during which rubber bands were wrapped one by one around a watermelon until the pressure caused it to explode. The Daily Dot compared it to something from America's Funniest Home Videos or by the comedian Gallagher, and "just as stupid-funny, but with incredible immediacy and zero production costs".
The video is seen as part of Facebook's strategy to shift to live video, Facebook Live, to counter the rise of Snapchat and Periscope among a younger audience.
"The dress":
Main article: The dress
In February 2015, a post from BuzzFeed's Tumblr editor Cates Holderness, which sparked a debate over the color of an item of clothing, garnered more than 28 million views in one day and set a record for most concurrent visitors to a BuzzFeed post.
Holderness had shown the picture to other members of the site's social media team, who immediately began arguing about the dress colors among themselves. After creating a simple poll for users of the site, she left work and took the subway back to her Brooklyn home.
When she got off the train and checked her telephone, it was overwhelmed by the messages on various sites. "I couldn't open Twitter because it kept crashing. I thought somebody had died, maybe. I didn't know what was going on." Later in the evening the page set a new record at BuzzFeed for concurrent visitors, which would reach 673,000 at its peak.
Leaked Milo Yiannopoulos emails:
An exposé by BuzzFeed published in October 2017 documented how Breitbart News solicited story ideas and copy edits from white supremacists and neo-Nazis, with Milo Yiannopoulos acting as an intermediary.
Yiannopoulos and other Breitbart employees developed and marketed the values and tactics of these groups, attempting to make them palatable to a broader audience. In the article, BuzzFeed senior technology reporter Joseph Bernstein wrote that Breitbart actively fed from the "most hate-filled, racist voices of the alt-right" and helped to normalize the American far right.
MSNBC's Chris Hayes called the 8,500-word article "one of the best reported pieces of the year." The Columbia Journalism Review described the story as a scrupulous, months-long project and "the culmination of years of reporting and source-building on a beat that few thought much about until Donald Trump won the presidential election."
Kevin Spacey sexual misconduct accusation:
On October 29, 2017, BuzzFeed published the original story in which actor Anthony Rapp accused actor Kevin Spacey of making sexual advances toward him at a party in 1986, when Rapp was 14 and Spacey was 26.
Subsequently, numerous other men alleged that Spacey had sexually harassed or assaulted them. As a result, Netflix indefinitely suspended production of Spacey's TV series House of Cards and opted not to release his film Gore, which was in post-production at the time, on its service. Spacey was also replaced by Christopher Plummer in Ridley Scott's film All the Money in the World, which was six weeks from release.
Criticism and Controversies:
BuzzFeed has been accused of plagiarizing original content from competitors throughout the online and offline press. On June 28, 2012, Gawker's Adrian Chen posted a story entitled "BuzzFeed and the Plagiarism Problem". In the article, Chen observed that one of BuzzFeed's most popular writers, Matt Stopera, frequently had copied and pasted "chunks of text into lists without attribution." On March 8, 2013, The Atlantic Wire also published an article concerning BuzzFeed and plagiarism issues.
BuzzFeed has been the subject of multiple copyright infringement lawsuits, for both using content it had no rights to and encouraging its proliferation without attributing its sources: one for an individual photographer's photograph, and another for nine celebrity photographs from a single photography company.
In July 2014, BuzzFeed writer Benny Johnson was accused of multiple instances of plagiarism. Two anonymous Twitter users chronicled Johnson attributing work that was not his own, but "directly lift[ed] from other reporters, Wikipedia, and Yahoo! Answers," all without credit.
BuzzFeed editor Ben Smith initially defended Johnson, calling him a "deeply original writer". Days later, Smith acknowledged that Johnson had plagiarized the work of others 40 times and announced that Johnson had been fired, and apologized to BuzzFeed readers.
"Plagiarism, much less copying unchecked facts from Wikipedia or other sources, is an act of disrespect to the reader," Smith said. "We are deeply embarrassed and sorry to have misled you." In total, 41 instances of plagiarism were found and corrected. Johnson, who had previously worked for the Mitt Romney 2008 presidential campaign, subsequently, was hired by the conservative magazine National Review as their social media editor.
In October 2014, it was noted by the Pew Research Center that in the United States, BuzzFeed was viewed as an unreliable source by the majority of people, regardless of political affiliation.
In April 2015, BuzzFeed drew scrutiny after Gawker observed the publication had deleted two posts that criticized advertisers. One of the posts criticized Dove soap (manufactured by Unilever), while another criticized Hasbro. Both companies advertise with BuzzFeed.
Ben Smith apologized in a memo to staff for his actions. "I blew it," Smith wrote. "Twice in the past couple of months, I've asked editors—over their better judgment and without any respect to our standards or process—to delete recently published posts from the site. Both involved the same thing: my overreaction to questions we've been wrestling with about the place of personal opinion pieces on our site. I reacted impulsively when I saw the posts and I was wrong to do that. We've reinstated both with a brief note."
Days later, one of the authors of the deleted posts, Arabelle Sicardi, resigned. An internal review by the company found three additional posts deleted for being critical of products or advertisements (by Microsoft, Pepsi, and Unilever).
In September 2015, The Christian Post wrote that a video by BuzzFeed entitled I'm Christian But I'm Not... was getting criticism from conservative Christians for not specifically mentioning Christ or certain Biblical values.
In 2016, the Advertising Standards Authority of the United Kingdom ruled that BuzzFeed broke the UK advertising rules for failing to make it clear that an article on "14 Laundry Fails We've All Experienced" that promoted Dylon was an online advertorial paid for by the brand.
Although the ASA accepted BuzzFeed's defence that links to the piece from its homepage and search results clearly labelled the article as "sponsored content", it found that this failed to take into account that many people may link to the story directly, and ruled that the labeling "was not sufficient to make clear that the main content of the web page was an advertorial and that editorial content was therefore retained by the advertiser".
In February 2016, Scaachi Koul, a Senior Writer for BuzzFeed Canada tweeted a request for pitches stating that BuzzFeed was "...looking for mostly non-white non-men" followed by "If you are a white man upset that we are looking mostly for non-white non-men I don't care about you go write for Maclean's."
When confronted, she followed with the tweet "White men are still permitted to pitch, I will read it, I will consider it. I'm just less interested because, ugh, men." In response to the tweets, Koul received numerous rape and death threats and racist insults.
Sarmishta Subramanian, a former colleague of Koul's writing for Maclean's, condemned the reaction to the tweets and commented that Koul's request for diversity was appropriate, though she said Koul's provocative approach raised concerns of tokenism that might hamper BuzzFeed's stated goals.
See also:
- Official website
- ClickHole, a parody of BuzzFeed and similar websites
- Mic (media company)
- Vice Media, Inc.
Fandango
YouTube Video: Easy Movie Tickets with Fandango
Fandango is an American ticketing company that sells movie tickets via its website as well as through its mobile app.
Industry revenue increased rapidly for several years after the company's formation. However, as the Internet grew in popularity, small- and medium-sized movie-theater chains began to offer independent ticket sale capabilities through their own websites.
In addition, a new paradigm of moviegoers printing their own tickets at home (with barcodes to be scanned at the theater) emerged, in services offered by PrintTixUSA and by point-of-sale software vendor operated web sites like "ticketmakers.com" (and eventually Fandango itself).
Finally, an overall slump in moviegoing continued into the 2000s, as home theaters, DVDs, and high definition televisions proliferated in average households, turning the home into the preferred place to screen films.
On April 11, 2007, Comcast acquired Fandango, with plans to integrate it into a new entertainment website called "Fancast.com," set to launch the summer of 2007. In June 2008, the domain Movies.com was acquired from Disney. With Comcast's purchase of a stake in NBCUniversal in January 2011, Fandango and all other Comcast media assets were merged into the company.
In March 2012, Fandango announced a partnership with Yahoo! Movies, becoming the official online and mobile ticketer serving over 30 million registered users of the Yahoo! service.
On January 29, 2016, Fandango announced its acquisition of M-GO, a joint venture between Technicolor SA and DreamWorks Animation (which NBCUniversal acquired three months later); Fandango later rebranded the service as "FandangoNOW".
In February of that same year Fandango announced its acquisition of Flixster and Rotten Tomatoes from Time Warner's Warner Bros. Entertainment. As part of the deal, Warner Bros. would become a 30% shareholder of the combined Fandango company.
In December 2016, Fandango Media purchased Cinepapaya, a Peru-based website for purchasing movie tickets, for an undisclosed amount.
Click on any of the following blue hyperlinks for more about the website Fandango:
Online Advertising
YouTube Video: Buyer Beware -- The Pitfalls of Online Advertising
Pictured: This chart presents the digital advertising spending in the United States from 2011 to 2014, as well as a forecast until 2019, broken down by channel. The source projected that mobile ad spending would grow from 1.57 billion U.S. dollars in 2011 to 65.49 billion in 2019.
Online advertising, also called online marketing, Internet advertising, or web advertising, is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. Consumers view online advertising as an unwanted distraction with few benefits and have increasingly turned to ad blocking for a variety of reasons.
It includes the following:
Like other advertising media, online advertising frequently involves both a publisher, who integrates advertisements into its online content, and an advertiser, who provides the advertisements to be displayed on the publisher's content.
Other potential participants include advertising agencies who help generate and place the ad copy, an ad server which technologically delivers the ad and tracks statistics, and advertising affiliates who do independent promotional work for the advertiser.
In 2016, Internet advertising revenues in the United States surpassed those of cable television and broadcast television. In 2017, Internet advertising revenues in the United States totaled $83.0 billion, a 14% increase over the $72.50 billion in revenues in 2016.
Many common online advertising practices are controversial and increasingly subject to regulation. Online ad revenues may not adequately replace other publishers' revenue streams.
Declining ad revenue has led some publishers to hide their content behind paywalls.
Delivery Methods:
Display Advertising:
Display advertising conveys its advertising message visually using text, logos, animations, videos, photographs, or other graphics. Display advertisers frequently target users with particular traits to increase the ads' effect.
Online advertisers (typically through their ad servers) often use cookies, which are unique identifiers of specific computers, to decide which ads to serve to a particular consumer. Cookies can track whether a user left a page without buying anything, so the advertiser can later retarget the user with ads from the site the user visited.
As advertisers collect data across multiple external websites about a user's online activity, they can create a detailed profile of the user's interests to deliver even more targeted advertising. This aggregation of data is called behavioral targeting. Advertisers can also target their audience by using contextual targeting to deliver display ads related to the content of the web page where the ads appear.
Re-targeting, behavioral targeting, and contextual advertising all are designed to increase an advertiser's return on investment, or ROI, over untargeted ads.
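A minimal sketch of cookie-based retargeting follows; the cookie name, product identifier, and ad names are hypothetical:

    // Minimal sketch: choosing an ad from a cookie set on an earlier visit (browser side).
    function readCookie(name: string): string | null {
      const match = document.cookie.split("; ").find(c => c.startsWith(name + "="));
      return match ? decodeURIComponent(match.split("=")[1]) : null;
    }

    // Earlier, the advertiser's site might have set:
    //   Set-Cookie: abandoned_sku=sku-991; Max-Age=2592000
    const abandonedSku = readCookie("abandoned_sku");
    const adToServe = abandonedSku
      ? `retarget-banner-${abandonedSku}`   // show the product the user left behind
      : "untargeted-banner";
    console.log(adToServe);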
Advertisers may also deliver ads based on a user's suspected geography through geotargeting.
A user's IP address communicates some geographic information (at minimum, the user's country or general region). The geographic information from an IP can be supplemented and refined with other proxies or information to narrow the range of possible locations. For example, with mobile devices, advertisers can sometimes use a phone's GPS receiver or the location of nearby mobile towers.
Cookies and other persistent data on a user's machine may help narrow a user's location further.
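The sketch below shows coarse IP-based geotargeting under strong assumptions: the prefix table is hypothetical (real systems use commercial IP-to-location databases), and the addresses come from documentation ranges:

    // Minimal sketch: picking an ad by coarse IP-derived region.
    const regionByPrefix: Record<string, string> = {
      "198.51.100.": "US",   // documentation ranges, not real assignments
      "203.0.113.": "AU",
    };

    function pickAdForIp(ip: string): string {
      const entry = Object.entries(regionByPrefix).find(([prefix]) => ip.startsWith(prefix));
      const region = entry ? entry[1] : "UNKNOWN";
      return region === "US" ? "us-retail-banner" : "generic-banner"; // hypothetical ad IDs
    }

    console.log(pickAdForIp("198.51.100.7")); // "us-retail-banner"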
Web banner advertising:
Web banners or banner ads typically are graphical ads displayed within a web page. Many banner ads are delivered by a central ad server.
Banner ads can use rich media to incorporate video, audio, animations, buttons, forms, or other interactive elements using Java applets, HTML5, Adobe Flash, and other programs.
Frame ad (traditional banner):
Frame ads were the first form of web banners. The colloquial usage of "banner ads" often refers to traditional frame ads. Website publishers incorporate frame ads by setting aside a particular space on the web page. The Interactive Advertising Bureau's Ad Unit Guidelines propose standardized pixel dimensions for ad units.
Pop-ups/pop-unders:
A pop-up ad is displayed in a new web browser window that opens above a website visitor's initial browser window. A pop-under ad opens a new browser window under a website visitor's initial browser window. Pop-under ads and similar technologies are now advised against by online authorities such as Google, who state that they "do not condone this practice".
Floating ad:
A floating ad, or overlay ad, is a type of rich media advertisement that appears superimposed over the requested website's content. Floating ads may disappear or become less obtrusive after a preset time period.
Expanding ad:
An expanding ad is a rich media frame ad that changes dimensions upon a predefined condition, such as a preset amount of time a visitor spends on a webpage, the user's click on the ad, or the user's mouse movement over the ad. Expanding ads allow advertisers to fit more information into a restricted ad space.
Trick banners:
A trick banner is a banner ad where the ad copy imitates some screen element users commonly encounter, such as an operating system message or popular application message, to induce ad clicks.
Trick banners typically do not mention the advertiser in the initial ad, and thus they are a form of bait-and-switch. Trick banners commonly attract a higher-than-average click-through rate, but tricked users may resent the advertiser for deceiving them.
News Feed Ads:
"News Feed Ads", also called "Sponsored Stories", "Boosted Posts", typically exist on social media platforms that offer a steady stream of information updates ("news feed") in regulated formats (i.e. in similar sized small boxes with a uniform style). Those advertisements are intertwined with non-promoted news that the users are reading through.
These advertisements can be of any content, such as promoting a website, a fan page, an app, or a product. Some examples are:
This display ad format falls into its own category because, unlike banner ads, which are quite distinguishable, News Feed Ads blend well into non-paid news updates. This format of online advertisement yields much higher click-through rates than traditional display ads.
Display advertising process overview:
The process by which online advertising is displayed can involve many parties. In the simplest case, the website publisher selects and serves the ads. Publishers which operate their own advertising departments may use this method.
The ads may be outsourced to an advertising agency under contract with the publisher, and served from the advertising agency's servers.
Alternatively, ad space may be offered for sale in a bidding market using an ad exchange and real-time bidding. This involves many parties interacting automatically in real time. In response to a request from the user's browser, the publisher content server sends the web page content to the user's browser over the Internet.
The page does not yet contain ads, but contains links which cause the user's browser to connect to the publisher ad server to request that the spaces left for ads be filled in with ads. Information identifying the user, such as cookies and the page being viewed, is transmitted to the publisher ad server.
The publisher ad server then communicates with a supply-side platform server. The publisher is offering ad space for sale, so they are considered the supplier. The supply side platform also receives the user's identifying information, which it sends to a data management platform. At the data management platform, the user's identifying information is used to look up demographic information, previous purchases, and other information of interest to advertisers.
Broadly speaking, there are three types of data obtained through such a data management platform:
This customer information is combined and returned to the supply side platform, which can now package up the offer of ad space along with information about the user who will view it. The supply side platform sends that offer to an ad exchange.
The ad exchange puts the offer out for bid to demand-side platforms. Demand side platforms act on behalf of ad agencies, who sell ads which advertise brands. Demand side platforms thus have ads ready to display, and are searching for users to view them.
Bidders get the information about the user ready to view the ad, and decide, based on that information, how much to offer to buy the ad space. According to the Interactive Advertising Bureau, a demand side platform has 10 milliseconds to respond to an offer. The ad exchange picks the winning bid and informs both parties.
The ad exchange then passes the link to the ad back through the supply side platform and the publisher's ad server to the user's browser, which then requests the ad content from the agency's ad server. The ad agency can thus confirm that the ad was delivered to the browser.
This is simplified, according to the IAB. Exchanges may try to unload unsold ("remnant") space at low prices through other exchanges. Some agencies maintain semi-permanent pre-cached bids with ad exchanges, and those may be examined before going out to additional demand side platforms for bids.
The process for mobile advertising is different and may involve mobile carriers and handset software manufacturers.
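The toy auction below condenses the exchange-to-demand-side-platform step described above into a few lines; the platform names, segments, prices, and URLs are all invented for illustration:

    // Minimal sketch: an ad exchange collecting bids and picking the highest CPM.
    interface BidRequest { userSegment: string; adSlot: string; }
    interface Bid { dsp: string; priceCpm: number; adUrl: string; }

    const demandSidePlatforms: Array<(req: BidRequest) => Bid | null> = [
      (req) => req.userSegment === "sports"
        ? { dsp: "dsp-a", priceCpm: 2.1, adUrl: "https://ads.example/a" }
        : null,
      () => ({ dsp: "dsp-b", priceCpm: 1.4, adUrl: "https://ads.example/b" }),
    ];

    function runAuction(req: BidRequest): Bid | null {
      const bids = demandSidePlatforms
        .map(dsp => dsp(req))
        .filter((b): b is Bid => b !== null);
      return bids.sort((a, b) => b.priceCpm - a.priceCpm)[0] ?? null; // winner's ad URL goes back to the publisher
    }

    console.log(runAuction({ userSegment: "sports", adSlot: "top-banner" }));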
Interstitial:
An interstitial ad displays before a user can access requested content, sometimes while the user is waiting for the content to load. Interstitial ads are a form of interruption marketing.
Text ads:
A text ad displays text-based hyperlinks. Text-based ads may display separately from a web page's primary content, or they can be embedded by hyperlinking individual words or phrases to advertiser's websites. Text ads may also be delivered through email marketing or text message marketing. Text-based ads often render faster than graphical ads and can be harder for ad-blocking software to block.
Search engine marketing (SEM):
Search engine marketing, or SEM, is designed to increase a website's visibility in search engine results pages (SERPs). Search engines provide sponsored results and organic (non-sponsored) results based on a web searcher's query. Search engines often employ visual cues to differentiate sponsored results from organic results. Search engine marketing includes all of an advertiser's actions to make a website's listing more prominent for topical keywords.
Search engine optimization (SEO):
Search engine optimization, or SEO, attempts to improve a website's organic search rankings in SERPs by increasing the website content's relevance to search terms. Search engines regularly update their algorithms to penalize poor quality sites that try to game their rankings, making optimization a moving target for advertisers. Many vendors offer SEO services.
Sponsored search:
Sponsored search (also called sponsored links, search ads, or paid search) allows advertisers to be included in the sponsored results of a search for selected keywords. Search ads are often sold via real-time auctions, where advertisers bid on keywords. In addition to setting a maximum price per keyword, bids may include time, language, geographical, and other constraints.
Search engines originally sold listings in order of highest bids. Modern search engines rank sponsored listings based on a combination of bid price, expected click-through rate, keyword relevancy and site quality.
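A worked sketch of that ranking idea follows; the bid amounts, quality scores, and the simple bid-times-quality formula are illustrative, and real engines use more elaborate models:

    // Minimal sketch: ranking sponsored listings by bid multiplied by a quality score.
    interface KeywordBid { advertiser: string; bidPerClick: number; qualityScore: number; }

    const bids: KeywordBid[] = [
      { advertiser: "shoes-r-us", bidPerClick: 2.00, qualityScore: 0.4 },  // rank value 0.80
      { advertiser: "runfast",    bidPerClick: 1.20, qualityScore: 0.9 },  // rank value 1.08
    ];

    const ranked = [...bids].sort(
      (a, b) => b.bidPerClick * b.qualityScore - a.bidPerClick * a.qualityScore
    );

    // A lower bid can outrank a higher one when its ads are expected to perform better.
    console.log(ranked.map(b => b.advertiser)); // ["runfast", "shoes-r-us"]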
Social media marketing:
Social media marketing is commercial promotion conducted through social media websites.
Many companies promote their products by posting frequent updates and providing special offers through their social media profiles.
Mobile advertising:
Mobile advertising is ad copy delivered through wireless mobile devices such as smartphones, feature phones, or tablet computers. Mobile advertising may take the form of static or rich media display ads, SMS (Short Message Service) or MMS (Multimedia Messaging Service) ads, mobile search ads, advertising within mobile websites, or ads within mobile applications or games (such as interstitial ads, "advergaming," or application sponsorship).
Industry groups such as the Mobile Marketing Association have attempted to standardize mobile ad unit specifications, similar to the IAB's efforts for general online advertising.
Mobile advertising is growing rapidly for several reasons. There are more mobile devices in the field, connectivity speeds have improved (which, among other things, allows for richer media ads to be served quickly), screen resolutions have advanced, mobile publishers are becoming more sophisticated about incorporating ads, and consumers are using mobile devices more extensively.
The Interactive Advertising Bureau predicts continued growth in mobile advertising with the adoption of location-based targeting and other technological features not available or relevant on personal computers.
In July 2014 Facebook reported advertising revenue for the June 2014 quarter of $2.68 billion, an increase of 67 percent over the second quarter of 2013. Of that, mobile advertising revenue accounted for around 62 percent, an increase of 41 percent on the previous year.
As of 2016, 14% of marketers used live videos for advertising.
Email advertising:
Email advertising is ad copy comprising an entire email or a portion of an email message. Email marketing may be unsolicited, in which case the sender may give the recipient an option to opt out of future emails, or it may be sent with the recipient's prior consent (opt-in).
Chat advertising:
As opposed to static messaging, chat advertising refers to real time messages dropped to users on certain sites. This is done by the usage of live chat software or tracking applications installed within certain websites with the operating personnel behind the site often dropping adverts on the traffic surfing around the sites. In reality this is a subset of the email advertising but different because of its time window.
Online classified advertising:
Online classified advertising is advertising posted online in a categorical listing of specific products or services. Examples include online job boards, online real estate listings, automotive listings, online yellow pages, and online auction-based listings. Craigslist and eBay are two prominent providers of online classified listings.
Adware:
Adware is software that, once installed, automatically displays advertisements on a user's computer. The ads may appear in the software itself, integrated into web pages visited by the user, or in pop-ups/pop-unders. Adware installed without the user's permission is a type of malware.
Affiliate marketing:
Affiliate marketing occurs when advertisers organize third parties to generate potential customers for them. Third-party affiliates receive payment based on sales generated through their promotion.
Affiliate marketers generate traffic to offers from affiliate networks, and when the desired action is taken by the visitor, the affiliate earns a commission. These desired actions can be an email submission, a phone call, filling out an online form, or an online order being completed.
Content marketing:
Content marketing is any marketing that involves the creation and sharing of media and publishing content in order to acquire and retain customers. This information can be presented in a variety of formats, including blogs, news, video, white papers, e-books, infographics, case studies, how-to guides and more.
Considering that most marketing involves some form of published media, it is almost (though not entirely) redundant to call 'content marketing' anything other than simply 'marketing'.
There are, of course, other forms of marketing (in-person marketing, telephone-based marketing, word of mouth marketing, etc.) where the label is more useful for identifying the type of marketing. However, even these are usually merely presenting content that they are marketing as information in a way that is different from traditional print, radio, TV, film, email, or web media.
Online marketing platform:
Online marketing platform (OMP) is an integrated web-based platform that combines the benefits of a business directory, local search engine, search engine optimization (SEO) tool, customer relationship management (CRM) package and content management system (CMS).
Ebay and Amazon are used as online marketing and logistics management platforms. On Facebook, Twitter, YouTube, Pinterest, LinkedIn, and other Social Media, retail online marketing is also used. Online business marketing platforms such as Marketo, Aprimo, MarketBright and Pardot have been bought by major IT companies (Eloqua-Oracle, Neolane-Adobe and Unica-IBM).
Unlike television marketing in which Neilsen TV Ratings can be relied upon for viewing metrics, online advertisers do not have an independent party to verify viewing claims made by the big online platforms.
Compensation methods:
Main article: Compensation methods
Advertisers and publishers use a wide range of payment calculation methods. In 2012, advertisers calculated 32% of online advertising transactions on a cost-per-impression basis, 66% on customer performance (e.g. cost per click or cost per acquisition), and 2% on hybrids of impression and performance methods.
CPM (cost per mille):
Cost per mille, often abbreviated to CPM, means that advertisers pay for every thousand displays of their message to potential customers (mille is the Latin word for thousand). In the online context, ad displays are usually called "impressions." Definitions of an "impression" vary among publishers, and some impressions may not be charged because they don't represent a new exposure to an actual customer. Advertisers can use technologies such as web bugs to verify if an impression is actually delivered.
Publishers use a variety of techniques to increase page views, such as dividing content across multiple pages, repurposing someone else's content, using sensational titles, or publishing tabloid or sexual content.
CPM advertising is susceptible to "impression fraud," and advertisers who want visitors to their sites may not find per-impression payments a good proxy for the results they desire.
CPC (cost per click):
CPC (Cost Per Click) or PPC (Pay per click) means advertisers pay each time a user clicks on the ad. CPC advertising works well when advertisers want visitors to their sites, but it's a less accurate measurement for advertisers looking to build brand awareness. CPC's market share has grown each year since its introduction, eclipsing CPM to dominate two-thirds of all online advertising compensation methods.
Like impressions, not all recorded clicks are valuable to advertisers. GoldSpot Media reported that up to 50% of clicks on static mobile banner ads are accidental and resulted in redirected visitors leaving the new site immediately.
CPE (cost per engagement):
Cost per engagement aims to track not just that an ad unit loaded on the page (i.e., an impression was served), but also that the viewer actually saw and/or interacted with the ad.
CPV (cost per view):
Cost per view video advertising. Both Google and TubeMogul endorsed this standardized CPV metric to the IAB's (Interactive Advertising Bureau) Digital Video Committee, and it's garnering a notable amount of industry support. CPV is the primary benchmark used in YouTube Advertising Campaigns, as part of Google's AdWords platform.
CPI (cost per install):
The CPI compensation method is specific to mobile applications and mobile advertising. In CPI ad campaigns brands are charged a fixed of bid rate only when the application was installed.
Attribution of ad value:
Main article: Attribution (marketing)
In marketing, "attribution" is the measurement of effectiveness of particular ads in a consumer's ultimate decision to purchase. Multiple ad impressions may lead to a consumer "click" or other action. A single action may lead to revenue being paid to multiple ad space sellers.
Other performance-based compensation:
CPA (Cost Per Action or Cost Per Acquisition) or PPP (Pay Per Performance) advertising means the advertiser pays for the number of users who perform a desired activity, such as completing a purchase or filling out a registration form.
Performance-based compensation can also incorporate revenue sharing, where publishers earn a percentage of the advertiser's profits made as a result of the ad. Performance-based compensation shifts the risk of failed advertising onto publishers.
Fixed cost:
Fixed cost compensation means advertisers pay a fixed cost for delivery of ads online, usually over a specified time period, irrespective of the ad's visibility or users' response to it.
One example is CPD (cost per day) where advertisers pay a fixed cost for publishing an ad for a day irrespective of impressions served or clicks.
Benefits of Online Advertising:
Cost:
The low costs of electronic communication reduce the cost of displaying online advertisements compared to offline ads. Online advertising, and in particular social media, provides a low-cost means for advertisers to engage with large established communities.
Advertising online offers better returns than in other media.
Measurability:
Online advertisers can collect data on their ads' effectiveness, such as the size of the potential audience or actual audience response, how a visitor reached their advertisement, whether the advertisement resulted in a sale, and whether an ad actually loaded within a visitor's view. This helps online advertisers improve their ad campaigns over time.
Formatting:
Advertisers have a wide variety of ways of presenting their promotional messages, including the ability to convey images, video, audio, and links. Unlike many offline ads, online ads also can be interactive. For example, some ads let users input queries or let users follow the advertiser on social media. Online ads can even incorporate games.
Targeting:
Publishers can offer advertisers the ability to reach customizable and narrow market segments for targeted advertising. Online advertising may use geo-targeting to display relevant advertisements to the user's geography.
Advertisers can customize each individual ad to a particular user based on the user's previous preferences. Advertisers can also track whether a visitor has already seen a particular ad in order to reduce unwanted repetitious exposures and provide adequate time gaps between exposures.
Coverage:
Online advertising can reach nearly every global market, and online advertising influences offline sales.
Speed:
Once ad design is complete, online ads can be deployed immediately. The delivery of online ads does not need to be linked to the publisher's publication schedule. Furthermore, online advertisers can modify or replace ad copy more rapidly than their offline counterparts.
Concerns:
Security concerns:
According to a US Senate investigation, the current state of online advertising endangers the security and privacy of users.
Banner blindness:
Eye-tracking studies have shown that Internet users often ignore web page zones likely to contain display ads (sometimes called "banner blindness"), and this problem is worse online than in offline media. On the other hand, studies suggest that even those ads "ignored" by the users may influence the user subconsciously.
Fraud on the advertiser:
There are numerous ways that advertisers can be overcharged for their advertising. For example, click fraud occurs when a publisher or third parties click (manually or through automated means) on a CPC ad with no legitimate buying intent. For example, click fraud can occur when a competitor clicks on ads to deplete its rival's advertising budget, or when publishers attempt to manufacture revenue.
Click fraud is especially associated with pornography sites. In 2011, certain scamming porn websites launched dozens of hidden pages on each visitor's computer, forcing the visitor's computer to click on hundreds of paid links without the visitor's knowledge.
As with offline publications, online impression fraud can occur when publishers overstate the number of ad impressions they have delivered to their advertisers. To combat impression fraud, several publishing and advertising industry associations are developing ways to count online impressions credibly.
Technological variations:
Heterogeneous clients: Because users have different operating systems, web browsers and computer hardware (including mobile devices and different screen sizes), online ads may appear to users differently from how the advertiser intended, or the ads may not display properly at all.
A 2012 comScore study revealed that, on average, 31% of ads were not "in-view" when rendered, meaning they never had an opportunity to be seen. Rich media ads create even greater compatibility problems, as some developers may use competing (and exclusive) software to render the ads (see e.g. Comparison of HTML 5 and Flash).
Furthermore, advertisers may encounter legal problems if legally required information doesn't actually display to users, even if that failure is due to technological heterogeneity.
In the United States, the FTC has released a set of guidelines indicating that it's the advertisers' responsibility to ensure the ads display any required disclosures or disclaimers, irrespective of the users' technology.
Ad blocking: Ad blocking, or ad filtering, means the ads do not appear to the user because the user uses technology to screen out ads. Many browsers block unsolicited pop-up ads by default.
Other software programs or browser add-ons may also block the loading of ads, or block elements on a page with behaviors characteristic of ads (e.g. HTML autoplay of both audio and video). Approximately 9% of all online page views come from browsers with ad-blocking software installed, and some publishers have 40%+ of their visitors using ad-blockers.
Anti-targeting technologies: Some web browsers offer privacy modes where users can hide information about themselves from publishers and advertisers. Among other consequences, advertisers can't use cookies to serve targeted ads to private browsers. Most major browsers have incorporated Do Not Track options into their browser headers, but the regulations currently are only enforced by the honor system.
Privacy concerns: The collection of user information by publishers and advertisers has raised consumer concerns about their privacy. Sixty percent of Internet users would use Do Not Track technology to block all collection of information if given an opportunity. Over half of all Google and Facebook users are concerned about their privacy when using Google and Facebook, according to Gallup.
Many consumers have reservations about online behavioral targeting. By tracking users' online activities, advertisers are able to understand consumers quite well. Advertisers often use technology, such as web bugs and re-spawning cookies, to maximizing their abilities to track consumers.
According to a 2011 survey conducted by Harris Interactive, over half of Internet users had a negative impression of online behavioral advertising, and forty percent feared that their personally-identifiable information had been shared with advertisers without their consent.
Consumers can be especially troubled by advertisers targeting them based on sensitive information, such as financial or health status. Furthermore, some advertisers attach the MAC address of users' devices to their 'demographic profiles' so they can be re-targeted (regardless of the accuracy of the profile) even if the user clears their cookies and browsing history.
Trustworthiness of advertisers: Scammers can take advantage of consumers' difficulties verifying an online persona's identity, leading to artifices like phishing (where scam emails look identical to those from a well-known brand owner) and confidence schemes like the Nigerian "419" scam.
The Internet Crime Complaint Center received 289,874 complaints in 2012, totaling over half a billion dollars in losses, most of which originated with scam ads.
Consumers also face malware risks, i.e. malvertising, when interacting with online advertising. Cisco's 2013 Annual Security Report revealed that clicking on ads was 182 times more likely to install a virus on a user's computer than surfing the Internet for porn.
For example, in August 2014 Yahoo's advertising network reportedly saw cases of infection of a variant of Cryptolocker ransomware.
Spam: The Internet's low cost of disseminating advertising contributes to spam, especially by large-scale spammers. Numerous efforts have been undertaken to combat spam, ranging from blacklists to regulatorily-required labeling to content filters, but most of those efforts have adverse collateral effects, such as mistaken filtering.
Regulation:
In general, consumer protection laws apply equally to online and offline activities. However, there are questions over which jurisdiction's laws apply and which regulatory agencies have enforcement authority over trans-border activity.
As with offline advertising, industry participants have undertaken numerous efforts to self-regulate and develop industry standards or codes of conduct. Several United States advertising industry organizations jointly published Self-Regulatory Principles for Online Behavioral Advertising based on standards proposed by the FTC in 2009.
European ad associations published a similar document in 2011. Primary tenets of both documents include consumer control of data transfer to third parties, data security, and consent for collection of certain health and financial data. Neither framework, however, penalizes violators of the codes of conduct.
Privacy and data collection:
Privacy regulation can require users' consent before an advertiser can track the user or communicate with the user. However, affirmative consent ("opt in") can be difficult and expensive to obtain. Industry participants often prefer other regulatory schemes.
Different jurisdictions have taken different approaches to privacy issues with advertising. The United States has specific restrictions on online tracking of children in the Children's Online Privacy Protection Act (COPPA), and the FTC has recently expanded its interpretation of COPPA to include requiring ad networks to obtain parental consent before knowingly tracking kids.
Otherwise, the U.S. Federal Trade Commission frequently supports industry self-regulation, although increasingly it has been undertaking enforcement actions related to online privacy and security. The FTC has also been pushing for industry consensus about possible Do Not Track legislation.
In contrast, the European Union's "Privacy and Electronic Communications Directive" restricts websites' ability to use consumer data much more comprehensively. The EU limitations restrict targeting by online advertisers; researchers have estimated online advertising effectiveness decreases on average by around 65% in Europe relative to the rest of the world.
Delivery methods:
Many laws specifically regulate the ways online ads are delivered. For example, online advertising delivered via email is more regulated than the same ad content delivered via banner ads. Among other restrictions, the U.S. CAN-SPAM Act of 2003 requires that any commercial email provide an opt-out mechanism.
Similarly, mobile advertising is governed by the Telephone Consumer Protection Act of 1991 (TCPA), which (among other restrictions) requires user opt-in before sending advertising via text messaging.
See Also:
It includes the following:
- email marketing,
- search engine marketing (SEM),
- social media marketing,
- many types of display advertising (including web banner advertising),
- and mobile advertising.
Like other advertising media, online advertising frequently involves both a publisher, who integrates advertisements into its online content, and an advertiser, who provides the advertisements to be displayed on the publisher's content.
Other potential participants include advertising agencies who help generate and place the ad copy, an ad server which technologically delivers the ad and tracks statistics, and advertising affiliates who do independent promotional work for the advertiser.
In 2016, Internet advertising revenues in the United States surpassed those of cable television and broadcast television. In 2017, Internet advertising revenues in the United States totaled $83.0 billion, a 14% increase over the $72.50 billion in revenues in 2016.
Many common online advertising practices are controversial and increasingly subject to regulation. Online ad revenues may not adequately replace publishers' other revenue streams.
Declining ad revenue has led some publishers to hide their content behind paywalls.
Delivery Methods:
Display Advertising:
Display advertising conveys its advertising message visually using text, logos, animations, videos, photographs, or other graphics. Display advertisers frequently target users with particular traits to increase the ads' effect.
Online advertisers (typically through their ad servers) often use cookies, which are unique identifiers of specific computers, to decide which ads to serve to a particular consumer. Cookies can track whether a user left a page without buying anything, so the advertiser can later retarget the user with ads from the site the user visited.
As advertisers collect data across multiple external websites about a user's online activity, they can create a detailed profile of the user's interests to deliver even more targeted advertising. This aggregation of data is called behavioral targeting. Advertisers can also target their audience by using contextual advertising to deliver display ads related to the content of the web page where the ads appear.
Re-targeting, behavioral targeting, and contextual advertising all are designed to increase an advertiser's return on investment, or ROI, over untargeted ads.
Advertisers may also deliver ads based on a user's suspected geography through geotargeting.
A user's IP address communicates some geographic information (at minimum, the user's country or general region). The geographic information from an IP can be supplemented and refined with other proxies or information to narrow the range of possible locations. For example, with mobile devices, advertisers can sometimes use a phone's GPS receiver or the location of nearby mobile towers.
Cookies and other persistent data on a user's machine may help narrow the user's location further.
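The retargeting and geotargeting logic described above can be illustrated with a short sketch. The following is a minimal, hypothetical example rather than any particular ad server's implementation; the profile store, ad inventory, cookie identifier, and region codes are all invented for illustration.

```python
# Minimal sketch of cookie- plus geo-based ad selection.
# The profile store, ad inventory, and region codes are hypothetical.

PROFILES = {
    # keyed by a cookie identifier set when the user visited an advertiser's site
    "cookie-123": {"interests": {"guitars", "travel"}, "seen": {"ad-42"}},
}

ADS = [
    {"id": "ad-42", "topic": "guitars", "regions": {"US", "CA"}},
    {"id": "ad-77", "topic": "travel", "regions": {"US"}},
    {"id": "ad-00", "topic": "generic", "regions": set()},  # untargeted fallback
]

def choose_ad(cookie_id, region):
    """Prefer an ad matching the visitor's behavioral profile and region,
    skipping ads the visitor has already been shown."""
    profile = PROFILES.get(cookie_id, {"interests": set(), "seen": set()})
    for ad in ADS:
        in_region = not ad["regions"] or region in ad["regions"]
        unseen = ad["id"] not in profile["seen"]
        if in_region and unseen and ad["topic"] in profile["interests"]:
            return ad
    return ADS[-1]  # no behavioral match: fall back to a run-of-site ad

print(choose_ad("cookie-123", "US"))       # -> the travel ad (the guitars ad was already seen)
print(choose_ad("unknown-visitor", "DE"))  # -> the generic fallback
```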
Web banner advertising:
Web banners or banner ads typically are graphical ads displayed within a web page. Many banner ads are delivered by a central ad server.
Banner ads can use rich media to incorporate video, audio, animations, buttons, forms, or other interactive elements using Java applets, HTML5, Adobe Flash, and other programs.
Frame ad (traditional banner):
Frame ads were the first form of web banners. The colloquial usage of "banner ads" often refers to traditional frame ads. Website publishers incorporate frame ads by setting aside a particular space on the web page. The Interactive Advertising Bureau's Ad Unit Guidelines propose standardized pixel dimensions for ad units.
Pop-ups/pop-unders:
A pop-up ad is displayed in a new web browser window that opens above a website visitor's initial browser window. A pop-under ad opens a new browser window under a website visitor's initial browser window. Pop-under ads and similar technologies are now advised against by online authorities such as Google, who state that they "do not condone this practice".
Floating ad:
A floating ad, or overlay ad, is a type of rich media advertisement that appears superimposed over the requested website's content. Floating ads may disappear or become less obtrusive after a preset time period.
Expanding ad:
An expanding ad is a rich media frame ad that changes dimensions upon a predefined condition, such as a preset amount of time a visitor spends on a webpage, the user's click on the ad, or the user's mouse movement over the ad. Expanding ads allow advertisers to fit more information into a restricted ad space.
Trick banners:
A trick banner is a banner ad where the ad copy imitates some screen element users commonly encounter, such as an operating system message or popular application message, to induce ad clicks.
Trick banners typically do not mention the advertiser in the initial ad, and thus they are a form of bait-and-switch. Trick banners commonly attract a higher-than-average click-through rate, but tricked users may resent the advertiser for deceiving them.
News Feed Ads:
"News Feed Ads", also called "Sponsored Stories", "Boosted Posts", typically exist on social media platforms that offer a steady stream of information updates ("news feed") in regulated formats (i.e. in similar sized small boxes with a uniform style). Those advertisements are intertwined with non-promoted news that the users are reading through.
These advertisements can be of any content, such as promoting a website, a fan page, an app, or a product. Some examples are:
- Facebook's "Sponsored Stories"
- LinkedIn's "Sponsored Updates",
- and Twitter's "Promoted Tweets".
This display ad format falls into its own category because, unlike banner ads, which are quite distinguishable, News Feed Ads blend well into non-paid news updates. This format of online advertising yields much higher click-through rates than traditional display ads.
Display advertising process overview:
The process by which online advertising is displayed can involve many parties. In the simplest case, the website publisher selects and serves the ads. Publishers which operate their own advertising departments may use this method.
The ads may be outsourced to an advertising agency under contract with the publisher, and served from the advertising agency's servers.
Alternatively, ad space may be offered for sale in a bidding market using an ad exchange and real-time bidding. This involves many parties interacting automatically in real time. In response to a request from the user's browser, the publisher content server sends the web page content to the user's browser over the Internet.
The page does not yet contain ads, but contains links which cause the user's browser to connect to the publisher ad server to request that the spaces left for ads be filled in with ads. Information identifying the user, such as cookies and the page being viewed, is transmitted to the publisher ad server.
The publisher ad server then communicates with a supply-side platform server. The publisher is offering ad space for sale, so they are considered the supplier. The supply side platform also receives the user's identifying information, which it sends to a data management platform. At the data management platform, the user's identifying information is used to look up demographic information, previous purchases, and other information of interest to advertisers.
Broadly speaking, there are three types of data obtained through such a data management platform:
- First party data refers to the data retrieved from customer relationship management (CRM) platforms, in addition to website and paid media content or cross-platform data. This can include data from customer behaviors, actions or interests.
- Second party data refers to an amalgamation of statistics related to cookie pools on external publications and platforms. The data is provided directly from the source (ad servers, hosted solutions for social media, or an analytics platform). It is also possible to negotiate a deal with a particular publisher to secure specific data points or audiences.
- Third party data is sourced from external providers and often aggregated from numerous websites. Businesses sell third-party data and are able to share this via an array of distribution avenues.
This customer information is combined and returned to the supply side platform, which can now package up the offer of ad space along with information about the user who will view it. The supply side platform sends that offer to an ad exchange.
The ad exchange puts the offer out for bid to demand-side platforms. Demand side platforms act on behalf of ad agencies, who sell ads which advertise brands. Demand side platforms thus have ads ready to display, and are searching for users to view them.
Bidders get the information about the user ready to view the ad, and decide, based on that information, how much to offer to buy the ad space. According to the Internet Advertising Bureau, a demand side platform has 10 milliseconds to respond to an offer. The ad exchange picks the winning bid and informs both parties.
The ad exchange then passes the link to the ad back through the supply side platform and the publisher's ad server to the user's browser, which then requests the ad content from the agency's ad server. The ad agency can thus confirm that the ad was delivered to the browser.
This is simplified, according to the IAB. Exchanges may try to unload unsold ("remnant") space at low prices through other exchanges. Some agencies maintain semi-permanent pre-cached bids with ad exchanges, and those may be examined before going out to additional demand side platforms for bids.
The process for mobile advertising is different and may involve mobile carriers and handset software manufacturers.
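A highly simplified sketch of the exchange side of this flow appears below, modeled as a first-price auction. The demand-side bidding rules, segment names, and CPM prices are hypothetical assumptions; a real exchange performs these steps over the network and enforces a response deadline on each bidder, as noted above.

```python
# Highly simplified sketch of the exchange-side flow described above.
# DSP bidding rules, segment names, and prices are hypothetical; a real
# exchange runs this over the network and enforces a per-bidder deadline.

def make_dsp(name, segment_prices):
    """A toy demand-side platform: bids a fixed CPM when the offered user
    falls into a segment one of its campaigns targets, otherwise passes."""
    def bid(offer):
        price = segment_prices.get(offer["user_segment"])
        if price is None:
            return None
        return {"dsp": name, "price_cpm": price, "creative": f"{name}-creative-1"}
    return bid

def run_auction(offer, bidders):
    """Exchange side: collect bids and pick the highest one
    (a first-price auction, for simplicity; real exchanges vary)."""
    bids = [b for b in (bidder(offer) for bidder in bidders) if b is not None]
    return max(bids, key=lambda b: b["price_cpm"]) if bids else None

# Offer assembled by the supply-side platform from publisher and DMP data.
offer = {"page": "https://example.com/article", "user_segment": "recent_site_visitor"}

bidders = [
    make_dsp("dsp_a", {"recent_site_visitor": 4.50}),
    make_dsp("dsp_b", {"auto_intender": 6.00}),  # no campaign matches this user
]

winner = run_auction(offer, bidders)
print(winner)  # the winning bid is passed back so the browser can fetch the creative
```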
Interstitial:
An interstitial ad displays before a user can access requested content, sometimes while the user is waiting for the content to load. Interstitial ads are a form of interruption marketing.
Text ads:
A text ad displays text-based hyperlinks. Text-based ads may display separately from a web page's primary content, or they can be embedded by hyperlinking individual words or phrases to advertiser's websites. Text ads may also be delivered through email marketing or text message marketing. Text-based ads often render faster than graphical ads and can be harder for ad-blocking software to block.
Search engine marketing (SEM):
Search engine marketing, or SEM, is designed to increase a website's visibility in search engine results pages (SERPs). Search engines provide sponsored results and organic (non-sponsored) results based on a web searcher's query. Search engines often employ visual cues to differentiate sponsored results from organic results. Search engine marketing includes all of an advertiser's actions to make a website's listing more prominent for topical keywords.
Search engine optimization (SEO):
Search engine optimization, or SEO, attempts to improve a website's organic search rankings in SERPs by increasing the website content's relevance to search terms. Search engines regularly update their algorithms to penalize poor quality sites that try to game their rankings, making optimization a moving target for advertisers. Many vendors offer SEO services.
Sponsored search:
Sponsored search (also called sponsored links, search ads, or paid search) allows advertisers to be included in the sponsored results of a search for selected keywords. Search ads are often sold via real-time auctions, where advertisers bid on keywords. In addition to setting a maximum price per keyword, bids may include time, language, geographical, and other constraints.
Search engines originally sold listings in order of highest bids. Modern search engines rank sponsored listings based on a combination of bid price, expected click-through rate, keyword relevancy and site quality.
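One way to picture the modern ranking described above is as a score that multiplies the bid by estimates of click-through rate and quality. The sketch below uses that assumption with invented candidate figures; it shows how a lower bid with a higher expected click-through rate can still win the top position.

```python
# Minimal sketch of ranking sponsored listings by bid x expected CTR x quality.
# The candidate advertisers, bids, and estimates are invented for illustration.
candidates = [
    {"advertiser": "A", "bid": 2.00, "expected_ctr": 0.010, "quality": 0.6},
    {"advertiser": "B", "bid": 1.20, "expected_ctr": 0.030, "quality": 0.9},
    {"advertiser": "C", "bid": 3.00, "expected_ctr": 0.004, "quality": 0.4},
]

def ad_rank(c):
    # Expected value per impression, discounted by a relevancy/quality factor.
    return c["bid"] * c["expected_ctr"] * c["quality"]

for c in sorted(candidates, key=ad_rank, reverse=True):
    print(c["advertiser"], round(ad_rank(c), 5))
# B outranks A and C despite bidding less, because its expected click-through
# rate and quality factor are higher -- the behavior the paragraph above describes.
```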
Social media marketing:
Social media marketing is commercial promotion conducted through social media websites.
Many companies promote their products by posting frequent updates and providing special offers through their social media profiles.
Mobile advertising:
Mobile advertising is ad copy delivered through wireless mobile devices such as smartphones, feature phones, or tablet computers. Mobile advertising may take the form of static or rich media display ads, SMS (Short Message Service) or MMS (Multimedia Messaging Service) ads, mobile search ads, advertising within mobile websites, or ads within mobile applications or games (such as interstitial ads, "advergaming," or application sponsorship).
Industry groups such as the Mobile Marketing Association have attempted to standardize mobile ad unit specifications, similar to the IAB's efforts for general online advertising.
Mobile advertising is growing rapidly for several reasons. There are more mobile devices in the field, connectivity speeds have improved (which, among other things, allows for richer media ads to be served quickly), screen resolutions have advanced, mobile publishers are becoming more sophisticated about incorporating ads, and consumers are using mobile devices more extensively.
The Interactive Advertising Bureau predicts continued growth in mobile advertising with the adoption of location-based targeting and other technological features not available or relevant on personal computers.
In July 2014 Facebook reported advertising revenue for the June 2014 quarter of $2.68 billion, an increase of 67 percent over the second quarter of 2013. Of that, mobile advertising revenue accounted for around 62 percent, an increase of 41 percent on the previous year.
As of 2016, 14% of marketers used live videos for advertising.
Email advertising:
Email advertising is ad copy comprising an entire email or a portion of an email message. Email marketing may be unsolicited, in which case the sender may give the recipient an option to opt out of future emails, or it may be sent with the recipient's prior consent (opt-in).
Chat advertising:
As opposed to static messaging, chat advertising refers to real-time messages delivered to users on certain sites. This is done through live chat software or tracking applications installed on certain websites, with the operating personnel behind the site often pushing adverts to the traffic browsing the site. In practice this is a subset of email advertising, distinguished mainly by its real-time window.
Online classified advertising:
Online classified advertising is advertising posted online in a categorical listing of specific products or services. Examples include online job boards, online real estate listings, automotive listings, online yellow pages, and online auction-based listings. Craigslist and eBay are two prominent providers of online classified listings.
Adware:
Adware is software that, once installed, automatically displays advertisements on a user's computer. The ads may appear in the software itself, integrated into web pages visited by the user, or in pop-ups/pop-unders. Adware installed without the user's permission is a type of malware.
Affiliate marketing:
Affiliate marketing occurs when advertisers organize third parties to generate potential customers for them. Third-party affiliates receive payment based on sales generated through their promotion.
Affiliate marketers generate traffic to offers from affiliate networks, and when the desired action is taken by the visitor, the affiliate earns a commission. These desired actions can be an email submission, a phone call, filling out an online form, or an online order being completed.
Content marketing:
Content marketing is any marketing that involves the creation and sharing of media and publishing content in order to acquire and retain customers. This information can be presented in a variety of formats, including blogs, news, video, white papers, e-books, infographics, case studies, how-to guides and more.
Considering that most marketing involves some form of published media, it is almost (though not entirely) redundant to call 'content marketing' anything other than simply 'marketing'.
There are, of course, other forms of marketing (in-person marketing, telephone-based marketing, word of mouth marketing, etc.) where the label is more useful for identifying the type of marketing. However, even these are usually merely presenting content that they are marketing as information in a way that is different from traditional print, radio, TV, film, email, or web media.
Online marketing platform:
Online marketing platform (OMP) is an integrated web-based platform that combines the benefits of a business directory, local search engine, search engine optimization (SEO) tool, customer relationship management (CRM) package and content management system (CMS).
eBay and Amazon are used as online marketing and logistics management platforms. Retail online marketing is also conducted on Facebook, Twitter, YouTube, Pinterest, LinkedIn, and other social media. Online business marketing platforms such as Marketo, Aprimo, MarketBright and Pardot have been bought by major IT companies (Eloqua-Oracle, Neolane-Adobe and Unica-IBM).
Unlike television marketing, in which Nielsen TV Ratings can be relied upon for viewing metrics, online advertisers do not have an independent party to verify viewing claims made by the big online platforms.
Compensation methods:
Main article: Compensation methods
Advertisers and publishers use a wide range of payment calculation methods. In 2012, advertisers calculated 32% of online advertising transactions on a cost-per-impression basis, 66% on customer performance (e.g. cost per click or cost per acquisition), and 2% on hybrids of impression and performance methods.
CPM (cost per mille):
Cost per mille, often abbreviated to CPM, means that advertisers pay for every thousand displays of their message to potential customers (mille is the Latin word for thousand). In the online context, ad displays are usually called "impressions." Definitions of an "impression" vary among publishers, and some impressions may not be charged because they don't represent a new exposure to an actual customer. Advertisers can use technologies such as web bugs to verify if an impression is actually delivered.
Publishers use a variety of techniques to increase page views, such as dividing content across multiple pages, repurposing someone else's content, using sensational titles, or publishing tabloid or sexual content.
CPM advertising is susceptible to "impression fraud," and advertisers who want visitors to their sites may not find per-impression payments a good proxy for the results they desire.
CPC (cost per click):
CPC (Cost Per Click) or PPC (Pay per click) means advertisers pay each time a user clicks on the ad. CPC advertising works well when advertisers want visitors to their sites, but it's a less accurate measurement for advertisers looking to build brand awareness. CPC's market share has grown each year since its introduction, eclipsing CPM to dominate two-thirds of all online advertising compensation methods.
Like impressions, not all recorded clicks are valuable to advertisers. GoldSpot Media reported that up to 50% of clicks on static mobile banner ads were accidental and resulted in redirected visitors leaving the new site immediately.
CPE (cost per engagement):
Cost per engagement aims to track not just that an ad unit loaded on the page (i.e., an impression was served), but also that the viewer actually saw and/or interacted with the ad.
CPV (cost per view):
Cost per view (CPV) means advertisers pay for each view of their video advertising. Both Google and TubeMogul endorsed this standardized CPV metric to the IAB's (Interactive Advertising Bureau) Digital Video Committee, and it is garnering a notable amount of industry support. CPV is the primary benchmark used in YouTube advertising campaigns, as part of Google's AdWords platform.
CPI (cost per install):
The CPI compensation method is specific to mobile applications and mobile advertising. In CPI ad campaigns, brands are charged a fixed or bid rate only when the application is installed.
Attribution of ad value:
Main article: Attribution (marketing)
In marketing, "attribution" is the measurement of effectiveness of particular ads in a consumer's ultimate decision to purchase. Multiple ad impressions may lead to a consumer "click" or other action. A single action may lead to revenue being paid to multiple ad space sellers.
Other performance-based compensation:
CPA (Cost Per Action or Cost Per Acquisition) or PPP (Pay Per Performance) advertising means the advertiser pays for the number of users who perform a desired activity, such as completing a purchase or filling out a registration form.
Performance-based compensation can also incorporate revenue sharing, where publishers earn a percentage of the advertiser's profits made as a result of the ad. Performance-based compensation shifts the risk of failed advertising onto publishers.
Fixed cost:
Fixed cost compensation means advertisers pay a fixed cost for delivery of ads online, usually over a specified time period, irrespective of the ad's visibility or users' response to it.
One example is CPD (cost per day) where advertisers pay a fixed cost for publishing an ad for a day irrespective of impressions served or clicks.
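The arithmetic behind these pricing models is straightforward. The sketch below compares what a single hypothetical campaign would cost under CPM, CPC, CPA, and CPD rates; every rate and campaign figure is invented for illustration.

```python
# Sketch comparing campaign cost under the pricing models in this section.
# All rates and campaign figures are invented for illustration.
campaign = {"impressions": 500_000, "clicks": 4_000, "conversions": 120, "days": 14}

rates = {
    "CPM": 2.50,   # per 1,000 impressions
    "CPC": 0.40,   # per click
    "CPA": 12.00,  # per conversion (action)
    "CPD": 75.00,  # fixed cost per day, regardless of response
}

costs = {
    "CPM": campaign["impressions"] / 1000 * rates["CPM"],
    "CPC": campaign["clicks"] * rates["CPC"],
    "CPA": campaign["conversions"] * rates["CPA"],
    "CPD": campaign["days"] * rates["CPD"],
}

for model, cost in costs.items():
    print(f"{model}: ${cost:,.2f}")
# CPM: $1,250.00   CPC: $1,600.00   CPA: $1,440.00   CPD: $1,050.00
```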
Benefits of Online Advertising:
Cost:
The low costs of electronic communication reduce the cost of displaying online advertisements compared to offline ads. Online advertising, and in particular social media, provides a low-cost means for advertisers to engage with large established communities.
Advertising online can offer better returns than other media.
Measurability:
Online advertisers can collect data on their ads' effectiveness, such as the size of the potential audience or actual audience response, how a visitor reached their advertisement, whether the advertisement resulted in a sale, and whether an ad actually loaded within a visitor's view. This helps online advertisers improve their ad campaigns over time.
Formatting:
Advertisers have a wide variety of ways of presenting their promotional messages, including the ability to convey images, video, audio, and links. Unlike many offline ads, online ads also can be interactive. For example, some ads let users input queries or let users follow the advertiser on social media. Online ads can even incorporate games.
Targeting:
Publishers can offer advertisers the ability to reach customizable and narrow market segments for targeted advertising. Online advertising may use geo-targeting to display relevant advertisements to the user's geography.
Advertisers can customize each individual ad to a particular user based on the user's previous preferences. Advertisers can also track whether a visitor has already seen a particular ad in order to reduce unwanted repetitious exposures and provide adequate time gaps between exposures.
Coverage:
Online advertising can reach nearly every global market, and online advertising influences offline sales.
Speed:
Once ad design is complete, online ads can be deployed immediately. The delivery of online ads does not need to be linked to the publisher's publication schedule. Furthermore, online advertisers can modify or replace ad copy more rapidly than their offline counterparts.
Concerns:
Security concerns:
According to a US Senate investigation, the current state of online advertising endangers the security and privacy of users.
Banner blindness:
Eye-tracking studies have shown that Internet users often ignore web page zones likely to contain display ads (sometimes called "banner blindness"), and this problem is worse online than in offline media. On the other hand, studies suggest that even those ads "ignored" by the users may influence the user subconsciously.
Fraud on the advertiser:
There are numerous ways that advertisers can be overcharged for their advertising. For example, click fraud occurs when a publisher or third party clicks (manually or through automated means) on a CPC ad with no legitimate buying intent, such as when a competitor clicks on ads to deplete its rival's advertising budget or when publishers attempt to manufacture revenue.
Click fraud is especially associated with pornography sites. In 2011, certain scamming porn websites launched dozens of hidden pages on each visitor's computer, forcing the visitor's computer to click on hundreds of paid links without the visitor's knowledge.
As with offline publications, online impression fraud can occur when publishers overstate the number of ad impressions they have delivered to their advertisers. To combat impression fraud, several publishing and advertising industry associations are developing ways to count online impressions credibly.
Technological variations:
Heterogeneous clients: Because users have different operating systems, web browsers and computer hardware (including mobile devices and different screen sizes), online ads may appear to users differently from how the advertiser intended, or the ads may not display properly at all.
A 2012 comScore study revealed that, on average, 31% of ads were not "in-view" when rendered, meaning they never had an opportunity to be seen. Rich media ads create even greater compatibility problems, as some developers may use competing (and exclusive) software to render the ads (see e.g. Comparison of HTML 5 and Flash).
Furthermore, advertisers may encounter legal problems if legally required information doesn't actually display to users, even if that failure is due to technological heterogeneity.
In the United States, the FTC has released a set of guidelines indicating that it's the advertisers' responsibility to ensure the ads display any required disclosures or disclaimers, irrespective of the users' technology.
Ad blocking: Ad blocking, or ad filtering, means the ads do not appear to the user because the user uses technology to screen out ads. Many browsers block unsolicited pop-up ads by default.
Other software programs or browser add-ons may also block the loading of ads, or block elements on a page with behaviors characteristic of ads (e.g. HTML autoplay of both audio and video). Approximately 9% of all online page views come from browsers with ad-blocking software installed, and some publishers have 40%+ of their visitors using ad-blockers.
Anti-targeting technologies: Some web browsers offer privacy modes where users can hide information about themselves from publishers and advertisers. Among other consequences, advertisers can't use cookies to serve targeted ads to private browsers. Most major browsers have incorporated Do Not Track options into their browser headers, but the regulations currently are only enforced by the honor system.
Privacy concerns: The collection of user information by publishers and advertisers has raised consumer concerns about their privacy. Sixty percent of Internet users would use Do Not Track technology to block all collection of information if given an opportunity. Over half of all Google and Facebook users are concerned about their privacy when using Google and Facebook, according to Gallup.
Many consumers have reservations about online behavioral targeting. By tracking users' online activities, advertisers are able to understand consumers quite well. Advertisers often use technology, such as web bugs and re-spawning cookies, to maximize their ability to track consumers.
According to a 2011 survey conducted by Harris Interactive, over half of Internet users had a negative impression of online behavioral advertising, and forty percent feared that their personally-identifiable information had been shared with advertisers without their consent.
Consumers can be especially troubled by advertisers targeting them based on sensitive information, such as financial or health status. Furthermore, some advertisers attach the MAC address of users' devices to their 'demographic profiles' so they can be re-targeted (regardless of the accuracy of the profile) even if the user clears their cookies and browsing history.
Trustworthiness of advertisers: Scammers can take advantage of consumers' difficulties verifying an online persona's identity, leading to artifices like phishing (where scam emails look identical to those from a well-known brand owner) and confidence schemes like the Nigerian "419" scam.
The Internet Crime Complaint Center received 289,874 complaints in 2012, totaling over half a billion dollars in losses, most of which originated with scam ads.
Consumers also face malware risks, i.e. malvertising, when interacting with online advertising. Cisco's 2013 Annual Security Report revealed that clicking on ads was 182 times more likely to install a virus on a user's computer than surfing the Internet for porn.
For example, in August 2014 Yahoo's advertising network reportedly saw cases of infection of a variant of Cryptolocker ransomware.
Spam: The Internet's low cost of disseminating advertising contributes to spam, especially by large-scale spammers. Numerous efforts have been undertaken to combat spam, ranging from blacklists to regulatorily-required labeling to content filters, but most of those efforts have adverse collateral effects, such as mistaken filtering.
Regulation:
In general, consumer protection laws apply equally to online and offline activities. However, there are questions over which jurisdiction's laws apply and which regulatory agencies have enforcement authority over trans-border activity.
As with offline advertising, industry participants have undertaken numerous efforts to self-regulate and develop industry standards or codes of conduct. Several United States advertising industry organizations jointly published Self-Regulatory Principles for Online Behavioral Advertising based on standards proposed by the FTC in 2009.
European ad associations published a similar document in 2011. Primary tenets of both documents include consumer control of data transfer to third parties, data security, and consent for collection of certain health and financial data. Neither framework, however, penalizes violators of the codes of conduct.
Privacy and data collection:
Privacy regulation can require users' consent before an advertiser can track the user or communicate with the user. However, affirmative consent ("opt in") can be difficult and expensive to obtain. Industry participants often prefer other regulatory schemes.
Different jurisdictions have taken different approaches to privacy issues with advertising. The United States has specific restrictions on online tracking of children in the Children's Online Privacy Protection Act (COPPA), and the FTC has recently expanded its interpretation of COPPA to include requiring ad networks to obtain parental consent before knowingly tracking kids.
Otherwise, the U.S. Federal Trade Commission frequently supports industry self-regulation, although increasingly it has been undertaking enforcement actions related to online privacy and security. The FTC has also been pushing for industry consensus about possible Do Not Track legislation.
In contrast, the European Union's "Privacy and Electronic Communications Directive" restricts websites' ability to use consumer data much more comprehensively. The EU limitations restrict targeting by online advertisers; researchers have estimated online advertising effectiveness decreases on average by around 65% in Europe relative to the rest of the world.
Delivery methods:
Many laws specifically regulate the ways online ads are delivered. For example, online advertising delivered via email is more regulated than the same ad content delivered via banner ads. Among other restrictions, the U.S. CAN-SPAM Act of 2003 requires that any commercial email provide an opt-out mechanism.
Similarly, mobile advertising is governed by the Telephone Consumer Protection Act of 1991 (TCPA), which (among other restrictions) requires user opt-in before sending advertising via text messaging.
See Also:
- Adblock
- Advertising
- Advertising campaign
- Advertising management
- Advertising media
- Branded entertainment
- Direct marketing
- Integrated marketing communications
- Marketing communications
- Media planning
- Promotion (marketing)
- Promotional mix
- Promotional campaign
- Product placement
- Promotional merchandise
- Sales promotion
Targeted Advertising, including "How Ads Follow You from Phone to Desktop to Tablet" (by MIT Technology Review 7/1/2015)
YouTube Video: The Future of Targeted Advertising in Digital Media by Bloomberg
"Many consumers search on mobile devices but buy on computers, giving advertisers the incentive to track them across multiple screens.
Imagine you slack off at work and read up online about the latest Gibson 1959 Les Paul electric guitar replica. On the way home, you see an ad for the same model on your phone, reminding you this is “the most desirable Les Paul ever.” Then before bed on your tablet, you see another ad with new details about the guitar.
You may think the guitar gods have singled you out—it is your destiny to own this instrument!
For advertisers, the process is divine in its own right. Over the past year, companies have substantially and successfully stepped up repeat ad targeting to the same user across home and work computers, smart phones and tablets. With little fanfare, the strategy is fast becoming the new norm.
“You really have a convergence of three or four different things that are creating a tremendous amount of change,” says Philip Smolin, senior vice president for strategy at California-based digital advertising agency Turn. “There may be one wave that is small and it doesn’t move your boat very much, but when you have three or four medium size waves that all converge at the same time, then it becomes a massive wave, and that is happening right now.”
One of these recent waves has been greater sophistication of companies engaged in “probabilistic matching,” the study of millions of Web users to determine who is likely to be the same person across devices. For example, Drawbridge, which specializes in matching users across devices, says it has linked 1.2 billion users across 3.6 billion devices—up from 1.5 billion devices just a year ago.
Another trend making all this matching possible is the continuing transformation of Internet advertising into a marketplace of instant decisions, based on what companies know about the user.
Firms you have never heard of, such as Drawbridge, Crosswise, and Tapad, learn about your devices and your interests by tracking billions of ad requests a day from Internet ad exchanges selling in real time. Potential buyers see the user’s device, IP address, browser, and other details, information that allows for a sort of fingerprinting. “We are getting very smart about associating the anonymous identifiers across the various devices,” says Kamakshi Sivaramakrishnan, founder and CEO of Drawbridge.
For example, a cell phone and tablet accessing the same IP address at home would be one clue, as would searches for the same product. You might look for “Chevy Cruze” on your phone and then search Edmunds.com for the same thing on your laptop. The same geographic location of the searches within a short time period, combined with other information, might suggest the same user.
In the last six months or so, these companies say they have sharply increased the accuracy of probabilistic matching. A Nielsen survey of Drawbridge data released in April found 97.3 percent accuracy in linking two or more devices; an earlier Nielsen survey of Tapad found 91.2 percent accuracy.
Another wave feeding the fast growth of cross-platform advertising is the stampede onto mobile devices. Just last month Google announced that users in the United States, Japan, and eight other countries now use mobile devices for more than half of their searches. U.S. mobile traffic soared 63 percent in 2014 alone, according to a report from Cisco.
Many consumers search on mobile devices but buy on larger-screen computers, giving advertisers ever more incentive to track across multiple screens. Ad agencies are also breaking down traditional walls between video, mobile, and display teams to forge a more integrated approach.
For example, Turn recently worked with an auto insurer’s campaign that started with a video ad on one platform and then moved to display ads on other devices. The results of such efforts are promising. Drawbridge says it ran a cross-platform campaign for women’s sandals in the middle of winter for a major fashion retailer and achieved three times greater response than traditional Internet advertising.
People who prefer not to be tracked can take some countermeasures, especially against what is called deterministic tracking. Signing into Google or Facebook as well as websites and apps using those logins confirms which devices you own.
So you can log off Facebook, Google, and other accounts, use different e-mail addresses to confuse marketers and use masking software such as Blur. “But the probabilistic stuff is really hard to stop because it is like all the detritus of one’s daily activities,” says Andrew Sudbury, chief technology officer and cofounder of privacy company Abine, which makes Blur.
People can opt out of Internet tracking through an industry program called AdChoices, but few know about it or bother.
Advertisers stress they match potential buyers across platforms without gathering individual names. “People freak out over retargeting. People think someone is watching them. No one is watching anyone. The machine has a number,” says Roland Cozzolino, chief technology officer at MediaMath, a digital advertising company that last year bought Tactads, a cross-device targeting agency. “I don’t know who you are, I don’t know any personal information about you. I just know that these devices are controlled by the same user.”
Companies that go too far risk the wrath of customers. Verizon generated headlines such as “Verizon’s super-cookies are a super privacy violation” earlier this year when the public learned that the carrier plants unique identifying codes dubbed “supercookies” on Web pages. Verizon now explains the process on its website and allows an opt-out.
Drawbridge recently started tracking smart televisions and cable boxes, but advertisers on the whole are cautiously approaching targeted TV commercials, even as many expect such ad personalization in the future. Industry officials say they want to turn up the temperature slowly on the frogs in the pot of advertising, lest they leap out and prod regulation."
[End of Article]
___________________________________________________________________________
Targeted Advertising (by Wikipedia)
Targeted advertising is a form of advertising in which online advertisers use sophisticated methods to reach the most receptive audiences, identified by certain traits, based on the product or person the advertiser is promoting. These traits can be demographic, focusing on race, economic status, sex, age, level of education, income, and employment, or psychographic, based on the consumer's values, personality, attitudes, opinions, lifestyle, and interests.
They can also be behavioral variables, such as browser history, purchase history, and other recent activity. Because targeted advertising is focused on specific traits, the message reaches consumers who are likely to have a strong preference for the product, rather than those with no interest whose preferences do not match the product's attributes, which reduces wasted advertising.
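As a rough sketch of how these trait categories can feed a targeting decision, the following Python example (profiles, field names, and campaign rules are all invented for illustration) filters a toy audience on demographic, psychographic, and behavioral criteria so that only matching consumers receive the ad.

    # Toy profiles mixing demographic, psychographic, and behavioral traits.
    profiles = [
        {"age": 34, "income": "high",   "interests": {"guitars", "vinyl"}, "recent_views": ["les-paul-replica"]},
        {"age": 52, "income": "medium", "interests": {"gardening"},        "recent_views": []},
        {"age": 27, "income": "high",   "interests": {"guitars"},          "recent_views": ["guitar-strings"]},
    ]

    campaign = {
        "min_age": 25,
        "max_age": 45,
        "required_interest": "guitars",                  # psychographic trait
        "behavioral_keywords": {"les-paul", "guitar"},   # behavioral signal
    }

    def is_target(profile, campaign):
        if not (campaign["min_age"] <= profile["age"] <= campaign["max_age"]):
            return False                                 # demographic filter
        if campaign["required_interest"] not in profile["interests"]:
            return False                                 # psychographic filter
        # Behavioral filter: any recently viewed item mentioning a campaign keyword.
        return any(kw in item
                   for item in profile["recent_views"]
                   for kw in campaign["behavioral_keywords"])

    targets = [p for p in profiles if is_target(p, campaign)]
    print(targets)   # only consumers whose traits match receive the message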
Traditional forms of advertising, including billboards, newspapers, magazines, and radio, are progressively being replaced by online advertisements. The information and communication technology (ICT) space has transformed in recent years, and targeted advertising now stretches across ICT channels such as the web, IPTV, and mobile environments.
In next-generation advertising, the importance of targeted advertisements will radically increase as they spread cohesively across numerous ICT channels.
With the emergence of new online channels, the need for targeted advertising is growing, because companies aim to minimize wasted advertising by means of information technology.
Most targeted new-media advertising currently uses second-order proxies for targeting, such as tracking a consumer's online or mobile web activity, associating historical web-page demographics with new visitors to a page, using a search word as the basis for implied interest, or contextual advertising.
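A simple way to picture a second-order proxy is keyword or contextual matching: interest is inferred from the words a user just typed or the page they are reading, rather than from a stored personal profile. The Python sketch below is illustrative only; the keyword lists and ad copy are invented.

    # Invented keyword-to-category map used as a proxy for implied interest.
    CATEGORY_KEYWORDS = {
        "automotive":  {"chevy", "cruze", "sedan", "mpg"},
        "music_gear":  {"gibson", "les", "paul", "guitar"},
        "real_estate": {"condo", "mortgage", "listing"},
    }

    ADS_BY_CATEGORY = {
        "automotive":  "2015 compact sedan clearance",
        "music_gear":  "1959 Les Paul replica - limited run",
        "real_estate": "New condo listings near you",
    }

    def infer_categories(text):
        words = set(text.lower().split())
        return [cat for cat, kws in CATEGORY_KEYWORDS.items() if words & kws]

    def pick_ads(text):
        return [ADS_BY_CATEGORY[c] for c in infer_categories(text)]

    print(pick_ads("gibson les paul replica review"))   # interest implied from search words
    print(pick_ads("chevy cruze mpg"))                   # contextual match, no profile needed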
Types of Targeted Advertising:
Web services are continually generating new business ventures and revenue opportunities for internet corporations. Companies are rapidly developing technological capabilities that allow them to gather information about web users. By tracking and monitoring which websites users visit, internet service providers can directly show ads that are relevant to the consumer's preferences.
Much of the time, this targeting is based on the user's most recent search. For example, a user who searches for real estate will then be flooded with real-estate ads, whether or not they are still interested or have already made a purchase, because many systems simply echo the last search performed. Most of today's websites use these targeting technologies to track users' internet behavior, and there is much debate over the privacy issues this raises.
Search engine marketing:
Further information: Search engine marketing
Search engine marketing uses search engines to reach target audiences. For example, Google's remarketing campaigns are a form of targeted advertising in which a website identifies computers that have previously visited it and re-markets its ads specifically to those users, either as they browse sites that are part of the Google Display Network or as they search on Google for keywords related to the product or service.
Dynamic remarketing can improve this targeting further, because the ads themselves can include the specific products or services that the consumer previously viewed on the advertiser's website.
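To show what makes remarketing "dynamic", the sketch below assembles ad copy from the specific items a visitor viewed earlier, rather than serving a generic banner; the catalogue, visitor log, and cookie identifiers are invented for illustration and are not Google's actual remarketing interface.

    # Invented catalogue and per-visitor view history, keyed by a cookie ID.
    CATALOG = {
        "sku-101": {"name": "Promotional pens (box of 50)", "price": 19.99},
        "sku-202": {"name": "Les Paul replica guitar",      "price": 899.00},
    }

    viewed_by_visitor = {
        "cookie-abc": ["sku-202"],
        "cookie-xyz": ["sku-101", "sku-202"],
    }

    def build_dynamic_ad(cookie_id):
        """Build ad copy featuring the products this visitor previously viewed."""
        skus = viewed_by_visitor.get(cookie_id, [])
        if not skus:
            return "Visit our store for this season's best sellers"   # generic fallback
        items = [f"{CATALOG[s]['name']} - ${CATALOG[s]['price']:.2f}" for s in skus]
        return "Still thinking it over? " + " | ".join(items)

    print(build_dynamic_ad("cookie-abc"))   # ad built from the viewed guitar
    print(build_dynamic_ad("cookie-new"))   # unknown visitor gets the generic ad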
Google AdWords offers different networks that determine where ads appear. The Search Network displays ads on 'Google Search, other Google sites such as Maps and Shopping, and hundreds of non-Google search partner websites that show AdWords ads matched to search results'.
'The Display Network includes a collection of Google websites (like Google Finance, Gmail, Blogger, and YouTube), partner sites, and mobile sites and apps that show AdWords ads matched to the content on a given page.' These two kinds of advertising network suit different goals and types of company. For example, the Search Network can benefit a company whose goal is to reach consumers searching for a particular product or service.
Advertising campaigns can also target users through their browser history and search history. For example, if a user types 'promotional pens' into a search engine such as Google, ads for promotional pens will appear at the top of the page, above the organic results.