Copyright © 2015 Bert N. Langford (Images may be subject to copyright. Please send feedback)
Welcome to Our Generation USA!
Below, we cover all aspects of
Modern Medicine
including the Medical Profession as well as ailments and cures, whether physical or mental in cause and treatment
See also: "Health & Fitness"
The Human Genome Project
YouTube Video: Cracking the Code Of Life | PBS Nova | 2001
The Human Genome Project (HGP) is an international scientific research project with the goal of determining the sequence of chemical base pairs which make up human DNA, and of identifying and mapping all of the genes of the human genome from both a physical and functional standpoint.
It remains the world's largest collaborative biological project. After the US government picked up the idea in 1984, planning began; the project formally launched in 1990 and was declared complete in 2003.
Funding came from the US government through the National Institutes of Health as well as numerous other groups from around the world. A parallel project was conducted outside of government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centers in the United States, the United Kingdom, Japan, France, Germany, and China.
The Human Genome Project originally aimed to map the nucleotides contained in a human haploid reference genome (more than three billion). The "genome" of any given individual is unique; mapping "the human genome" involves sequencing multiple variations of each gene.
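To make concrete what it means to work with a sequence of A's, C's, G's and T's, here is a toy Python sketch that tallies the bases in a made-up DNA fragment and computes its GC content, a summary statistic sequencing projects routinely report. The fragment and function names are illustrative only, not part of the HGP's actual tooling.

```python
from collections import Counter

def base_composition(seq: str) -> dict:
    """Count each nucleotide (A, C, G, T) in a DNA sequence."""
    counts = Counter(seq.upper())
    return {base: counts.get(base, 0) for base in "ACGT"}

def gc_content(seq: str) -> float:
    """Fraction of bases that are G or C, a common sequencing statistic."""
    comp = base_composition(seq)
    total = sum(comp.values())
    return (comp["G"] + comp["C"]) / total if total else 0.0

# A made-up 20-base fragment, purely for illustration.
fragment = "ATGCGTACCTTAGGCAATGC"
print(base_composition(fragment))   # {'A': 5, 'C': 5, 'G': 5, 'T': 5}
print(gc_content(fragment))         # 0.5
```

The real project did this across more than three billion base pairs, which is why assembling and storing the reference genome was itself a major computational undertaking.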
A Decade of Advancements Since the Human Genome Project
YouTube Video: Sequencing of the Human Genome to Treat Cancer - Mayo Clinic
A View by Susan Young Rojahn of MIT Technology Review.
April 12, 2013: "This Sunday, the National Institutes of Health will celebrate the 10th anniversary of the completion of the Human Genome Project.
Since the end of the 13-year and $3-billion effort to determine the sequence of a human genome (a mosaic of genomes from several people in this case), there have been some impressive advances in technology and biological understanding and the dawn of a new branch of medicine: medical genomics.
Today, sequencing a human genome can cost less than $5,000 and take only a day or two. This means genome analysis is now in the cost range of a sophisticated medical test, said Eric Green, director of the National Human Genome Research Institute, in a teleconference on Friday. Doctors can now use DNA analysis to diagnose challenging cases, such as mysterious neurodevelopmental disorders, mitochondrial disease, or other diseases of unknown origin in children (see “Making Genome Sequencing Part of Clinical Care”). In such cases, genomic analysis can identify disease-causing mutations 19 percent to 33 percent of the time, according to a recent analysis.
Genomics is possibly making its biggest strides in cancer medicine. Doctors can now sequence a patient’s tumor to identify the best treatments. Specific drug targets may be found in as many as 70 percent of tumors (see “Foundation Medicine: Personalizing Cancer Drugs” and “Cancer Genomics”).
The dropping price of DNA sequencing is also changing prenatal care. A pregnant woman now has the option to eschew amniocentesis or other invasive methods for checking for chromosome aberrations in her fetus. Instead, she can get a simple blood draw (see “A Brave New World of Prenatal DNA Sequencing”).
Before the Human Genome Project, researchers knew the genetic basis of about 60 disorders. Today, they know the basis of nearly 5,000 conditions. Prescriptions are also changing because of genomics. More than 100 different FDA-approved drugs are now packaged with genomic information that tells doctors to test their patients for genetic variants linked to efficacy, dosages or risky side-effects.
But the work for human genome scientists is hardly over. There are still regions of the human genome yet to be sequenced. Most of these still unyielding regions are in parts of chromosomes that are full of complex, repetitive sequence, said Green. The much larger challenge will be to decipher what all the A’s, T’s, G’s and C’s in the human genome mean. Only a small proportion of the genome encodes for proteins and there is ongoing debate as to how much of the remainder is functional or just junk or redundant sequences.
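The "deciphering" challenge Green describes begins with the genetic code itself: in protein-coding regions, each triplet of bases (a codon) specifies an amino acid. The sketch below translates a short coding sequence using a deliberately partial codon table (the full standard table has 64 entries); the example sequence is invented for illustration.

```python
# A tiny, partial codon table (standard genetic code); the full table has 64 entries.
CODON_TABLE = {
    "ATG": "M",                          # methionine, the usual start codon
    "TTT": "F", "TTC": "F",              # phenylalanine
    "GGA": "G", "GGC": "G",              # glycine
    "TGG": "W",                          # tryptophan
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate(dna: str) -> str:
    """Translate a coding DNA sequence codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")  # '?' = codon missing from our partial table
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTGGATGGTAA"))  # → "MFGW"
```

The hard part is not this mechanical translation but determining which stretches of the genome are coding at all, and what the vast non-coding remainder does.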
And many scientists agree that the advances in medical genomics are just the tip of the iceberg— much more work lies ahead to fully harness genomic information to improve patient health.
You can see more of the numbers behind the advances in genome science since the Human Genome Project in a chart published by the NHGRI on Friday (see chart above).
(For rest of article, click on title above).
10 Top Medical Breakthroughs
Exciting advances to health care are coming next year by Candy Sagon, AARP Bulletin, December 2015
YouTube Video: Top 10 Medical Breakthroughs according to WatchMojo
Pictured: A mind-controlled robotic limb may help those who suffer from paralysis in the future. — Zackary Canepari/ The New York Times/Redux
A game-changing new clot retriever for stroke victims. A way to restore movement by linking the brain to paralyzed limbs. New vaccines for our deadliest cancers. These will be the biggest medical advancements for 2016, according to three nationally known experts: Francis Collins, M.D., director of the National Institutes of Health; Michael Roizen, M.D., director of the Cleveland Clinic's Wellness Institute; and pathologist Michael Misialek, M.D., of Tufts University School of Medicine and Newton-Wellesley Hospital. Here are their top predictions.
1. Personalized treatment
What if you could get medical care tailored just for you, based on genetic information? The Precision Medicine Initiative, set to launch in 2016, would revolutionize how we treat disease, taking it from a one-size-fits-all approach to one fitted to each patient's unique differences. The project, Collins says, hopes to answer such important questions as: Why does a medical treatment work for some people with the same disease but not for others? One million volunteers will be asked to share their genomic and other molecular information with researchers working to understand factors that influence health.
2. Busting brain clots
A tiny new tool to get rid of brain clots will be a "game changer for stroke treatment," Roizen says. Although clot-dissolving drugs are the first line of treatment for the most common type of stroke, when they don't do the trick, physicians can now use a stent retriever to remove the small mass. A wire with a mesh-like tip is guided through the artery into the brain to grab the clot. When the wire is removed, the clot goes with it. The American Heart Association has given the device its strongest recommendation, after studies found it improves the odds that certain patients will survive and function normally again.
3. Brain-limb connection
The goal of linking the brain to a robotic limb to help paralyzed people and amputees move naturally is closer than ever. Previously, brain implants to help move these artificial limbs have had limited success because the person had to think about each individual action (open hand, close hand) in order to move the limb. But now researchers are putting implants in the area of the brain that controls the intention to move, not just the physical movement itself. This allows patients to fluidly guide a robotic limb. "We think this will get FDA approval in the next year and will have a major clinical impact on paralyzed patients," Roizen says.
4. Fine-tuning genes
Next year could see major strides toward the goal of cutting harmful genetic mistakes from a person's DNA. A gene-editing tool is a powerful technology that allows scientists to easily correct or edit DNA. "It basically gives you a scissors to cut out pieces of genes," Roizen explains. The technology was recently used to eradicate leukemia in a British child by giving her gene-edited immune cells to fight off the disease. This could represent a huge step toward treating other diseases, including correcting gene mutations that cause inherited diseases, but ethicists and scientists worry the technology could also be used to alter traits for nonmedical reasons.
5. New cancer vaccines
Your body's immune system fights off germs that cause infections — could it be taught to fight off cancer cells? That's the idea behind new immunotherapy cancer vaccines, which train the immune system to use its antiviral fighting response to destroy cancer cells without harming healthy cells. The Food and Drug Administration (FDA) already has approved such vaccines for the treatment of advanced prostate cancer and melanoma. Current research is focused on pairing new and old vaccines, including the tetanus vaccine with a newer cancer vaccine to treat glioblastoma, a type of brain cancer. Those who received the dual vaccine lived three to seven more years after treatment than those who received an injection without the tetanus portion. Among the most eagerly anticipated vaccines in 2016, Misialek says, is a lung cancer vaccine. Work on such a vaccine, first developed in Cuba, is already underway here.
6. Faster public health vaccines
In the wake of the Ebola outbreak last year, the process for producing vaccines to protect the public against a possible epidemic has gone into warp drive — a feat that makes it the Cleveland Clinic's top medical innovation for 2016. The fear that, with international travel, a single person could potentially infect huge numbers of people "has led to a system to be able to immunize a large number of people really fast," Roizen explains. Where previously it took decades to develop a vaccine, the Ebola vaccine was ready in six months. A similarly accelerated process was behind the approval of a meningitis B vaccine following outbreaks at two universities.
7. Targeting cancer
Researchers are closing in on the day when a single drug will treat many different cancers. While traditional clinical trials focus on testing a drug for a particular type of cancer based on its location — breast or lung, for example — new studies are testing therapies that target a specific genetic mutation found in a tumor, regardless of where the cancer originated. "We're throwing all cancers with the same mutation in one basket," pathologist Misialek says, and then testing a drug that targets that mutation. Results of an international basket trial found that a drug focused on a single genetic mutation can be effective across multiple cancer types, including a common form of lung cancer and a rare form of bone cancer.
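The basket-trial idea (group patients by the mutation in their tumor rather than by tumor site) is easy to sketch in code. The patient records below are hypothetical; only the grouping logic is the point.

```python
from collections import defaultdict

# Hypothetical patient records: (patient_id, tissue_of_origin, driver_mutation).
patients = [
    ("p1", "lung",  "BRAF V600E"),
    ("p2", "colon", "BRAF V600E"),
    ("p3", "lung",  "EGFR L858R"),
    ("p4", "bone",  "BRAF V600E"),
]

def basket_by_mutation(records):
    """Group patients by driver mutation, ignoring tumor site,
    mirroring the 'basket trial' design described above."""
    baskets = defaultdict(list)
    for pid, site, mutation in records:
        baskets[mutation].append(pid)
    return dict(baskets)

print(basket_by_mutation(patients))
# The "BRAF V600E" basket mixes lung, colon and bone tumors,
# all candidates for the same mutation-targeted drug.
```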
8. Wearable sensors
Wearable health sensors could soon change the way people with chronic diseases, such as diabetes, heart disease and asthma, control and monitor their condition. Drug companies and researchers are looking into using wearable technology to monitor patients more accurately in clinical trials, and hospitals and outpatient clinics could use it to monitor patients after discharge. Devices, from stick-on sensors to wristbands and special clothing, can already be used to monitor respiratory and heart rates, including EKG readings, as well as body temperature and glucose level. Stress levels and inflammation readings may be next, Roizen says.
9. Fighting superbugs
Decoding the DNA of bacteria is transforming the way we identify infectious diseases in the microbiology lab. That could help prevent the spread of dangerous, drug-resistant infections in hospitals and nursing homes, says Misialek. "Instead of identifying bugs in the traditional way and waiting days or weeks for the results, we can analyze genes and get exact identification sooner," he adds. That could help protect vulnerable patients against the spread of superbugs like MRSA, C. diff and Streptococcus pneumoniae that currently resist treatment.
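Conceptually, gene-based identification amounts to matching sequencing reads against known marker sequences. The toy sketch below does this with invented marker strings; real pipelines use curated reference databases and far more sophisticated alignment algorithms.

```python
# Toy illustration of pathogen identification by sequence matching.
# The marker strings here are invented, NOT real genomic signatures.
KNOWN_MARKERS = {
    "MRSA": "GGTACTGCA",
    "C. difficile": "AATCGGTTA",
}

def identify(read: str):
    """Return the first pathogen whose marker appears in the read, else None."""
    for pathogen, marker in KNOWN_MARKERS.items():
        if marker in read:
            return pathogen
    return None

print(identify("TTGGTACTGCAAT"))  # → "MRSA" (read contains the MRSA marker)
print(identify("ACGTACGT"))       # → None (no known marker found)
```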
10. Better cancer screening
Some of the deadliest, most difficult-to-find cancers may be detected by analyzing blood for abnormal proteins. Cancer produces these abnormal protein structures. Up to now, blood tests weren't sensitive enough to identify them. A new type of analysis allows researchers to find the proteins earlier, which means cancer treatment can be started at an earlier stage. Cleveland Clinic experts predict this will lead to more effective tests for pancreatic cancer, considered the most deadly type with only about 7 percent surviving five years, as well as for prostate and ovarian cancer. A new test trial using protein analysis identified twice as many cases of ovarian cancer at an earlier stage than current tests.
At the bottom of the AARP article linked above, you can click on a health video about a new breakthrough in the treatment of Parkinson's disease.
Immunology and Immunotherapy:
The list of cancers being taken down by immunotherapy keeps growing (as reported by the Washington Post on April 19, 2016)
YouTube Video: "Immunotherapy Fights Cancer"
Pictured: From the Article: “In immunotherapy, T-cells from a cancer patient can be genetically engineered to be specific to a tumor. Dendritic cells (lower left) are special cells that help the immune system recognize the cancer cells and initiate a response. Drugs called "checkpoint inhibitors" also can spur the immune system. (Science Source)”
Washington Post Article By Laurie McGinley April 19 at 4:00 PM
"NEW ORLEANS — New immunotherapy drugs are showing significant and extended effectiveness against a broadening range of cancers, including rare and intractable tumors often caused by viruses. Researchers say these advances suggest the treatment approach is poised to become a critical part of the nation’s anti-cancer strategy.
Scientists reported Tuesday on two new studies showing that the medications, which marshal the body’s own immune defenses, are now proving effective against recurrent, difficult-to-treat head and neck cancer and an extremely lethal skin cancer called Merkel cell carcinoma. The diseases can be caused by viruses as well as DNA mutations, and the data show that the drugs help the immune system to recognize and attack cancers resulting from either cause.
The new studies appear to be the first to find that "virus-driven cancers can be amenable to treatment by immunotherapy," said Paul Nghiem, an investigator with the Fred Hutchinson Cancer Research Center in Seattle who led the Merkel cell study. Since viruses and other pathogens are responsible for more than 20 percent of all cancers, “these results have implications that go far beyond" Merkel cell carcinoma, which affects about 2,000 Americans a year, he said.
The new data, plus research released Sunday that showed sharply higher survival rates among advanced-melanoma patients who received immunotherapy, is prompting growing albeit guarded optimism among researchers attending the American Association for Cancer Research annual meeting here. In addition to melanoma, the infusion drugs already have been approved for use against lung and kidney cancers...."
(Click Here for Rest of Washington Post Article)
___________________________________________________________________________
Below, we cover the underlying Medical Sciences of Immunology and Immunotherapy:
Excerpted from Stanford University: "Immunology & Immunotherapy of Cancer Program":
The Program's two major goals are:
- To understand the nature of the immune system and its response to malignancies.
- To explore auto- and allo-immune responses to cancer with the goal of enabling the discovery and development of more effective anti-tumor immunotherapy.
These goals will be achieved by fostering collaborative research, advancing the latest technologies to probe immunological mechanisms, and by enhancing the infrastructure for clinical translation.
Research by program members has resulted in exciting new developments in both understanding immune function and developing novel therapies. Advances include the development and application of CyTOF and high-throughput sequencing for evaluating cellular function and responses and the translation of important concepts to the clinic in promising early phase clinical trials.
___________________________________________________________________________
Immunology is a branch of biology that covers the study of immune systems in all organisms. The Russian biologist Ilya Ilyich Mechnikov advanced studies on immunology and received the Nobel Prize for his work in 1908. He pinned small thorns into starfish larvae and noticed unusual cells surrounding the thorns. This was the active response of the body trying to maintain its integrity. It was Mechnikov who first observed the phenomenon of phagocytosis, in which the body defends itself against a foreign body, and coined the term.
Immunology charts, measures, and contextualizes:
- the physiological functioning of the immune system in states of both health and disease;
- malfunctions of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, immune deficiency, and transplant rejection);
- and the physical, chemical and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo.
Immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, virology, bacteriology, parasitology, psychiatry, and dermatology.
Prior to the designation of immunity, from the etymological root immunis (Latin for "exempt"), early physicians characterized organs that would later be proven to be essential components of the immune system.
The important lymphoid organs of the immune system are the thymus and bone marrow, and chief lymphatic tissues such as spleen, tonsils, lymph vessels, lymph nodes, adenoids, and liver.
When health conditions worsen to emergency status, portions of immune system organs including the thymus, spleen, bone marrow, lymph nodes and other lymphatic tissues can be surgically excised for examination while patients are still alive.
Many components of the immune system are typically cellular in nature and not associated with any specific organ; but rather are embedded or circulating in various tissues located throughout the body.
Immunotherapy is the "treatment of disease by inducing, enhancing, or suppressing an immune response".
Immunotherapies designed to elicit or amplify an immune response are classified as activation immunotherapies, while immunotherapies that reduce or suppress are classified as suppression immunotherapies. In recent years, immunotherapy has become of great interest to researchers, clinicians and pharmaceutical companies, particularly in its promise to treat various forms of cancer.
Immunomodulatory regimens often have fewer side effects than existing drugs, including less potential for creating resistance when treating microbial disease.
Cell-based immunotherapies are effective for some cancers. Immune effector cells such as lymphocytes, macrophages, dendritic cells, natural killer (NK) cells, and cytotoxic T lymphocytes (CTLs) work together to defend the body against cancer by targeting abnormal antigens expressed on the surface of tumor cells.
Therapies such as granulocyte colony-stimulating factor (G-CSF), interferons, imiquimod and cellular membrane fractions from bacteria are licensed for medical use. Others including IL-2, IL-7, IL-12, various chemokines, synthetic cytosine phosphate-guanosine (CpG) oligodeoxynucleotides and glucans are involved in clinical and preclinical studies.
"NEW ORLEANS — New immunotherapy drugs are showing significant and extended effectiveness against a broadening range of cancers, including rare and intractable tumors often caused by viruses. Researchers say these advances suggest the treatment approach is poised to become a critical part of the nation’s anti-cancer strategy.
Scientists reported Tuesday on two new studies showing that the medications, which marshal the body’s own immune defenses, are now proving effective against recurrent, difficult-to-treat head and neck cancer and an extremely lethal skin cancer called Merkel cell carcinoma. The diseases can be caused by viruses as well as DNA mutations, and the data show that the drugs help the immune system to recognize and attack cancers resulting from either cause.
The new studies appear to be the first to find that "virus-driven cancers can be amenable to treatment by immunotherapy," said Paul Nghiem, an investigator with the Fred Hutchinson Cancer Research Center in Seattle who led the Merkel cell study. Since viruses and other pathogens are responsible for more than 20 percent of all cancers, “these results have implications that go far beyond" Merkel cell carcinoma, which affects about 2,000 Americans a year, he said.
The new data, plus research released Sunday that showed sharply higher survival rates among advanced-melanoma patients who received immunotherapy, is prompting growing albeit guarded optimism among researchers attending the American Association for Cancer Research annual meeting here. In addition to melanoma, the infusion drugs already have been approved for use against lung and kidney cancers...."
(Click Here for Rest of Washington Post Article)
___________________________________________________________________________
Below, we cover the underlying Medical Sciences of Immunology and Immunotherapy:
Excerpted from Stanford University: "Immunology & Immunotherapy of Cancer Program":
The Program's two major goals are:
- To understand the nature of the immune system and its response to malignancies.
- To explore auto- and allo-immune responses to cancer with the goal of enabling the discovery and development of more effective anti-tumor immunotherapy.
These goals will be achieved by fostering collaborative research, advancing the latest technologies to probe immunological mechanisms, and by enhancing the infrastructure for clinical translation.
Research by program members has resulted in exciting new developments in both understanding immune function and developing novel therapies. Advances include the development and application of CyTOF and high-throughput sequencing for evaluating cellular function and responses and the translation of important concepts to the clinic in promising early phase clinical trials.
___________________________________________________________________________
Immunology is a branch of biology that covers the study of immune systems in all organisms. The Russian biologist Ilya Ilyich Mechnikov advanced studies on immunology and received the Nobel Prize for his work in 1908. He pinned small thorns into starfish larvae and noticed unusual cells surrounding the thorns. This was the active response of the body trying to maintain its integrity. It was Mechnikov who first observed the phenomenon of phagocytosis, in which the body defends itself against a foreign body, and coined the term.
Immunology charts, measures, and contextualizes:
- the physiological functioning of the immune system in states of both health and disease;
- malfunctions of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, immune deficiency, and transplant rejection);
- and the physical, chemical and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo.
Immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, virology, bacteriology, parasitology, psychiatry, and dermatology.
Prior to the designation of immunity, from the etymological root immunis (Latin for "exempt"), early physicians characterized organs that would later be proven to be essential components of the immune system.
The important lymphoid organs of the immune system are the thymus and bone marrow, and chief lymphatic tissues such as spleen, tonsils, lymph vessels, lymph nodes, adenoids, and liver.
When health conditions worsen to emergency status, portions of immune system organs including the thymus, spleen, bone marrow, lymph nodes and other lymphatic tissues can be surgically excised for examination while patients are still alive.
Many components of the immune system are cellular in nature and not associated with any specific organ, but rather are embedded or circulating in various tissues located throughout the body.
Click on any of the following blue hyperlinks for more about Immunology:
- Classical immunology
- Clinical immunology
- Developmental immunology
- Ecoimmunology and behavioural immunity
- Immunotherapy: see next topic
- Diagnostic immunology
- Cancer immunology
- Reproductive immunology
- Theoretical immunology
- Immunologist
- Career in immunology
- See also:
- History of immunology
- Immunomics
- International Reviews of Immunology
- List of immunologists
- Osteoimmunology
- Outline of immunology
- American Association of Immunologists
- British Society for Immunology
- Annual Review of Immunology journal
- BMC: Immunology at BioMed Central, an open-access journal publishing original peer-reviewed research articles.
- Nature reviews: Immunology
- The Immunology Database and Analysis Portal, a NIAID-funded database resource.
- Federation of Clinical Immunology Societies
Immunotherapy is the "treatment of disease by inducing, enhancing, or suppressing an immune response".
Immunotherapies designed to elicit or amplify an immune response are classified as activation immunotherapies, while immunotherapies that reduce or suppress are classified as suppression immunotherapies. In recent years, immunotherapy has become of great interest to researchers, clinicians and pharmaceutical companies, particularly in its promise to treat various forms of cancer.
Immunomodulatory regimens often have fewer side effects than existing drugs, including less potential for creating resistance when treating microbial disease.
Cell-based immunotherapies are effective for some cancers. Immune effector cells such as lymphocytes, macrophages, dendritic cells, natural killer cells (NK Cell), cytotoxic T lymphocytes (CTL), etc., work together to defend the body against cancer by targeting abnormal antigens expressed on the surface of tumor cells.
Therapies such as granulocyte colony-stimulating factor (G-CSF), interferons, imiquimod and cellular membrane fractions from bacteria are licensed for medical use. Others including IL-2, IL-7, IL-12, various chemokines, synthetic cytosine phosphate-guanosine (CpG) oligodeoxynucleotides and glucans are involved in clinical and preclinical studies.
Click on any of the following blue hyperlinks for more about Immunotherapy:
- Immunomodulators
- Activation immunotherapies
- Suppression immunotherapies
- Helminthic therapies
- See also:
- Biological response modifiers
- Interleukin-2 immunotherapy
- Microtransplantation
- Langreth, Robert (12 February 2009). "Cancer Miracles". Forbes.
- International Society for Biological Therapy of Cancer
Medical Ultrasound Technology
YouTube Video: Diagnostic Ultrasonography Procedure
Pictured: Portable Ultrasound Machine
Medical ultrasound (also known as diagnostic sonography or ultrasonography) is a diagnostic imaging technique based on the application of ultrasound.
It is used to see internal body structures such as tendons, muscles, joints, vessels, and internal organs. Its aim is often to find the source of a disease or to exclude pathology. The practice of examining pregnant women using ultrasound is called obstetric ultrasound and is widely used, but ultrasound is also used for non-invasive examination of other organs, such as the liver.
Ultrasound consists of sound waves with frequencies higher than those audible to humans (>20,000 Hz). Ultrasonic images, also known as sonograms, are made by sending pulses of ultrasound into tissue using a probe.
The sound echoes off the tissue, with different tissues reflecting varying degrees of sound. These echoes are recorded and displayed as an image to the operator.
Many different types of images can be formed using sonographic instruments. The most well-known type is a B-mode image, which displays the acoustic impedance of a two-dimensional cross-section of tissue. Other types of image can display blood flow, motion of tissue over time, the location of blood, the presence of specific molecules, the stiffness of tissue, or the anatomy of a three-dimensional region.
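The pulse-echo principle described above lends itself to a short numeric sketch. Assuming the conventional soft-tissue speed of sound of about 1540 m/s (a standard scanner calibration value; the numbers here are illustrative only), the depth of a reflector follows from half the round-trip echo time:

```python
# Sketch of the pulse-echo depth calculation behind ultrasound imaging.
# Assumes the conventional soft-tissue speed of sound, ~1540 m/s.

SPEED_OF_SOUND_M_S = 1540.0

def echo_depth_cm(round_trip_time_us: float) -> float:
    """Depth (cm) of a reflector given the round-trip echo time in microseconds."""
    round_trip_s = round_trip_time_us * 1e-6
    depth_m = SPEED_OF_SOUND_M_S * round_trip_s / 2.0  # halved: out and back
    return depth_m * 100.0

# An echo returning after ~65 microseconds comes from roughly 5 cm deep.
print(f"{echo_depth_cm(65.0):.1f} cm")
```

This is also why deeper imaging limits the achievable frame rate: echoes from deeper structures simply take longer to return before the next pulse can be sent.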
Compared to other prominent methods of medical imaging, ultrasound has several advantages. It provides images in real-time, it is portable and can be brought to the bedside, it is substantially lower in cost, and it does not use harmful ionizing radiation.
Drawbacks of ultrasonography include various limits on its field of view including patient cooperation and physique, difficulty imaging structures behind bone and air, and its dependence on a skilled operator.
Brain-Computer Interface
YouTube Video: EEG mind controlled wheelchair - UC Berkeley ME102B/ME135 (demo)
Pictured: LEFT: The groundbreaking mind-controlled BIONIC ARM that plugs into the body and has given wearers back their sense of touch; RIGHT: The First Ever Bionic Leg is powered by man’s brain.
A brain–computer interface (BCI), sometimes called a mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device.
BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
Research on BCIs began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature.
The field of BCI research and development has since focused primarily on neuroprosthetics applications that aim at restoring damaged hearing, sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels.
Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.
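As a toy illustration of the signal-to-command loop a BCI implements (the feature, thresholds, and command names below are invented for illustration; real systems decode many channels with trained classifiers), consider mapping a single EEG band-power feature to a device command:

```python
# Toy BCI control loop: map a (simulated) relative alpha-band power
# reading to a wheelchair command.  Thresholds and commands are invented
# for illustration; real BCIs use trained decoders over many channels.

def classify_alpha_power(alpha_power: float) -> str:
    """Map relative alpha power (0..1) to a simple command."""
    if alpha_power > 0.7:    # strong alpha: user relaxed -> stop
        return "stop"
    if alpha_power < 0.3:    # suppressed alpha: user engaged -> go
        return "forward"
    return "hold"            # ambiguous reading: keep current state

readings = [0.82, 0.55, 0.21]
print([classify_alpha_power(r) for r in readings])  # ['stop', 'hold', 'forward']
```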
For further amplification about this topic, click here.
Advancements in Neuro-imaging Technology
YouTube Video: New center advances biomedical and brain imaging
Pictured: Click for an article about the use of neuroimaging: From the National Institute of Mental Health: Neuroimaging and Mental Illness: A Window Into the Brain
Neuroimaging or brain imaging is the use of various techniques to either directly or indirectly image the structure, function, or pharmacology of the nervous system.
It is a relatively new discipline within medicine and neuroscience/psychology.
Physicians who specialize in the performance and interpretation of neuroimaging in the clinical setting are neuroradiologists.
Neuroimaging falls into two broad categories:
- Structural imaging, which deals with the structure of the nervous system and the diagnosis of gross (large scale) intracranial disease (such as tumor), and injury, and
- Functional imaging, which is used to diagnose metabolic diseases and lesions on a finer scale (such as Alzheimer's disease) and also for neurological and cognitive psychology research and building brain-computer interfaces.
Functional imaging enables, for example, the processing of information by centers in the brain to be visualized directly. Such processing causes the involved area of the brain to increase metabolism and "light up" on the scan. One of the more controversial uses of neuroimaging has been research into "thought identification" or mind-reading.
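The "light up" idea can be sketched as a toy task-versus-rest contrast. Real functional-imaging analysis involves modeling, smoothing, and statistics far beyond this, and the numbers below are invented; the sketch only shows the underlying comparison:

```python
# Toy task-vs-rest contrast: voxels whose signal rises during a task
# beyond a threshold are the ones that "light up" on a functional scan.

def activation_map(task, rest, threshold=1.0):
    """Indices of voxels whose task-minus-rest difference exceeds threshold."""
    return [i for i, (t, r) in enumerate(zip(task, rest)) if t - r > threshold]

task_signal = [5.0, 9.5, 5.2, 8.8]  # mean signal per voxel during the task
rest_signal = [5.1, 7.0, 5.0, 7.1]  # mean signal per voxel at rest
print(activation_map(task_signal, rest_signal))  # [1, 3]
```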
For further amplification, click on any of the following:
- History
- Indications
- Brain imaging techniques:
  - Computed axial tomography
  - Diffuse optical imaging
  - Event-related optical signal
  - Magnetic resonance imaging
  - Functional magnetic resonance imaging
  - Magnetoencephalography
  - Positron emission tomography
  - Single-photon emission computed tomography
  - Cranial ultrasound
  - Comparison of imaging types
- See also
- References
- External links
Advancements in Radiology
YouTube Video: MRI Brain Sequences - radiology video tutorial
Pictured: Los Angeles Times May 12, 2016 issue: Radiologists use MRIs to find biomarker for Alzheimer's disease
Radiology is a medical specialty that uses imaging to diagnose and treat diseases seen within the body.
A variety of imaging techniques are used to diagnose and treat diseases, including:
- X-ray radiography,
- ultrasound,
- computed tomography (CT),
- nuclear medicine including positron emission tomography (PET),
- and magnetic resonance imaging (MRI)
Interventional radiology is the performance of (usually minimally invasive) medical procedures with the guidance of imaging technologies.
The acquisition of medical imaging is usually carried out by the radiographer, often known as a radiologic technologist.
Depending on location, the diagnostic radiologist, or reporting radiographer, then interprets or "reads" the images and produces a report of their findings and impression or diagnosis. This report is then transmitted to the clinician who requested the imaging, either routinely or emergently.
Imaging exams are stored digitally in the picture archiving and communication system (PACS) where they can be viewed by all members of the healthcare team within the same health system and compared later on with future imaging exams.
Click on any of the following hyperlinks for amplification:
- Diagnostic imaging modalities
- Interventional radiology
- Teleradiology
- Professional training
- See also
- References
- External links
Advancements in Organ Transplants
YouTube Video: Donation and Transplantation: How does it work?
Pictured: How Organ Transplants Work
Organ transplantation is the moving of an organ from one body to another, or from a donor site to another location on the same person's body, to replace the recipient's damaged or absent organ.
Organs and/or tissues that are transplanted within the same person's body are called autografts. Transplants performed between two subjects of the same species are called allografts.
Allografts can either be from a living or cadaveric source.
Organs that can be transplanted are the heart, kidneys, liver, lungs, pancreas, intestine, and thymus. Tissues include bones, tendons (both referred to as musculoskeletal grafts), cornea, skin, heart valves, nerves and veins.
Worldwide, the kidneys are the most commonly transplanted organs, followed by the liver and then the heart. Cornea and musculoskeletal grafts are the most commonly transplanted tissues; these outnumber organ transplants by more than tenfold.
Organ donors may be living, brain dead, or dead via circulatory death. Tissue may be recovered from donors who die of circulatory death, as well as of brain death – up to 24 hours past the cessation of heartbeat.
Unlike organs, most tissues (with the exception of corneas) can be preserved and stored for up to five years, meaning they can be "banked". Transplantation raises a number of bioethical issues, including the definition of death, when and how consent should be given for an organ to be transplanted, and payment for organs for transplantation.
Other ethical issues include transplantation tourism and more broadly the socio-economic context in which organ procurement or transplantation may occur. A particular problem is organ trafficking.
Some organs, such as the brain, cannot be transplanted.
Transplantation medicine is one of the most challenging and complex areas of modern medicine. Some of the key areas for medical management are the problems of transplant rejection, during which the body has an immune response to the transplanted organ, possibly leading to transplant failure and the need to immediately remove the organ from the recipient.
When possible, transplant rejection can be reduced through serotyping to determine the most appropriate donor-recipient match and through the use of immunosuppressant drugs.
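As a deliberately simplified sketch of the serotyping idea (real allocation also weighs blood type, crossmatch results, waiting time, urgency, and geography; the antigen labels below are just examples), donor-recipient matching can be thought of as counting HLA antigen mismatches:

```python
# Toy HLA mismatch count across the A, B, and DR loci.  A donor antigen
# the recipient lacks counts as one mismatch; fewer mismatches generally
# means a better match.  Illustrative only, not an allocation algorithm.

def hla_mismatches(donor: dict, recipient: dict) -> int:
    """Count donor antigens absent from the recipient at each locus."""
    return sum(
        len(donor.get(locus, set()) - recipient.get(locus, set()))
        for locus in ("A", "B", "DR")
    )

donor = {"A": {"A1", "A2"}, "B": {"B7", "B8"}, "DR": {"DR15", "DR4"}}
recipient = {"A": {"A1", "A3"}, "B": {"B7", "B8"}, "DR": {"DR15"}}
print(hla_mismatches(donor, recipient))  # 2 mismatches (A2 and DR4)
```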
For further information, click on any of the following:
- Types of transplant
- Organs and tissues transplanted
- Types of donor
- Allocation of organs
- Reasons for donation and ethical issues
- Usage
- History
- Society and culture
- Research
- See also
- References
- Further reading
- External links
Advancements in Neurosurgery
YouTube Video: A day in the life of a neurosurgeon
Pictured: LEFT: Atlas of Neurosurgical Techniques; RIGHT: Illustration of Spinal Fusion Surgery
Neurosurgery (or neurological surgery) is the medical specialty concerned with the prevention, diagnosis, treatment, and rehabilitation of disorders which affect any portion of the nervous system including the brain, spinal cord, peripheral nerves, and extra-cranial cerebrovascular system.
Some neurosurgery procedures involve the use of intra-operative MRI and functional MRI.
In conventional open surgery the neurosurgeon opens the skull, creating a large opening to access the brain. Techniques involving smaller openings with the aid of microscopes and endoscopes are now being used as well.
Methods that utilize small craniotomies in conjunction with high-clarity microscopic visualization of neural tissue offer excellent results. However, the open methods are still traditionally used in trauma or emergency situations.
Microsurgery is utilized in many aspects of neurological surgery. Microvascular techniques are used in EC-IC bypass surgery and in restorative carotid endarterectomy. The clipping of an aneurysm is performed under microscopic vision. Minimally invasive spine surgery utilizes microscopes or endoscopes. Procedures such as microdiscectomy, laminectomy, and artificial disc replacement rely on microsurgery.
Using stereotaxy, neurosurgeons can approach a minute target in the brain through a minimal opening. This is used in functional neurosurgery, where electrodes are implanted or gene therapy is instituted with a high level of accuracy, as in the case of Parkinson's disease or Alzheimer's disease. Using the combination of open and stereotactic surgery, intraventricular hemorrhages can potentially be evacuated successfully.
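As a purely geometric toy sketch of stereotactic targeting (the coordinates below are invented; real frames involve image registration, calibration, and safety margins), the straight-line trajectory from an entry point to a target in frame coordinates is:

```python
# Toy stereotactic trajectory: path length and unit direction from an
# entry point to a target, both given in frame coordinates (mm).
# Coordinates are invented for illustration.
import math

def trajectory(entry, target):
    """Return (length_mm, unit_direction) of the entry-to-target line."""
    delta = [t - e for e, t in zip(entry, target)]
    length = math.sqrt(sum(c * c for c in delta))
    return length, tuple(c / length for c in delta)

length_mm, direction = trajectory((10.0, 40.0, 60.0), (10.0, 10.0, 20.0))
print(round(length_mm, 1))  # 50.0 (a 3-4-5 triangle scaled by 10)
```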
Conventional surgery using image guidance technologies is also becoming common and is referred to as surgical navigation, computer-assisted surgery, navigated surgery, or stereotactic navigation. Similar to a car or mobile Global Positioning System (GPS), image-guided surgery systems, like Curve Image Guided Surgery and StealthStation, use cameras or electromagnetic fields to capture and relay the patient's anatomy and the surgeon's precise movements in relation to the patient to computer monitors in the operating room.
These sophisticated computerized systems are used before and during surgery to help orient the surgeon with three-dimensional images of the patient’s anatomy including the tumor.
Minimally invasive endoscopic surgery is commonly utilized by neurosurgeons when appropriate. Techniques such as endoscopic endonasal surgery are used in pituitary tumors, craniopharyngiomas, chordomas, and the repair of cerebrospinal fluid leaks.
Ventricular endoscopy is used in the treatment of intraventricular bleeds, hydrocephalus, colloid cyst and neurocysticercosis. Endonasal endoscopy is at times carried out with neurosurgeons and ENT surgeons working together as a team.
Repair of craniofacial disorders and disturbance of cerebrospinal fluid circulation is done by neurosurgeons who also occasionally team up with maxillofacial and plastic surgeons. Cranioplasty for craniosynostosis is performed by pediatric neurosurgeons with or without plastic surgeons.
Neurosurgeons are involved in stereotactic radiosurgery along with radiation oncologists in tumor and AVM treatment. Radiosurgical methods such as Gamma knife, Cyberknife and Novalis Radiosurgery are used as well.
Endovascular neurosurgery utilizes endovascular, image-guided procedures for the treatment of aneurysms, AVMs, carotid stenosis, strokes, spinal malformations, and vasospasms. Techniques such as angioplasty, stenting, clot retrieval, embolization, and diagnostic angiography are endovascular procedures.
A common procedure performed in neurosurgery is the placement of a ventriculoperitoneal shunt (VP shunt). In pediatric practice this is often implemented in cases of congenital hydrocephalus. The most common indication for this procedure in adults is normal pressure hydrocephalus (NPH).
Neurosurgery of the spine covers the cervical, thoracic and lumbar spine. Some indications for spine surgery include spinal cord compression resulting from trauma, arthritis of the spinal discs, or spondylosis. In cervical cord compression, patients may have difficulty with gait, balance issues, and/or numbness and tingling in the hands or feet.
Spondylosis is the condition of spinal disc degeneration and arthritis that may compress the spinal canal. This condition can often result in bone-spurring and disc herniation. Power drills and special instruments are often used to correct any compression problems of the spinal canal. Disc herniations of spinal vertebral discs are removed with special rongeurs. This procedure is known as a discectomy.
Generally, once a disc is removed, it is replaced by an implant that creates a bony fusion between the vertebral bodies above and below. Alternatively, a mobile disc can be implanted into the disc space to maintain mobility; this is commonly used in cervical disc surgery. At times, instead of disc removal, a laser discectomy can be used to decompress a nerve root; this method is mainly used for lumbar discs. Laminectomy is the removal of the lamina portion of the vertebrae of the spine in order to make room for the compressed nerve tissue.
Radiology-assisted spine surgery uses minimally invasive procedures, including vertebroplasty and kyphoplasty, with which certain types of spinal fractures are managed.
Potentially unstable spines need spinal fusion, and at present these procedures involve complex instrumentation. Spinal fusion can be performed as open surgery or as minimally invasive surgery. Anterior cervical discectomy and fusion is a common surgery performed for disc disease of the cervical spine.
However, each method described above may not work in all patients, so careful selection of patients for each procedure is important. Note that if there is prior permanent neural tissue damage, spinal surgery may not relieve the symptoms.
Surgery for chronic pain is a sub-branch of functional neurosurgery. Some of its techniques include the implantation of deep brain stimulators, spinal cord stimulators, peripheral stimulators, and pain pumps.
Surgery of the peripheral nervous system is also possible, and includes the very common procedures of carpal tunnel decompression and peripheral nerve transposition. Numerous other types of nerve entrapment conditions and other problems with the peripheral nervous system are treated as well.
Neurosurgeons treat a range of other conditions as well.
In the United States, a neurosurgeon must generally complete four years of undergraduate education, four years of medical school, and seven years of residency.
Most, but not all, residency programs have some component of basic science or clinical research. Neurosurgeons may pursue additional training in the form of a fellowship, after residency or, in some cases, as a senior resident.
These fellowships include pediatric neurosurgery, trauma/neurocritical care, functional and stereotactic surgery, surgical neuro-oncology, radiosurgery, neurovascular surgery, skull-base surgery, and peripheral nerve and spine surgery.
In the U.S., neurosurgery is considered a highly competitive specialty; neurosurgeons make up just 0.6% of all practicing physicians.
Main divisions of neurosurgery:
General neurosurgery involves most neurosurgical conditions including neuro-trauma and other neuro-emergencies such as intracranial hemorrhage. Most level 1 hospitals have this kind of practice.
Specialized branches have developed to cater to special and difficult conditions. These specialized branches co-exist with general neurosurgery in more sophisticated hospitals. To practice advanced specialization within neurosurgery, additional higher fellowship training of one to two years is expected from the neurosurgeon.
Some of these divisions of neurosurgery are:
- stereotactic neurosurgery, functional neurosurgery, and epilepsy surgery. The latter includes partial or total corpus callosotomy (severing part or all of the corpus callosum to stop or lessen seizure spread and activity) and the surgical removal of functional, physiological, and/or anatomical pieces or divisions of the brain, called epileptic foci, that are operable and causing seizures. It also includes the more radical and very rare partial or total lobectomy, or even hemispherectomy (the removal of part or all of one of the lobes, or of one of the cerebral hemispheres, of the brain); when possible, those two procedures are also very rarely used in oncological neurosurgery or to treat very severe neurological trauma, such as stab or gunshot wounds to the brain
- oncological neurosurgery also called neurosurgical oncology; includes pediatric oncological neurosurgery; treatment of benign and malignant central and peripheral nervous system cancers and pre-cancerous lesions in adults and children (including, among others, glioblastoma multiforme and other gliomas, brain stem cancer, astrocytoma, pontine glioma, medulloblastoma, spinal cancer, tumors of the meninges and intracranial spaces, secondary metastases to the brain, spine, and nerves, and peripheral nervous system tumors)
- peripheral nerve surgery
- pediatric neurosurgery (for cancer, seizures, bleeding, stroke, cognitive disorders or congenital neurological disorders)
- neuropsychiatric surgery (neurosurgery for the treatment of adult or pediatric mental illnesses)
- geriatric neurosurgery (for the treatment of neurological disorders, dementias, and mental impairments due to age rather than to a stroke, seizure, tumor, concussion, or neurovascular cause, such as parkinsonism, Alzheimer's disease, multiple sclerosis, and similar disorders)
Neuroanesthesia is a field of anesthesiology that focuses on neurosurgery. In "awake" brain surgery, the patient is conscious during the middle of the procedure and sedated only at the beginning and end. This approach is used when a tumor lacks clear boundaries and the surgeon needs to know whether resection is encroaching on critical regions of the brain involved in functions such as speech, cognition, vision, and hearing. It is also used in some procedures intended to treat epileptic seizures.
Neurosurgery methods
Neuroradiology plays a key role not only in diagnosis but also in the operative phase of neurosurgery.
Neuroradiology methods are used in modern neurosurgery diagnosis and treatment. They include:
- computer-assisted imaging (computed tomography, CT),
- magnetic resonance imaging (MRI),
- positron emission tomography (PET),
- magnetoencephalography (MEG),
- and stereotactic radiosurgery.
Some neurosurgery procedures involve the use of intra-operative MRI and functional MRI.
In conventional open surgery the neurosurgeon opens the skull, creating a large opening to access the brain. Techniques involving smaller openings with the aid of microscopes and endoscopes are now being used as well.
Methods that utilize small craniotomies in conjunction with high-clarity microscopic visualization of neural tissue offer excellent results. However, the open methods are still traditionally used in trauma or emergency situations.
Microsurgery is utilized in many aspects of neurological surgery. Microvascular techniques are used in EC-IC bypass surgery and in carotid endarterectomy. The clipping of an aneurysm is performed under microscopic vision. Minimally invasive spine surgery utilizes microscopes or endoscopes. Procedures such as microdiscectomy, laminectomy, and artificial disc replacement rely on microsurgery.
Using stereotaxy, neurosurgeons can approach a minute target in the brain through a minimal opening. This is used in functional neurosurgery, where electrodes are implanted or gene therapy is instituted with a high level of accuracy, as in the treatment of Parkinson's disease or Alzheimer's disease. Using a combination of open and stereotactic surgery, intraventricular hemorrhages can potentially be evacuated successfully.
Conventional surgery using image-guidance technologies is also becoming common and is referred to as surgical navigation, computer-assisted surgery, navigated surgery, or stereotactic navigation. Similar to a car or mobile Global Positioning System (GPS), image-guided surgery systems such as Curve Image Guided Surgery and StealthStation use cameras or electromagnetic fields to capture and relay the patient's anatomy, and the surgeon's precise movements in relation to the patient, to computer monitors in the operating room.
These sophisticated computerized systems are used before and during surgery to help orient the surgeon with three-dimensional images of the patient’s anatomy including the tumor.
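The core mathematical step behind such navigation systems can be illustrated with a short sketch. This is not the algorithm of any particular commercial system such as StealthStation; it is a minimal example of rigid point-based registration (the Kabsch algorithm), which aligns fiducial markers touched on the patient with the same markers located in a preoperative scan, so instrument positions can be displayed on the scan:

```python
import numpy as np

def register_fiducials(patient_pts, image_pts):
    """Rigid fit (rotation + translation) mapping patient-space
    fiducial points onto image-space points (Kabsch algorithm)."""
    P = np.asarray(patient_pts, dtype=float)
    Q = np.asarray(image_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t                             # maps x -> R @ x + t

# Hypothetical example: recover a known pose from four skin-marker
# fiducials (noise-free, for illustration only).
rng = np.random.default_rng(0)
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_R) < 0:
    true_R[:, 0] *= -1                      # ensure a proper rotation
true_t = np.array([10.0, -5.0, 2.5])        # translation in mm
patient = rng.normal(size=(4, 3)) * 50      # fiducials in mm
image = patient @ true_R.T + true_t         # same markers in the scan
R, t = register_fiducials(patient, image)
assert np.allclose(patient @ R.T + t, image, atol=1e-6)
```

Real systems add optical or electromagnetic tracking, error estimation, and re-registration during the case, but the mapping from patient space to image space is a rigid transform of this kind.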
Minimally invasive endoscopic surgery is commonly utilized by neurosurgeons when appropriate. Techniques such as endoscopic endonasal surgery are used in pituitary tumors, craniopharyngiomas, chordomas, and the repair of cerebrospinal fluid leaks.
Ventricular endoscopy is used in the treatment of intraventricular bleeds, hydrocephalus, colloid cyst and neurocysticercosis. Endonasal endoscopy is at times carried out with neurosurgeons and ENT surgeons working together as a team.
Repair of craniofacial disorders and disturbance of cerebrospinal fluid circulation is done by neurosurgeons who also occasionally team up with maxillofacial and plastic surgeons. Cranioplasty for craniosynostosis is performed by pediatric neurosurgeons with or without plastic surgeons.
Neurosurgeons are involved in stereotactic radiosurgery along with radiation oncologists in tumor and AVM treatment. Radiosurgical systems such as Gamma Knife, CyberKnife, and Novalis are used as well.
Endovascular neurosurgery uses image-guided endovascular procedures for the treatment of aneurysms, AVMs, carotid stenosis, strokes, spinal malformations, and vasospasms. Techniques such as angioplasty, stenting, clot retrieval, embolization, and diagnostic angiography are endovascular procedures.
A common procedure performed in neurosurgery is the placement of a ventriculoperitoneal (VP) shunt. In pediatric practice this is often implemented in cases of congenital hydrocephalus. The most common indication for this procedure in adults is normal pressure hydrocephalus (NPH).
Neurosurgery of the spine covers the cervical, thoracic and lumbar spine. Some indications for spine surgery include spinal cord compression resulting from trauma, arthritis of the spinal discs, or spondylosis. In cervical cord compression, patients may have difficulty with gait, balance issues, and/or numbness and tingling in the hands or feet.
Spondylosis is the condition of spinal disc degeneration and arthritis that may compress the spinal canal. This condition can often result in bone-spurring and disc herniation. Power drills and special instruments are often used to correct any compression problems of the spinal canal. Disc herniations of spinal vertebral discs are removed with special rongeurs. This procedure is known as a discectomy.
Generally, once a disc is removed it is replaced by an implant that creates a bony fusion between the vertebral bodies above and below. Alternatively, a mobile disc can be implanted into the disc space to maintain mobility; this is commonly used in cervical disc surgery. At times, instead of disc removal, a laser discectomy can be used to decompress a nerve root; this method is mainly used for lumbar discs. Laminectomy is the removal of the lamina portion of a vertebra in order to make room for the compressed nerve tissue.
Radiology-assisted spine surgery uses minimally invasive procedures. These include vertebroplasty and kyphoplasty, techniques with which certain types of spinal fractures are managed.
Potentially unstable spines need spine fusions. At present these procedures involve complex instrumentation. Spine fusions can be performed as open surgery or as minimally invasive surgery. Anterior cervical discectomy and fusion is a common surgery performed for disc disease of the cervical spine.
However, each method described above may not work for every patient, so careful patient selection for each procedure is important. Note that if there is prior permanent damage to neural tissue, spinal surgery may not relieve the symptoms.
Surgery for chronic pain is a subspecialty of functional neurosurgery. Techniques include implantation of deep brain stimulators, spinal cord stimulators, peripheral stimulators, and pain pumps.
Surgery of the peripheral nervous system is also possible, and includes the very common procedures of carpal tunnel decompression and peripheral nerve transposition. Numerous other types of nerve entrapment conditions and other problems with the peripheral nervous system are treated as well.
Other conditions treated by neurosurgeons include:
- Meningitis and other central nervous system infections including abscesses
- Spinal disc herniation
- Cervical spinal stenosis and Lumbar spinal stenosis
- Hydrocephalus
- Head trauma (brain hemorrhages, skull fractures, etc.)
- Spinal cord trauma
- Traumatic injuries of peripheral nerves
- Tumors of the spine, spinal cord and peripheral nerves
- Intracranial hemorrhage, such as subarachnoid, intraventricular, and intraparenchymal hemorrhages
- Some forms of drug-resistant epilepsy
- Some forms of movement disorders (advanced Parkinson's disease, chorea) – this involves the use of specially developed minimally invasive stereotactic techniques (functional, stereotactic neurosurgery) such as ablative surgery and deep brain stimulation surgery
- Intractable pain of cancer or trauma patients and cranial/peripheral nerve pain
- Some forms of intractable psychiatric disorders
- Vascular malformations (i.e., arteriovenous malformations, venous angiomas, cavernous angiomas, capillary telangiectasias) of the brain and spinal cord
- Moyamoya disease.
Advancements in Heart and Lung Surgery
YouTube Video by the American College of Surgeons (ACS) Education for a Better Recovery Program, 'Your Lung Operation.'
Pictured: The image shows how a heart-lung bypass machine works during surgery (courtesy of the National Heart, Lung, and Blood Institute, a part of the U.S. National Institutes of Health)
Cardiothoracic surgery is the field of medicine involved in surgical treatment of organs inside the thorax (the chest)—generally treatment of conditions of the heart (heart disease) and lungs (lung disease).
A cardiac surgery residency typically comprises anywhere from 9 to 14 years (or longer) of training to become a fully qualified surgeon.
Cardiac surgery training may be combined with thoracic surgery and / or vascular surgery and called cardiovascular (CV) / cardiothoracic (CT) / cardiovascular thoracic (CVT) surgery.
Cardiac surgeons may enter a cardiac surgery residency directly from medical school, or first complete a general surgery residency followed by a fellowship.
Cardiac surgeons may further sub-specialize by completing a fellowship in a variety of areas, including pediatric cardiac surgery, cardiac transplantation, adult acquired heart disease, and heart failure, among others.
Cardiac surgery training in the United States is combined with thoracic surgery and called cardiothoracic surgery. A cardiothoracic surgeon in the U.S. is a physician (D.O. or M.D.) who first completes a general surgery residency (typically 5–7 years), followed by a cardiothoracic surgery fellowship (typically 2–3 years).
The cardiothoracic surgery fellowship typically spans two or three years, but certification is based on the number of surgeries performed as the operating surgeon and on passing rigorous board certification tests, not on the time spent in the program. Recently, however, options for an integrated 6-year cardiothoracic residency (in place of the general surgery residency plus cardiothoracic fellowship) have been established at several programs.
Applicants match into these I-6 programs directly out of medical school, and the application process has been extremely competitive: there were approximately 160 applicants for 10 spots in the U.S. in 2010. As of May 2013, there were 20 approved programs.
Dental Technology Advancements by Consumer Guide to Dentistry including the Timeline for the History of Dentistry by the American Dental Association (ADA)
YouTube Video: The Newest Technology in Dentistry
To the casual observer, it may appear that not much has changed in dentistry. Yet dental technologies have been evolving continually, helping to transform the field of dentistry. New technologies are developed with a focus on creating products and developing techniques that can be used by your dentist to help prevent, diagnose and/or treat dental conditions and diseases early and effectively.
There are many dental technologies for your dentist to choose from, with benefits that can provide you with more comfort and ease during treatment.
Some of the more popular technologies include:
Air-Abrasion: Serving as an alternative to a traditional dental drill, an air-abrasion system is primarily used to treat small cavities, preserving healthy tooth structure without the use of a local anesthetic. Air-abrasion allows for the precise removal of decay using a fine stream of air and aluminum oxide particles. The air-abrasion technique can also be used to help repair old tooth restorations by accessing difficult areas such as those between the teeth.
Bone Replacement: There are three types of bone replacement used to assist people suffering from bone loss or those requiring extraction:
- Man-made bone replacement: a freeze-dried material made in the laboratory.
- Cadaver/animal bone replacement: bone that is preserved, processed, and sterilized from a deceased human or animal source.
- Grafting (autogenous) bone replacement: bone taken from another area of the patient's own body, such as the iliac crest section of the pelvis.
Platelet-rich growth factors can help induce rapid bone growth and healing, and are used in bone replacement for that reason. Bone replacement performed by clinicians today is more refined than in the past and can be more easily assimilated into the existing bone structure.
CAD/CAM: CAD/CAM (computer assisted design, computer assisted manufacture) technology allows for the fabrication of dental restorations through computerized technology. Your dentist may work with CAD/CAM in the office to complete tooth restorations in one visit that would otherwise require two visits to complete. These procedures can include: inlays, onlays, porcelain veneers, dental crowns and dental bridges.
Caries Detection Solution: Caries detection solution is a liquid red dye that is applied over a tooth to confirm that all tooth decay is removed from an affected area that has been treated. This solution is very similar to plaque disclosing tablets that are used after brushing to highlight any areas you missed or that aren't thoroughly cleaned.
CAT Scans: A 3-D image CAT scan is used to help implantologists (dentists who provide surgical and restorative implant services) view and work on the jawbone or surrounding bone structure to produce more accurate results. CAT scan technology has become increasingly specialized for dentistry as implants, rather than dentures, have become the standard of care for tooth replacement.
Composite Materials: Composite resin materials are now used in some veneers and other restorations, to fill cavities and to bond onto or rebuild a tooth. Composite resins offer a tooth-like resolution and have grown in popularity over the years. They are continually being improved and refined to better replicate tooth colors, to be easier to apply and to hold their shape without slumping off the tooth. The handling of and the speeds associated with curing composite resins, coupled with the translucent qualities of the newer materials, has helped to produce beautiful natural looking results.
Diagnodent: Diagnodent is a tool used for the early detection of cavities. The device uses laser fluorescence to detect caries earlier than traditional methods allow, so that treatment can begin immediately, limiting the extent of decay. This helps preserve the maximum amount of natural healthy tooth structure.
Dental Implants: Dental implant technology continues to improve. Mini-implants can now accommodate small tooth replacements. The bio-integration of the titanium tooth root with human bone is highly predictable, and the results are very long lasting. The industry has also focused on reducing treatment time, and some implants can be placed immediately after tooth extraction as opposed to waiting six months after extraction. In many instances, dental implants are now the standard of care if a tooth needs to be extracted or if there is a question as to whether a root canal procedure should be done.
Desensitizers: Desensitizers can be used by your dentist or hygienist prior to dental treatment if you have sensitive teeth, so that you are comfortable throughout treatment. Desensitizers can be used alone or in conjunction with other pain and anxiety relief modalities such as a local anesthetic or sedation dentistry.
Digital X-rays: Digital radiographs offer a way to capture dental images through a sensor that processes the image onto a computer screen. Digital X-rays can provide greater comfort than traditional X-rays and significantly reduce radiation exposure: roughly four digital radiographs deliver about the same radiation dose as one traditional film X-ray.
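The four-to-one figure can be made concrete with a toy calculation. The dose units below are hypothetical placeholders chosen only to reflect the ratio described above, not clinical values:

```python
# Illustrative comparison of cumulative radiation exposure.
# Assume one film X-ray delivers 4 units of dose and one digital
# radiograph delivers 1 unit (the ~4:1 ratio described above);
# these are made-up units, not clinical measurements.
FILM_DOSE = 4.0      # dose units per traditional film X-ray
DIGITAL_DOSE = 1.0   # dose units per digital radiograph

def cumulative_dose(num_images, dose_per_image):
    """Total exposure for a series of images."""
    return num_images * dose_per_image

# A full series of 18 images, digital vs. film:
digital_total = cumulative_dose(18, DIGITAL_DOSE)   # 18.0 units
film_total = cumulative_dose(18, FILM_DOSE)         # 72.0 units
assert digital_total == film_total / 4
```

Under these assumptions, a patient would need four times as many digital images to accumulate the exposure of a single film series.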
Electric Hand Pieces: Electric hand pieces can assist in hygiene procedures and can be used with rotary cutting instruments. Electric hand pieces offer a smooth delivery of material which puts less stress on the healthy tooth structure. They are often quieter too.
Internet: Today, dental appointments can be made through a practice website. Dentists can send dental technicians emails with your intra-oral photos in real time so that they can discuss the case in real time with you in the chair. The Internet also serves as an informational tool through which you can receive updates as they relate to the field of dentistry.
Intra-Oral Camera: Intra-oral cameras can produce accurate images of your teeth and the supporting structure. This allows you, your dentist, and any dental technician who might be involved in your treatment to see tooth defects. Intra-oral cameras also help you to learn more about dental hygiene practices, including where to focus on brushing your teeth.
Lasers: Lasers offer reduced discomfort and, in some cases, a suture-free option for the treatment of benign tumors, cold sores, crown lengthening, decay removal, gummy smile changes, dental fillings, tongue tie and speech impediment improvements, and nerve regeneration for damaged nerves, blood vessels, and scars. Lasers may also be applied in the treatment of select dental conditions such as sleep apnea, certain cases of TMD, and tooth sensitivity. This is a very exciting area of development in dental technologies. Because lasers deliver focused light energy, the healing period is often shortened and almost painless.
Optical Scanners: Optical Scanners provide a digital map of the tooth and create a 3-D replica model of the dental structure. This helps in accurate color analysis for cosmetic restorations made in a dental laboratory such as porcelain veneers, crowns and bridges.
Microscope: The use of microscopes in dentistry has been one of the latest trends to perfect acute vision for dentists. Microscopes offer dentists the ability to see micro-cracks, weakened underlying tooth structure and the proximity of the dental nerves with precise accuracy. Microscopes also offer more accuracy in removing affected tooth structure while preserving healthy tooth structure. The microscope is an improved diagnostic tool as well as a treatment-assist dental tool.
NTI Splint: The NTI splint is an anterior splint used for TMD patients. The NTI and other splints, such as the Kois Deprogrammer, have the potential to help treat patients affected by migraine headaches.
Periodontal Antibiotics: There are "site specific" antibiotics that are designed to concentrate the treatment in specific locations. Periodontal antibiotics are applied directly to the diseased site, enhancing the effectiveness of treatment for periodontal disease.
VELscope: VELscope is an FDA-approved oral cancer screening system that uses tissue fluorescence under a blue excitation light so your dentist can see abnormalities that may not be apparent or visible to the naked eye. VELscope is also used by oral surgeons to help identify diseased tissue around a lesion and determine the appropriate margin for surgical excision.
ViziLite: ViziLite is a recently approved oral lesion identification and marking device. It is a painless screening tool for the detection of small changes in your mouth. ViziLite also identifies, evaluates, and monitors oral abnormalities in patients who are at increased risk for oral cancer.
Periometer: The Periometer is an instrumented percussion diagnostic system designed for a range of dental applications, including monitoring implant osseointegration and the formation of cracks in natural teeth. Thus far, the results have been correlated with the presence and location of defects as well as the overall dynamic properties of the oral structure. The use of the Periometer for determining optimum dental therapeutics has also been explored. Current research suggests that routine use of the Periometer in a dental practice may help avoid catastrophic failure of both implants and teeth.
The Wand: The Wand is a computerized tool that delivers anesthesia in a slow and methodical manner. The sensation of pain often associated with an injection results from the pressure of the liquid being injected; the slow, gentle delivery of The Wand often makes injections painless. The delivery holder is small and easy for the dentist to use.
There are an increasing number of dental technologies from which your dentist can choose. The benefits that you can receive by visiting your dentist today may offer you more comfort and ease than in days past.
There are many dental technologies for your dentist to choose from, with benefits that can provide you with more comfort and ease during treatment.
Some of the more popular technologies include:
Air-Abrasion: Serving as an alternative to a traditional dental drill, an air-abrasion system is primarily used to treat small cavities, preserving healthy tooth structure without the use of a local anesthetic. Air-abrasion allows for the precise removal of decay through a blast of pellets consisting of air and aluminum oxide. The air-abrasion technique can also be used to help repair old tooth restorations by accessing difficult areas such as those between the teeth.
Bone Replacement: There are three types of bone replacement used to assist people suffering from bone loss or those requiring extraction:
- Autogenous Man Made Bone Replacement: A freeze dried material that is made in the laboratory.
- Cadaver/Animal Bone Replacement: Bone that is preserved, processed and sterilized from a deceased individual or animal source.
- Grafting Bone Replacement: Bone taken from another area of the body, such as the iliac crest section of the pelvis.
Platelet-rich growth factors can help induce rapid bone growth and healing. It is used in bone replacement and offers rapid healing properties. Bone replacement performed by clinicians today is more refined than in the past and can be more easily assimilated into the existing bone structure.
CAD/CAM: CAD/CAM (computer assisted design, computer assisted manufacture) technology allows for the fabrication of dental restorations through computerized technology. Your dentist may work with CAD/CAM in the office to complete tooth restorations in one visit that would otherwise require two visits to complete. These procedures can include: inlays, onlays, porcelain veneers, dental crowns and dental bridges.
Caries Detection Solution: Caries detection solution is a liquid red dye that is applied over a tooth to confirm that all tooth decay is removed from an affected area that has been treated. This solution is very similar to plaque disclosing tablets that are used after brushing to highlight any areas you missed or that aren't thoroughly cleaned.
CAT Scans: A 3-D image CAT scan is used to help implantologists (dentists who provide surgical and restorative implant services) view and work on the jawbone or surrounding bone structure to produce more accurate results. CAT scan technology has become increasingly specialized for dentistry as implants, rather than dentures, have become the standard of care for tooth replacement.
Composite Materials: Composite resin materials are now used in some veneers and other restorations, to fill cavities and to bond onto or rebuild a tooth. Composite resins offer a tooth-like resolution and have grown in popularity over the years. They are continually being improved and refined to better replicate tooth colors, to be easier to apply and to hold their shape without slumping off the tooth. The handling of and the speeds associated with curing composite resins, coupled with the translucent qualities of the newer materials, has helped to produce beautiful natural looking results.
Diagnodent: Diagnodent is a tool used for the early detection of cavities. The advanced technology uses sound pulse and laser to detect caries earlier than traditional methods allowed, so that treatment can commence immediately limiting the amount of decay allowed to occur. This helps preserve the maximum amount of natural healthy tooth structure.
Dental Implants: Dental implant technology continues to improve. Mini-implants can now accommodate small tooth replacements. The bio-integration of the titanium tooth root with human bone is completely predictable. The results are very long lasting. The industry has also focused on reducing treatment time and some implants can be placed immediately after tooth extraction as opposed to waiting six months after extraction. In many instances, dental implants are now the standard of care if a tooth needs to be extracted or if there is a question as to whether a root canal procedure should be done.
Desensitizers: Desensitizers can be used by your dentist or hygienist prior to dental treatment if you have sensitive teeth, so that you are comfortable throughout treatment. Desensitizers can be used alone or in conjunction with other pain and anxiety relief modalities such as a local anesthetic or sedation dentistry.
Digital X-rays: Digital radiographs offer a way to capture dental images through a sensor that processes the image onto a computer screen. Digital X-rays can provide greater comfort than traditional X-rays and certainly reduce exposure to radiation. Four digital radiographs are about equal to one "paper" X-ray.
Electric Hand Pieces: Electric hand pieces can assist in hygiene procedures and can be used with rotary cutting instruments. Electric hand pieces offer a smooth delivery of material which puts less stress on the healthy tooth structure. They are often quieter too.
Internet: Today, dental appointments can be made through a practice website. Dentists can send dental technicians emails with your intra-oral photos in real time so that they can discuss the case in real time with you in the chair. The Internet also serves as an informational tool through which you can receive updates as they relate to the field of dentistry.
Intra-Oral Camera: Intra-oral cameras can produce accurate images of your teeth and the supporting structure. This allows you, your dentist and a dental technician that might be involved in your treatment, to see tooth defects. Intra-oral cameras also help you to learn more about dental hygiene practices, including where to focus on brushing your teeth.
Lasers: Lasers offer reduced discomfort and in some cases, a suture free option for the treatment of benign tumors, cold sores, crown lengthening, decay removal, gummy smile changes, dental fillings, tongue tie and speech impediment improvements, nerve regeneration for damaged nerves and blood vessels and scars. Lasers may also be applied in the treatment of select dental conditions such as sleep apnea, certain cases of TMD and tooth sensitivity. This is a very exciting area of development in dental technologies. Lasers use light energy as their method of operation, resulting in a shortened and almost painless healing period.
Optical Scanners: Optical Scanners provide a digital map of the tooth and create a 3-D replica model of the dental structure. This helps in accurate color analysis for cosmetic restorations made in a dental laboratory such as porcelain veneers, crowns and bridges.
Microscope: The use of microscopes in dentistry has been one of the latest trends to perfect acute vision for dentists. Microscopes offer dentists the ability to see micro-cracks, weakened underlying tooth structure and the proximity of the dental nerves with precise accuracy. Microscopes also offer more accuracy in removing affected tooth structure while preserving healthy tooth structure. The microscope is an improved diagnostic tool as well as a treatment-assist dental tool.
NTI Splint: The NTI splint is an anterior splint used for TMD patients. The NTI and other splints, such as the Kois Deprogrammer have the potential application of helping to treat patients affected by migraine headaches.
Periodontal Antibiotics: There are "site specific" antibiotics that are designed to concentrate the treatment in specific locations. Periodontal antibiotics are applied directly to the diseased site, enhancing the effectiveness of treatment for periodontal disease.
VELscope: VELscope is a brand new FDA-approved oral cancer screening system that uses incandescent light so your dentist can see abnormalities that may not be apparent or visible to the naked eye. VELscope is also used by oral surgeons to help identify diseased tissue around a lesion and determine the appropriate margin for surgical excision.
ViziLite: ViziLite is a recently approved oral lesion identification and marking device. It is a painless screening tool for the detection of small changes in your mouth. Vizilite also identifies, evaluates and monitors oral abnormalities in patients who are at increased risk for oral cancer.
Periometer: The Periometer is an instrumented percussion diagnostic system designed for a range of dental applications, including monitoring implant osseointegration and the formation of cracks in natural teeth. Thus far, the results have been correlated with the presence and location of defects as well as the overall dynamic properties of any oral structure. The use of the Periometer for determining optimum dental therapeutics has also been explored. Current research indicates that routine use of the Periometer in a dental practice can be critically important for avoiding catastrophic failure of both implants and teeth.
The Wand: The Wand is a computerized tool that delivers anesthesia in a slow, methodical manner. The sensation of pain often associated with an injection is caused by the pressure of the liquid being injected; the slow, gentle delivery of The Wand often makes injections painless. The delivery holder is small and easy for the dentist to use.
There are an increasing number of dental technologies from which your dentist can choose. The benefits that you can receive by visiting your dentist today may offer you more comfort and ease than in days past.
Facial Transplants
YouTube Video: Man Gets Full Facial Transplant*
*--Meet Patrick Hardison, a firefighter who lost his face and received the first successful total face transplant in 2015. You can read more about him here: http://www.nymag.com/face-transplant
Pictured: Man with LEFT: Original face; CENTER: after Injuries; and RIGHT: Today
A face transplant is a medical procedure to replace all or part of a person's face using tissue from a cadaver. The world's first partial face transplant on a living human was carried out in France in 2005. The world's first full face transplant was completed in Spain in 2010. Turkey, France, the United States and Spain (in order of total number of successful face transplants performed) are considered the leading countries in the research into the procedure.
People with faces disfigured by trauma, burns, disease, or birth defects might aesthetically benefit from the procedure. Professor Peter Butler at the Royal Free Hospital first suggested this approach in treating people with facial disfigurement in a Lancet article in 2002. This suggestion caused considerable debate at the time concerning the ethics of this procedure.
An alternative to a face transplant is facial reconstruction, which typically involves moving the patient's own skin from the back, buttocks, thighs, or chest to the face in a series of as many as 50 operations; even then, the patient regains only limited functionality, and the result is often likened to a mask or a living quilt.
L. Scott Levin, M.D., FACS, Chair, Department of Orthopaedic Surgery at University of Pennsylvania School of Medicine, has described the procedure as "the single most important area of reconstructive research".
Click on any of the following blue hyperlinks for further amplification:
Potential Cure for Cancer through Immunotherapy, including an Article by the Washington Post*
*- By Arthur Allen February 17, 2014
YouTube Video Immunotherapy & Chemo: What's the Difference? Trailer
YouTube & Picture: by Cancer Research Institute
Click here to read full article in the February 17, 2014 issue of the Washington Post.
The article begins:
"In the summer of 2012, a year after his wife had died of lung cancer, Michael Harris scraped open an old mole on his back and it would not stop bleeding. The doctors said he had stage 4 melanoma, with a virtually inoperable tumor, and that patients in his condition typically lived about eight months. By last June, the cancer had spread to his liver and lungs.
At that point Harris joined a clinical trial at Georgetown University, one of scores that have sprung up around the country to test a new class of cancer drugs called immune-checkpoint inhibitors. Two weeks after his first infusion, Harris’s primary tumor was fading, along with the black cancerous beads around it. A month later, his liver and lungs were clean.
“This stuff was like vanishing cream,” says Harris’s daughter, Rhonda Farrell. Today, Harris, a sun-leathered 66-year-old Vietnam veteran from Waldorf, Md., is back at work. And though his doctors won’t declare him cured, he says, “I feel like a normal person.”
Because it can be so inexorable and deadly, cancer tends to inspire hopes of miracle cures. Because of all the failed miracle cures, cancer doctors are a cautious lot. This makes it all the more astounding to hear cautious clinicians and scientists describe the treatments Harris and thousands of others are receiving.
“It’s a breakthrough,” says oncologist Michael Atkins, who recruited Harris to the trial at Georgetown’s Lombardi Cancer Center. “This is real,” adds Louis Weiner, the physician who leads the center. “We’re still in a bit of shock,” says Suzanne Topalian, a cancer immunologist at Johns Hopkins University who has been a key player in bringing the substances into clinical trials."
___________________________________________________________________________
Cancer immunotherapy is the use of the immune system to treat cancer.
Immunotherapies can be categorized as active, passive or hybrid (active and passive). These approaches exploit the fact that cancer cells often have molecules on their surface that can be detected by the immune system, known as tumour-associated antigens (TAAs); they are often proteins or other macromolecules (e.g. carbohydrates).
Active immunotherapy directs the immune system to attack tumor cells by targeting TAAs.
Passive immunotherapies enhance existing anti-tumor responses and include the use of monoclonal antibodies, lymphocytes and cytokines.
Among these, multiple antibody therapies are approved in various jurisdictions to treat a wide range of cancers. Antibodies are proteins produced by the immune system that bind to a target antigen on the cell surface. The immune system normally uses them to fight pathogens.
Each antibody is specific to one or a few proteins. Those that bind to tumor antigens treat cancer. Cell surface receptors are common targets for antibody therapies and include CD20, CD274 and CD279. Once bound to a cancer antigen, antibodies can induce antibody-dependent cell-mediated cytotoxicity, activate the complement system, or prevent a receptor from interacting with its ligand, all of which can lead to cell death.
Approved antibodies include:
Active cellular therapies usually involve the removal of immune cells from the blood or from a tumor. Those specific for the tumor are cultured and returned to the patient where they attack the tumor; alternatively, immune cells can be genetically engineered to express a tumor-specific receptor, cultured and returned to the patient. Cell types that can be used in this way are natural killer cells, lymphokine-activated killer cells, cytotoxic T cells and dendritic cells.
The only US-approved cell-based therapy is Dendreon's Provenge, for the treatment of prostate cancer.
Interleukin-2 and interferon-α are cytokines, proteins that regulate and coordinate the behaviour of the immune system. They have the ability to enhance anti-tumor activity and thus can be used as passive cancer treatments.
Interferon-α is used in the treatment of hairy-cell leukaemia, AIDS-related Kaposi's sarcoma, follicular lymphoma, chronic myeloid leukaemia and malignant melanoma. Interleukin-2 is used in the treatment of malignant melanoma and renal cell carcinoma.
Click on any of the following blue hyperlinks for more about Cancer immunotherapy:
Milestones in Medical Technology
(Reported in New York Times October 5, 2012)
YouTube Video: Robotic Surgery Demonstration Using Da Vinci Surgical System
Pictured: Fetal Ultrasound: Dr. Edward Hon of Yale reported using a Doppler monitor on a woman's abdomen to detect fetal heartbeat. Ultrasound's principles had been known for more than a century (an Austrian physicist, Christian Andreas Doppler, gave his name to the phenomenon in 1842), but this was its first use in prenatal care.
From eyeglasses to the stethoscope to imaging the brain at work, a long list of inventions and innovations have changed medicine.
Medical breakthroughs since 1950 are excerpted below:
1950:
Intraocular Lens
Dr. Harold Ridley, a British ophthalmologist, implanted the first permanently placed intraocular lens to correct a cataract.
1952:
Mechanical Heart
Henry Opitek, 41, was operated on using an artificial heart, the Dodrill GMR heart machine, manufactured by General Motors and generally considered the first mechanical heart. The surgeon, Dr. Forest Dewey Dodrill, successfully repaired the patient's mitral valve, and Mr. Opitek lived until 1981.
Magnetic Resonance
The Nobel Prize in Physics was awarded to Felix Bloch and Edward Mills Purcell for their work in developing nuclear magnetic resonance, the principle behind M.R.I. machines.
1953:
Heart-Lung Bypass
Dr. John Heysham Gibbon used his new invention, the heart-lung bypass machine, for the first time in open-heart surgery, completely supporting a patient's heart and lung functions for about half the time of the surgery. It was the culmination of his decades of work in developing the machine.
Cochlear Prosthesis
Dr. André Djourno of France developed a cochlear prosthesis, a method of stimulating the cochlear nerve in deaf people. It was the beginning of the long road to the development of effective cochlear implants. In 1957, working with the surgeon Charles Eyriès, he carried out the first cochlear implantation. He believed that medical devices should remain in the public domain and refused to patent his invention.
1954:
Kidney Transplant
In the first successful kidney transplant, after at least nine failures, a team of surgeons at Peter Bent Brigham Hospital in Boston transplanted a kidney from a 24-year-old man to his twin brother. The recipient lived 11 more years, and in 1990 the lead surgeon, Dr. Joseph E. Murray, won the Nobel Prize in Physiology or Medicine.
1958:
Pacemaker
Dr. Seymour Furman, a cardiologist at Montefiore Hospital in the Bronx, succeeded in extending a patient's life by more than two months using a cardiac pacemaker, a large machine to which the patient was attached by a 50-foot extension cord. By the next year, portable versions of the machine were in use.
Fetal Ultrasound
Dr. Edward Hon of Yale reported using a Doppler monitor on a woman's abdomen to detect fetal heartbeat. Ultrasound's principles had been known for more than a century (an Austrian physicist, Christian Andreas Doppler, gave his name to the phenomenon in 1842), but this was its first use in prenatal care.
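The principle such monitors exploit can be sketched with the standard reflected-ultrasound Doppler equation (a textbook formula, not from the Times report): a transducer emitting a frequency f0 toward tissue moving at speed v receives an echo shifted by

```latex
f_d = \frac{2 f_0 v \cos\theta}{c}
```

where theta is the angle between the ultrasound beam and the direction of motion, and c (roughly 1,540 meters per second) is the speed of sound in soft tissue. The periodic shift produced by the moving fetal heart wall is what the monitor renders as an audible heartbeat.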
1961:
Minimally Invasive Surgery
Dr. Thomas J. Fogarty came up with the idea for the balloon embolectomy catheter for removing blood clots, and used it on a patient six weeks later. It was the first minimally invasive surgery technique.
1963:
Artificial Heart
Paul Winchell, the ventriloquist and inventor, patented the first artificial heart, developed in collaboration with Dr. Henry J. Heimlich, later famous for the Heimlich maneuver.
Liver Transplant
The first human liver transplant was performed by Dr. Thomas E. Starzl. The patient, a 3-year-old child, rapidly bled to death.
1965:
Portable Defibrillator
Dr. Frank Pantridge installed the first portable defibrillator in an ambulance in Belfast, Northern Ireland. It weighed 150 pounds and was powered by car batteries.
Commercial Ultrasound
Walter Erich Krause of the Siemens Corporation filed a patent for the first practical commercial ultrasound machine. According to the patent, his machine could be "used for practical ultra-sonic-optical examination to achieve a lifelike reproduction of the body part under examination."
1967:
Heart Transplant
Dr. Christiaan Barnard, a South African, performed the first human heart transplant. The patient, a 53-year-old man, died 18 days later.
1971:
CT Scanner
The first commercial CT scanner, developed by Dr. Godfrey Hounsfield, was used on a patient in London. Dr. Hounsfield shared the 1979 Nobel Prize in Physiology or Medicine for his invention.
1973:
Insulin Pump
An inventor and entrepreneur, Dean L. Kamen, patented the first insulin pump. He became perhaps even better known for a later invention, the Segway transporter.
1978:
M.R.I.
Dr. Raymond V. Damadian announced that he had patented a technique using nuclear magnetic resonance to distinguish between normal and cancerous tissue. In 2003, two other researchers won a Nobel Prize for further discoveries.
1992:
DNA Sequencing
Dr. Leroy E. Hood patented his invention of the automated DNA sequencing technique; the patent is owned by the California Institute of Technology.
Imaging Thought
A paper in the journal Magnetic Resonance in Medicine by a group of researchers at the Medical College of Wisconsin announced the first use of functional magnetic resonance imaging to detect brain blood flow in conjunction with a human mental activity.
2000:
Human Genome
The first draft of the human genome was announced. Three years later, it was declared complete.
2004:
Adaptive Artificial Knee
The Rheo knee, a plastic prosthetic joint that adapts to a user's walking style and changes in terrain, was produced by the Ossur Corporation.
2006:
Artificial Liver
Dr. Colin McGuckin and Dr. Nico Forraz of Newcastle University developed a liver grown from stem cells. The size of a small coin, it was not an organ that could be implanted in a human.
Timeline of medicine and medical technology
YouTube Video of The Electric Eye: A Visual Prosthesis*
* -- Science for the Public, October 26, 2010. Shawn Kelly, Senior Systems Scientist, Institute for Complex Engineered Systems, Carnegie Mellon University: The complexity of the eye makes restoration of vision a formidable challenge. Dr. Shawn Kelly explains how the eye "sees," how blindness affects the retinal cells, and the difficulty of restoring even partial vision. He shows the development and testing of the visual prosthesis developed by the Boston Retinal Implant Project. This project, which involves teams from multiple institutions, is an outstanding example of the great achievements of science and engineering.
Pictured: This Day in Science June 26, 2000 – First DNA sequence released by human genome project
The history of medicine, as practiced by trained professionals, shows how societies have changed in their approach to illness and disease from ancient times to the present.
Early medical traditions include those of Babylon, China, Egypt and India. The Greeks went even further, introducing the concepts of medical diagnosis, prognosis, and advanced medical ethics.
The Hippocratic Oath, still taken (although significantly changed from the original) by doctors up to today, was written in Greece in the 5th century BCE. In the medieval age, surgical practices inherited from the ancient masters were improved and then systematized in Rogerius's The Practice of Surgery. Universities began systematic training of physicians around the year 1220 in Italy. During the Renaissance, understanding of anatomy improved, and the microscope was invented.
The germ theory of disease in the 19th century led to cures for many infectious diseases. Military doctors advanced the methods of trauma treatment and surgery. Public health measures were developed especially in the 19th century as the rapid growth of cities required systematic sanitary measures. Advanced research centers opened in the early 20th century, often connected with major hospitals.
The mid-20th century was characterized by new biological treatments, such as antibiotics. These advancements, along with developments in chemistry, genetics, and lab technology (such as the x-ray) led to modern medicine. Medicine was heavily professionalized in the 20th century, and new careers opened to women as nurses (from the 1870s) and as physicians (especially after 1970). The 21st century is characterized by highly advanced research involving numerous fields of science.
Health technology is defined by the World Health Organization as the "application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures and systems developed to solve a health problem and improve quality of lives." This includes the pharmaceuticals, devices, procedures and organizational systems used in health care.
Click on the following blue hyperlinks for a listing of the timeline for medicine and medical technology:
Stem Cell Therapy
YouTube Video: Another Promising Application for Stem Cells: Crohn’s Complications - Mayo Clinic
YouTube Video: Bone Marrow Transplant-Mayo Clinic
Pictured: Different applications for Stem Cell Therapy
Stem-cell therapy is the use of stem cells to treat or prevent a disease or condition.
Bone marrow transplant is the most widely used stem-cell therapy, but some therapies derived from umbilical cord blood are also in use. Research is underway to develop various sources for stem cells, and to apply stem-cell treatments for neurodegenerative diseases and conditions such as diabetes, heart disease, and other conditions.
Stem-cell therapy has become controversial following developments such as the ability of scientists to isolate and culture embryonic stem cells, to create stem cells using somatic cell nuclear transfer and their use of techniques to create induced pluripotent stem cells. This controversy is often related to abortion politics and to human cloning. Additionally, efforts to market treatments based on transplant of stored umbilical cord blood have been controversial.
Medical Uses:
For over 30 years, bone marrow has been used to treat cancer patients with conditions such as leukaemia and lymphoma; this is the only form of stem-cell therapy that is widely practiced.
During chemotherapy, most growing cells are killed by the cytotoxic agents. These agents, however, cannot discriminate between leukaemia or other neoplastic cells and the hematopoietic stem cells within the bone marrow. It is this side effect of conventional chemotherapy that the stem-cell transplant attempts to reverse: a donor's healthy bone marrow reintroduces functional stem cells to replace those lost in the host's body during treatment.
The transplanted cells also generate an immune response that helps to kill off the cancer cells; this process can go too far, however, leading to graft-versus-host disease, the most serious side effect of this treatment.
Another stem-cell therapy, called Prochymal, was conditionally approved in Canada in 2012 for the management of acute graft-versus-host disease in children who are unresponsive to steroids. It is an allogeneic stem-cell therapy based on mesenchymal stem cells (MSCs) derived from the bone marrow of adult donors. MSCs are purified from the marrow, cultured and packaged, with up to 10,000 doses derived from a single donor. The doses are stored frozen until needed.
The FDA has approved five hematopoietic stem-cell products derived from umbilical cord blood, for the treatment of blood and immunological diseases.
In 2014, the European Medicines Agency recommended approval of Holoclar, a treatment involving stem cells, for use in the European Union. Holoclar is used for people with severe limbal stem cell deficiency due to burns in the eye.
Click on any of the following blue hyperlinks for more about Stem Cell Therapy:
Bone marrow transplant is the most widely used stem-cell therapy, but some therapies derived from umbilical cord blood are also in use. Research is underway to develop various sources for stem cells, and to apply stem-cell treatments for neurodegenerative diseases and conditions such as diabetes, heart disease, and other conditions.
Stem-cell therapy has become controversial following developments such as the ability of scientists to isolate and culture embryonic stem cells, to create stem cells using somatic cell nuclear transfer and their use of techniques to create induced pluripotent stem cells. This controversy is often related to abortion politics and to human cloning. Additionally, efforts to market treatments based on transplant of stored umbilical cord blood have been controversial.
Medical Uses:
For over 30 years, bone marrow has been used to treat cancer patients with conditions such as leukaemia and lymphoma; this is the only form of stem-cell therapy that is widely practiced.
During chemotherapy, most growing cells are killed by the cytotoxic agents. These agents, however, cannot discriminate between the leukaemia or neoplastic cells, and the hematopoietic stem cells within the bone marrow. It is this side effect of conventional chemotherapy strategies that the stem-cell transplant attempts to reverse; a donor's healthy bone marrow reintroduces functional stem cells to replace the cells lost in the host's body during treatment.
The transplanted cells also generate an immune response that helps to kill off the cancer cells; this process can go too far, however, leading to graft-versus-host disease, the most serious side effect of this treatment.
Another stem-cell therapy, Prochymal, was conditionally approved in Canada in 2012 for the management of acute graft-versus-host disease in children who are unresponsive to steroids. It is an allogeneic stem-cell therapy based on mesenchymal stem cells (MSCs) derived from the bone marrow of adult donors. MSCs are purified from the marrow, cultured, and packaged, with up to 10,000 doses derived from a single donor. The doses are stored frozen until needed.
The FDA has approved five hematopoietic stem-cell products derived from umbilical cord blood for the treatment of blood and immunological diseases.
In 2014, the European Medicines Agency recommended approval of Holoclar, a treatment involving stem cells, for use in the European Union. Holoclar is used for people with severe limbal stem cell deficiency due to burns in the eye.
Click on any of the following blue hyperlinks for more about Stem Cell Therapy:
Robotic-assisted Surgery, including a List of Categories
YouTube Video: Robot-Assisted Surgery: Autonomous Tumor Localization and Extraction
YouTube Video: da Vinci Robot-Assisted Surgery
Pictured: TOP: Robotic Cardiac Surgery; BOTTOM: Robotic Knee Replacement Surgery
Robotic surgery, computer-assisted surgery, and robotic-assisted surgery are terms for technological developments that use robotic systems to aid in surgical procedures. Robotically-assisted surgery was developed to overcome the limitations of pre-existing minimally-invasive surgical procedures and to enhance the capabilities of surgeons performing open surgery.
In the case of robotically-assisted minimally-invasive surgery, instead of directly moving the instruments, the surgeon uses one of two methods to control them: a direct telemanipulator or computer control. A telemanipulator is a remote manipulator that allows the surgeon to perform the normal movements associated with the surgery while the robotic arms carry out those movements, using end-effectors and manipulators to perform the actual surgery on the patient.
In computer-controlled systems the surgeon uses a computer to control the robotic arms and its end-effectors, though these systems can also still use telemanipulators for their input. One advantage of using the computerised method is that the surgeon does not have to be present, but can be anywhere in the world, leading to the possibility for remote surgery.
In the case of enhanced open surgery, autonomous instruments (in familiar configurations) replace traditional steel tools, performing certain actions (such as rib spreading) with much smoother, feedback-controlled motions than could be achieved by a human hand.
The main object of such smart instruments is to reduce or eliminate the tissue trauma traditionally associated with open surgery without requiring more than a few minutes' training on the part of surgeons. This approach seeks to improve open surgeries, particularly cardio-thoracic, that have so far not benefited from minimally-invasive techniques.
Robotic surgery has been criticized for its expense, by one estimate costing $1,500 to $2,000 more per patient.
Click on any of the following blue hyperlinks for more about Robotic-assisted Surgery:
"Scientists Develop Blood Test That Spots Tumor-Derived DNA in People With Early-Stage Cancers"
(By Johns Hopkins University, 8/16/2017)
YouTube Video: The Future of Personalized Cancer Medicine (Johns Hopkins University)
The article follows:
In a bid to detect cancers early and in a noninvasive way, scientists at the Johns Hopkins Kimmel Cancer Center report they have developed a test that spots tiny amounts of cancer-specific DNA in blood and have used it to accurately identify more than half of 138 people with relatively early-stage colorectal, breast, lung and ovarian cancers. The test, the scientists say, is novel in that it can distinguish between DNA shed from tumors and other altered DNA that can be mistaken for cancer biomarkers.
A report on the research, performed on blood and tumor tissue samples from 200 people with all stages of cancer in the U.S., Denmark and the Netherlands, appears in the Aug. 16 issue of Science Translational Medicine.
“This study shows that identifying cancer early using DNA changes in the blood is feasible and that our high accuracy sequencing method is a promising approach to achieve this goal,” says Victor Velculescu, M.D., Ph.D., professor of oncology at the Johns Hopkins Kimmel Cancer Center.
Blood tests for cancer are a growing part of clinical oncology, but they remain in the early stages of development. To find small bits of cancer-derived DNA in the blood of cancer patients, scientists have frequently relied on DNA alterations found in patients’ biopsied tumor samples as guideposts for the genetic mistakes they should be looking for among the masses of DNA circulating in those patients’ blood samples.
To develop a cancer screening test that could be used to screen seemingly healthy people, scientists had to find novel ways to spot DNA alterations that could be lurking in a person’s blood but had not been previously identified.
“The challenge was to develop a blood test that could predict the probable presence of cancer without knowing the genetic mutations present in a person’s tumor,” says Velculescu.
The goal, adds Jillian Phallen, a graduate student at the Johns Hopkins Kimmel Cancer Center who was involved in the research, was to develop a screening test that is highly specific for cancer and accurate enough to detect the cancer when present, while reducing the risk of “false positive” results that often lead to unnecessary overtesting and overtreatments.
The task is notably complicated, says Phallen, by the need to sort between true cancer-derived mutations and genetic alterations that occur in blood cells and as part of normal, inherited variations in DNA.
As blood cells divide, for example, Velculescu says there is a chance these cells will acquire mistakes or mutations. In a small fraction of people, these changes will spur a blood cell to multiply faster than its neighboring cells, potentially leading to pre-leukemic conditions.
However, most of the time, the blood-derived mutations are not cancer-initiating.
His team also ruled out so-called “germline” mutations. While germline mutations are indeed alterations in DNA, they occur as a result of normal variations between individuals, and are not usually linked to particular cancers.
To develop the new test, Velculescu, Phallen and their colleagues obtained blood samples from 200 patients with breast, lung, ovarian and colorectal cancer. The scientists’ blood test screened the patients’ blood samples for mutations within 58 genes widely linked to various cancers.
Overall, the scientists were able to detect 86 of 138 (62 percent) stage I and II cancers. More specifically, among 42 people with colorectal cancer, the test correctly predicted cancer in half of the eight patients with stage I disease, eight of nine (89 percent) with stage II disease, nine of 10 (90 percent) with stage III and 14 of 15 (93 percent) with stage IV disease.
Of 71 people with lung cancer, the scientists’ test identified cancer among:
- 13 of 29 (45 percent) with stage I disease,
- 23 of 32 (72 percent) with stage II disease,
- three of four (75 percent) with stage III disease
- and five of six (83 percent) with stage IV cancer.
For 42 patients with ovarian cancer,
- 16 of 24 (67 percent) with stage I disease were correctly identified,
- as well as three of four (75 percent) with stage II disease,
- six of eight (75 percent) with stage III cancer
- and five of six (83 percent) with stage IV disease.
Among 45 breast cancer patients, the test spotted cancer-derived mutations in two of three (67 percent) patients with stage I disease, 17 of 29 (59 percent) with stage II disease and six of 13 (46 percent) with stage III cancers.
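The per-stage counts reported above can be combined to reproduce the overall early-stage figure. A quick sketch (the counts are taken from the article; the aggregation itself is mine):

```python
# Stage I and II detection counts reported in the article, as (detected, total)
# pairs per cancer type: [stage I, stage II].
stage_1_2 = {
    "colorectal": [(4, 8), (8, 9)],    # "half of the eight" stage I; 8 of 9 stage II
    "lung":       [(13, 29), (23, 32)],
    "ovarian":    [(16, 24), (3, 4)],
    "breast":     [(2, 3), (17, 29)],
}

detected = sum(d for counts in stage_1_2.values() for d, _ in counts)
total = sum(t for counts in stage_1_2.values() for _, t in counts)

print(f"{detected} of {total} ({detected / total:.0%})")  # 86 of 138 (62%)
```

This matches the article's reported overall sensitivity of 86 of 138 (62 percent) for stage I and II cancers.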
They found none of the cancer-derived mutations among blood samples of 44 healthy individuals.
Despite these initial promising results for early detection, the blood test needs to be validated in studies of much larger numbers of people, say the scientists.
Velculescu and his team also performed independent genomic sequencing on available tumors removed from 100 of the 200 patients with cancer and found that 82 (82 percent) had mutations in their tumors that correlated with the genetic alterations found in the blood.
The Johns Hopkins-developed blood test uses a type of genomic sequencing the researchers call “targeted error correction sequencing.” The sequencing method is based on deep sequencing, which reads each chemical code in DNA 30,000 times. “We’re trying to find the needle in the haystack, so when we do find a DNA alteration, we want to make sure it is what we think it is,” says Velculescu.
Such deep sequencing, covering more than 80,000 base pairs of DNA, has the potential to be very costly, but Velculescu says sequencing technology is becoming cheaper, and his research team may eventually be able to reduce the number of DNA locations they screen while preserving the test’s accuracy.
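The article does not spell out the error-correction algorithm, but the intuition behind calling a mutation from 30,000-fold coverage can be sketched as a toy consensus count: a true mutation should appear in a consistent fraction of the reads covering a position, while random sequencing errors stay near the machine's background error rate. The threshold and error rate below are illustrative assumptions, not parameters from the study:

```python
from collections import Counter

def call_variant(read_bases, reference_base, error_rate=0.001, min_fraction=0.01):
    """Toy consensus caller: report a non-reference base only if its
    frequency across reads clearly exceeds the expected error rate.
    error_rate and min_fraction are illustrative, not from the study."""
    counts = Counter(read_bases)
    depth = len(read_bases)
    calls = []
    for base, n in counts.items():
        if base == reference_base:
            continue
        fraction = n / depth
        # Require the alternate allele to sit well above background error.
        if fraction >= max(min_fraction, 10 * error_rate):
            calls.append((base, fraction))
    return calls

# 30,000 reads over one position: 29,000 reference 'A', a true 2% variant 'T',
# and scattered errors 'G'/'C' near the noise floor.
reads = ["A"] * 29000 + ["T"] * 600 + ["G"] * 250 + ["C"] * 150
print(call_variant(reads, "A"))  # only 'T' passes; 'G' and 'C' look like noise
```

In the real test, distinguishing true tumor-derived alterations from noise also involves filtering out blood-cell and germline mutations, as described above; this sketch only illustrates why extreme read depth helps.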
He says the populations that could benefit most from such a DNA-based blood test include those at high risk for cancer including smokers — for whom standard computed tomography scans for identifying lung cancer often lead to false positives — and women with hereditary mutations for breast and ovarian cancer within BRCA1 and BRCA2 genes.
Scientists who contributed to the research include:
- Mark Sausen, Derek Murphy, Sonya Parpart-Li, David Riley, Monica Nesselbush, Naomi Sengamalay, Andrew Georgiadis, Siân Jones and Sam Angiuoli from Personal Genome Diagnostics;
- Vilmos Adleff, Alessandro Leal, Carolyn Hruban, James White, Valsamo Anagnostou, Jacob Fiksel, Stephen Cristiano, Eniko Papp, Savannah Speir, Qing Kay Li, Robert B. Scharpf and Luis A. Diaz Jr. from Johns Hopkins;
- Thomas Reinert, Mai-Britt Worm Orntoft, Frank Viborg Mortensen, Torben Ørntoft and Claus Lindbjerg Andersen from Aarhus University Hospital, Denmark; Brian D. Woodward and Hatim Husain from the University of California, San Diego;
- Mogens Rørbæk Madsen from the Herning Regional Hospital, Denmark;
- Joost Huiskens and Cornelis Punt from the University of Amsterdam, The Netherlands;
- Nicole van Grieken from the VU University Medical Center, The Netherlands;
- Remond Fijneman and Gerrit Meijer from The Netherlands Cancer Institute and Hans Jørgen Nielsen from Hvidovre Hospital, Denmark.
Funding for the study was provided by:
- the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation;
- the Stand Up to Cancer-Dutch Cancer Society International Translational Cancer Research Dream Team Grant;
- the Commonwealth Foundation;
- the Cigarette Restitution Fund Program;
- the Burroughs Wellcome Fund;
- the Maryland-Genetics, Epidemiology and Medicine Training Program;
- the International Association for the Study of Lung Cancer/Prevent Cancer Foundation;
- the National Institutes of Health’s National Cancer Institute (grants CA121113, CA006973 and CA180950); the Danish Council for Independent Research;
- the Danish Council for Strategic Research; the Novo Nordisk Foundation; and the Danish Cancer Society.
Phallen, Sausen, Diaz and Velculescu are inventors on patent applications related to this research. Velculescu, a founder of Personal Genome Diagnostics and a member of its scientific advisory board and board of directors, owns Personal Genome Diagnostics stock, which is subject to certain restrictions under university policy.
Velculescu is also on the scientific advisory board for Ignyta. The terms of these arrangements are managed by The Johns Hopkins University in accordance with its conflict of interest policies.
"FDA Approves First-of-Its-Kind Cancer Treatment" (by WebMD)
(FDA: Food and Drug Administration)
YouTube Video: How CAR T-Cell Therapy Works
By WebMD August 30, 2017 -- The FDA has for the first time approved a treatment that uses a patient’s own genetically modified cells to attack a type of leukemia, opening the door to what the agency calls "a new frontier" in medicine.
The approval Wednesday allows a process known as CAR T-cell therapy to be used in children or young adults fighting an often fatal recurrence of the most common childhood cancer -- B-cell acute lymphoblastic leukemia.
And it clears the way for a new approach to fighting cancer by harnessing the body’s immune system -- a long-sought goal of medical researchers.
“This is a dream come true,” says Henry Fung, MD, director of the Fox Chase Cancer Center-Temple University Hospital Bone Marrow Transplant Program. “It’s now limited to one disease in children only, but that platform potentially can benefit a lot of different types of cancer patients, particularly blood cancer patients.”
'A New Frontier'
FDA Commissioner Scott Gottlieb, MD, called the approval of the therapy -- brand named Kymriah -- a "new frontier in medical innovation."
In a news conference on Wednesday, Gottlieb said the FDA had 76 active investigational new drug applications related to CAR T-cell products and more than 500 related to gene therapy products, which are being studied for a variety of ailments ranging from genetic disorders to autoimmune diseases, diabetes, cancer, and HIV.
"New technologies such as gene and cell therapies hold out the potential to transform medicine and create an inflection point in our ability to treat and even cure many intractable illnesses," Gottlieb says.
Fung, who's also vice chairman of hematology/oncology at Fox Chase, says the treatment could help patients beat back an illness that has resisted conventional treatments like chemotherapy and radiation, leaving them facing death. “This is the breakthrough of the century,” he says.
And Hetty Carraway, MD, an acute leukemia doctor at the Cleveland Clinic, says the newly approved therapy represents a first step for a new way of treating cancer. “If it can bring this kind of paradigm to other types of cancers, that’s really where I think the larger implications are,” she says.
Taking the Fight to Cancer:
B-cell acute lymphoblastic leukemia attacks the blood cells that make antibodies, which help your body fight off disease. Most of the time, it’s treated successfully with chemotherapy, radiation, or by transplants of bone marrow, which produces blood cells. But in some cases, treatment fails to beat back the cancer, or it comes back. When that happens, the odds of survival fall to as little as 1 in 10.
The new treatment is a one-time infusion developed by researchers at the University of Pennsylvania and the pharmaceutical company Novartis. Officially known as chimeric antigen receptor T-cell therapy, it starts with doctors extracting disease-fighting white blood cells, known as T cells, from a patient’s blood. The cells are frozen and shipped to a laboratory, where they’re genetically engineered to attack a specific protein on the cancerous B cells.
They’re then put back into the body, where they seek out and destroy cancer cells. And because they’re cells taken from the patient’s own body, there’s no need for anti-rejection drugs, which are needed after transplants.
“This is really combining everything together,” Fung says. “This is truly using patients’ own immune cells to fight cancer.”
Dangerous Side Effects Remain a Concern:
The therapy can have dangerous side effects -- mainly a condition known as cytokine release syndrome (CRS). That happens when T cells release a lot of a chemical messenger into the bloodstream. This affects the vascular system, causing high fevers and sharp drops in blood pressure. More than 60% of patients in clinical trials had side effects due to cytokine release, Novartis reported, but none of those reactions were fatal.
Emily Whitehead, the first pediatric patient to try the therapy in 2011, had such a bad reaction initially that she was in a coma for 14 days. Her doctors told the family to say their good-byes.
“They believed she had less than a 1-in-1,000 chance of surviving to the next morning,” says her father, Tom Whitehead.
As a last hope, doctors gave Emily the arthritis drug Actemra (tocilizumab), which blocks one of the main inflammatory signals driving the CRS. On Wednesday, the FDA also approved Actemra as a treatment for CRS.
In fact, under the conditions of approval, doctors can't use CAR T therapy unless they also have Actemra on hand to manage side effects.
Within 12 hours, Emily started to recover. She has been cancer free for five years.
Because of the side effects, Kymriah won't be available everywhere. Hospitals and clinics will have to be specially certified to administer the treatment. Doctors and other staff will also have to get additional training before they can prescribe it.
“We know and expect that type of side effect will happen, and we know that we can successfully manage it,” Carraway says. “But it needs to be managed by people who are familiar with this type of side effect and how best to support patients.”
Other side effects included anemia, nausea, diarrhea, and headaches.
In three trials involving about 150 people, the remission rates were 69%, 83%, and 95%. A total of 17 patients died after receiving the treatment; 14 of them from the disease and three from infections, according to documents the company filed with the FDA.
“We believe this treatment can change the world,” says Tom Whitehead, who frequently speaks about his daughter’s experience and testified before the FDA about the treatment. He also helps raise money for children’s cancer research through The Emily Whitehead Foundation. “But we know some children relapse and we know children who didn’t make it.”
Big Possibilities and a Big Price Tag:
Another concern is the price tag associated with the therapy: The process is reported to cost as much as $475,000.
In a press release, the Center for Medicare and Medicaid Services (CMS) announced that it was exploring "innovative payment modes and arrangements" for Kymriah and other potentially life-saving treatments.
In a news release, Novartis, the company that makes Kymriah, said it was collaborating with CMS on an outcomes-based approach to pricing, which would mean that the company would only be reimbursed if a patient responds to the therapy by the end of the first month of treatment.
“Certainly, it’s far and above the expense that we typically see for drugs,” Carraway says. But current treatments can also run into the low six figures, sometimes with little success. The number of patients with relapsed acute lymphoblastic leukemia is small, “and the options for them in their young lives are pretty limited.”
“Our hope is we’ll get better at making these medications, and hopefully, with time, the cost of this will decrease,” she adds.
Novartis spokeswoman Julie Masow says the company will do “everything we can” to help get the treatment to patients who need it.
“We are carefully considering the appropriate price for CTL019, taking into consideration the value that this treatment represents for patients, society, and the health care system, both near-term and long-term, as well as input from external health economic experts,” Masow says.
The therapy was produced “via pioneering technology and a sophisticated manufacturing process,” she says -- however, “We recognize our responsibility in bringing this innovative treatment to patients.”
'He's Started School'
One of the more recent patients to have CAR T-cell therapy is 5-year-old Liam Thistlethwaite. He has been cancer free for 4 months since starting the therapy to treat his acute lymphoblastic leukemia.
First diagnosed shortly before his second birthday, Liam had gotten 32 months of different kinds of chemotherapy drugs to poison the cancer out of his small body. The treatment is harsh but almost always successful. Doctors told Liam’s parents he had a 96% chance of a cure if he could finish it.
But 8 months later, Liam’s cancer came back, with a vengeance. Leukemia cells spread to his spinal fluid. Tumors grew on two glands in his brain.
Liam’s doctor, Ching-Hon Pui, MD, chairman of the Oncology Department at St. Jude, had recently been to a medical conference that discussed the results of the CAR T-cell therapy.
He convinced Children’s Hospital of Philadelphia to put him on its waiting list, which was about 6 months long at the time.
Because Liam was relatively healthy and had a low cancer burden when he was treated, his father thinks he avoided some of the most severe side effects of the therapy. He spiked very high fevers and spent a few days in the hospital but pulled through.
“He’s started school. He’s doing wonderfully,” says Patrick Thistlethwaite.
Despite Optimism, Major Questions Remain:
One of the unanswered questions is how long CAR T cells can last in the body. In some patients, they’ve persisted for as long as 5 years. Others have their cells die in weeks or months. Another big question is whether the cancer will come back if the CAR T cells are gone.
The Thistlethwaites say it was very hard to know whether to try CAR T on a toddler.
“Our physician truly felt that we’d have the same odds, so to speak, as going into a stem cell transplant with heavy radiation. He believed CAR T to have high side effects up front, but no high long-term side effects," Patrick Thistlethwaite says.
They knew radiation to Liam’s brain and spinal cord could cause long-term damage.
“We still have those options,” Patrick says. “We hope we never have to use them.”
“We hope CAR T is the end of it all.”
[End of Article]
The approval Wednesday allows a process known as CAR T-cell therapy to be used in children or young adults fighting an often fatal recurrence of the most common childhood cancer -- B-cell acute lymphoblastic leukemia.
And it clears the way for a new approach to fighting cancer by harnessing the body’s immune system -- a long-sought goal of medical researchers.
“This is a dream come true,” says Henry Fung, MD, director of the Fox Chase Cancer Center-Temple University Hospital Bone Marrow Transplant Program. “It’s now limited to one disease in children only, but that platform potentially can benefit a lot of different types of cancer patients, particularly blood cancer patients.”
'A New Frontier'
FDA Commissioner Scott Gottlieb, MD, called the approval of the therapy -- brand named Kymriah -- a "new frontier in medical innovation."
In a news conference on Wednesday, Gottlieb said the FDA had 76 active investigational new drug applications related to CAR T-cell products, and more than 500 for gene therapy products are being studied for a variety of ailments, ranging from genetic disorders to autoimmune diseases, diabetes, cancer, and HIV.
"New technologies such as gene and cell therapies hold out the potential to transform medicine and create an inflection point in our ability to treat and even cure many intractable illnesses," Gottlieb says.
Fung, who's also vice chairman of hematology/oncology at Fox Chase, says the treatment could help patients beat back an illness that has resisted conventional treatments like chemotherapy and radiation, leaving them facing death. “This is the breakthrough of the century,” he says.
And Hetty Carraway, MD, an acute leukemia doctor at the Cleveland Clinic, says the newly approved therapy represents a first step for a new way of treating cancer. “If it can bring this kind of paradigm to other types of cancers, that’s really where I think the larger implications are,” she says.
Taking the Fight to Cancer:
B-cell acute lymphoblastic leukemia attacks the blood cells that make antibodies, which help your body fight off disease. Most of the time, it’s treated successfully with chemotherapy, radiation, or by transplants of bone marrow, which produces blood cells. But in some cases, treatment fails to beat back the cancer, or it comes back. When that happens, the odds of survival fall to as little as 1 in 10.
The new treatment is a one-time infusion developed by researchers at the University of Pennsylvania and the pharmaceutical company Novartis. Officially known as chimeric antigen receptor T-cell therapy, it starts with doctors extracting disease-fighting white blood cells, known as T cells, from a patient’s blood. The cells are frozen and shipped to a laboratory, where they’re genetically engineered to attack a specific protein on the cancerous B cells.
They’re then put back into the body, where they seek out and destroy cancer cells. And because they’re cells taken from the patient’s own body, there’s no need for anti-rejection drugs, which are needed after transplants.
“This is really combining everything together,” Fung says. “This is truly using patients’ own immune cells to fight cancer.”
Dangerous Side Effects Remain a Concern:
The therapy can have dangerous side effects -- mainly a condition known as cytokine release syndrome (CRS). That happens when the engineered T cells release large amounts of chemical messengers called cytokines into the bloodstream. This affects the vascular system, causing high fevers and sharp drops in blood pressure. More than 60% of patients in clinical trials had side effects due to cytokine release, Novartis reported, but none of those reactions were fatal.
Emily Whitehead, the first pediatric patient to try the therapy in 2011, had such a bad reaction initially that she was in a coma for 14 days. Her doctors told the family to say their good-byes.
“They believed she had less than a 1-in-1,000 chance of surviving to the next morning,” says her father, Tom Whitehead.
As a last hope, doctors gave Emily the arthritis drug Actemra (tocilizumab), which blocks one of the main inflammatory signals driving the CRS. On Wednesday, the FDA also approved Actemra as a treatment for CRS.
In fact, under the conditions of approval, doctors can't use CAR T therapy unless they also have Actemra on hand to manage side effects.
Within 12 hours, Emily started to recover. She has been cancer free for five years.
Because of the side effects, Kymriah won't be available everywhere. Hospitals and clinics will have to be specially certified to administer the treatment. Doctors and other staff will also have to get additional training before they can prescribe it.
“We know and expect that type of side effect will happen, and we know that we can successfully manage it,” Carraway says. “But it needs to be managed by people who are familiar with this type of side effect and how best to support patients.”
Other side effects included anemia, nausea, diarrhea, and headaches.
In three trials involving about 150 people, the remission rates were 69%, 83%, and 95%. A total of 17 patients died after receiving the treatment; 14 of them from the disease and three from infections, according to documents the company filed with the FDA.
“We believe this treatment can change the world,” says Tom Whitehead, who frequently speaks about his daughter’s experience and testified before the FDA about the treatment. He also helps raise money for children’s cancer research through The Emily Whitehead Foundation. “But we know some children relapse and we know children who didn’t make it.”
Big Possibilities and a Big Price Tag:
Another concern is the price tag associated with the therapy: The process is reported to cost as much as $475,000.
In a press release, the Centers for Medicare & Medicaid Services (CMS) announced that it was exploring "innovative payment modes and arrangements" for Kymriah and other potentially life-saving treatments.
In a news release, Novartis, the company that makes Kymriah, said it was collaborating with CMS on an outcomes-based approach to pricing, which would mean that the company would only be reimbursed if a patient responds to the therapy by the end of the first month of treatment.
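The outcomes-based arrangement described above amounts to a simple contractual rule: the payer owes the list price only if the patient has responded by the one-month checkpoint. A minimal sketch of that rule (the function name and the use of the reported $475,000 list price are illustrative assumptions, not Novartis's actual contract terms):

```python
# Hypothetical sketch of an outcomes-based reimbursement rule:
# the manufacturer is paid only if the patient responds to the
# therapy by the end of the first month of treatment.

LIST_PRICE = 475_000  # reported price of the therapy, in USD

def reimbursement(responded_by_day_30: bool) -> int:
    """Return the amount owed under an outcomes-based contract."""
    return LIST_PRICE if responded_by_day_30 else 0
```

Under this scheme, `reimbursement(True)` bills the full list price, while a non-responding patient generates no charge at all -- the whole financial risk of non-response sits with the manufacturer.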
“Certainly, it’s far and above the expense that we typically see for drugs,” Carraway says. But current treatments can also run into the low six figures, sometimes with little success. The number of patients with relapsed acute lymphoblastic leukemia is small, “and the options for them in their young lives are pretty limited.”
“Our hope is we’ll get better at making these medications, and hopefully, with time, the cost of this will decrease,” she adds.
Novartis spokeswoman Julie Masow says the company will do “everything we can” to help get the treatment to patients who need it.
“We are carefully considering the appropriate price for CTL019, taking into consideration the value that this treatment represents for patients, society, and the health care system, both near-term and long-term, as well as input from external health economic experts,” Masow says.
The therapy was produced “via pioneering technology and a sophisticated manufacturing process,” she says, adding, “We recognize our responsibility in bringing this innovative treatment to patients.”
'He's Started School'
One of the more recent patients to have CAR T-cell therapy is 5-year-old Liam Thistlethwaite. He has been cancer free for 4 months since starting the therapy to treat his acute lymphoblastic leukemia.
First diagnosed shortly before his second birthday, Liam had gotten 32 months of different kinds of chemotherapy drugs to poison the cancer out of his small body. The treatment is harsh but almost always successful. Doctors told Liam’s parents he had a 96% chance of a cure if he could finish it.
But 8 months later, Liam’s cancer came back, with a vengeance. Leukemia cells spread to his spinal fluid. Tumors grew on two glands in his brain.
Liam’s doctor, Ching-Hon Pui, MD, chairman of the Oncology Department at St. Jude, had recently been to a medical conference that discussed the results of the CAR T-cell therapy.
He convinced Children’s Hospital of Philadelphia to put Liam on its waiting list, which was about 6 months long at the time.
Because Liam was relatively healthy and had a low cancer burden when he was treated, his father thinks he avoided some of the most severe side effects of the therapy. He spiked very high fevers and spent a few days in the hospital but pulled through.
“He’s started school. He’s doing wonderfully,” says Patrick Thistlethwaite.
Despite Optimism, Major Questions Remain:
One of the unanswered questions is how long CAR T cells can last in the body. In some patients, they’ve persisted for as long as 5 years. Others have their cells die in weeks or months. Another big question is whether the cancer will come back if the CAR T cells are gone.
The Thistlethwaites say it was very hard to know whether to try CAR T on a toddler.
“Our physician truly felt that we’d have the same odds, so to speak, as going into a stem cell transplant with heavy radiation. He believed CAR T to have high side effects up front, but no high long-term side effects," Patrick Thistlethwaite says.
They knew radiation to Liam’s brain and spinal cord could cause long-term damage.
“We still have those options,” Patrick says. “We hope we never have to use them.”
“We hope CAR T is the end of it all.”
[End of Article]
[Your Webhost: I actually had a pacemaker implanted into my chest recently. Since then, I have been able to resume my cardio workout at the gym, thanks to the Medtronic device!]
Artificial Cardiac Pacemakers
YouTube Video: What Is a Pacemaker and How Does It Work?
Pictured: (L) position of pacemaker, (R) pacemaker
A pacemaker (or artificial pacemaker, so as not to be confused with the heart's natural pacemaker) is a medical device which uses electrical impulses, delivered by electrodes in contact with the heart muscles, to regulate the beating of the heart.
The primary purpose of a pacemaker is to maintain an adequate heart rate, either because the heart's natural pacemaker is not fast enough, or because there is a block in the heart's electrical conduction system.
Modern pacemakers are externally programmable and allow a cardiologist to select the optimum pacing modes for individual patients. Some combine a pacemaker and defibrillator in a single implantable device. Others have multiple electrodes stimulating differing positions within the heart to improve synchronization of the lower chambers (ventricles) of the heart.
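The rate-maintenance behavior described above can be illustrated with a toy "demand pacing" rule: the device fires only when the heart's own beat is overdue for the programmed minimum rate. This is a simplified sketch of the concept, not real device firmware:

```python
# Toy illustration of demand pacing: the device paces only when the
# heart's intrinsic rhythm falls below the programmed minimum rate.
# Real pacemaker firmware is far more sophisticated than this.

def should_pace(ms_since_last_beat: float, min_rate_bpm: float = 60.0) -> bool:
    """Fire a pacing pulse if the natural beat is overdue for the
    programmed minimum heart rate."""
    max_interval_ms = 60_000.0 / min_rate_bpm  # e.g. 1000 ms at 60 bpm
    return ms_since_last_beat > max_interval_ms
```

For example, with a programmed minimum of 60 bpm, a natural beat arriving 800 ms after the last one suppresses pacing, while a 1,200 ms gap triggers a pulse. Externally programmable devices let the cardiologist adjust the `min_rate_bpm`-style parameter for each patient.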
Click on any of the following blue hyperlinks for more about Artificial Cardiac Pacemakers:
- Methods of pacing
- Basic function
- Biventricular pacing
- Advancements in function
- Considerations
- Other devices
- History
- See also:
- Biological pacemaker
- Button cell
- Electrical conduction system of the heart
- Implantable cardioverter-defibrillator
- Infective endocarditis
- Pacemaker syndrome
- WiTricity
- Qi (inductive power standard)
- Detecting and Distinguishing Cardiac Pacing Artifacts
- Implantable Cardioverter Defibrillator from National Heart, Lung and Blood Institute
(U.S.) National Institutes of Health (NIH), including Medical Advancements to Eliminate Forms of Cancer
YouTube Video: Jim Parsons* on Documentary "First In Human"
Pictured: Aerial photograph from the north of the Mark O. Hatfield Clinical Research Center (Building 10) on the National Institutes of Health Bethesda, Maryland campus.
The National Institutes of Health (NIH) is the primary agency of the United States government responsible for biomedical and public health research, founded in the late 1880s. It is part of the United States Department of Health and Human Services, with facilities mainly located in Bethesda, Maryland.
The NIH conducts its own scientific research through its Intramural Research Program (IRP) and provides major biomedical research funding to non-NIH research facilities through its Extramural Research Program.
As of 2013, the IRP had 1,200 principal investigators and more than 4,000 postdoctoral fellows in basic, translational, and clinical research, making it the largest biomedical research institution in the world; as of 2003, the extramural arm provided 28% of biomedical research funding spent annually in the U.S., or about US$26.4 billion.
The NIH comprises 27 separate institutes and centers of different biomedical disciplines and is responsible for many scientific accomplishments, including the discovery that fluoride prevents tooth decay, the use of lithium to manage bipolar disorder, and the creation of vaccines against hepatitis, Haemophilus influenzae type b (Hib), and human papillomavirus (HPV).
Click on any of the following blue hyperlinks for more about The National Institutes of Health (NIH):
- History
- Directors
- Locations and campuses
- Research
- Funding
- Stakeholders
- Commercial partnerships
- Institutes and centers
- See also:
- List of institutes and centers of the National Institutes of Health
- United States Public Health Service
- National Institutes of Health Stroke Scale
- Heads of International Research Organizations
- NIH Toolbox
- Official website
- National Institutes of Health in the Federal Register
- Regional Medical Programs Collection of information on NIH's Regional Medical Programs, from the National Library of Medicine
Medical Advancements to Eliminate Forms of Cancer (Fact Sheet by NIH)
Click on the following blue hyperlinks for more about each topic listed below:
- What is the immune system?
- Are cancer cells recognized by the immune system?
- What are vaccines?
- What are cancer vaccines?
- How do cancer preventive vaccines work?
- What cancer preventive vaccines are approved in the United States?
- How are cancer treatment vaccines designed to work?
- Has the FDA approved any cancer treatment vaccines?
- How are cancer vaccines made?
- Are adjuvants used with cancer vaccines?
- Do cancer vaccines have side effects?
- Can cancer treatment vaccines be combined with other types of cancer therapy?
- What additional research is under way to improve cancer treatment vaccines?
- What types of vaccines are being studied in clinical trials?
Pharmaceutical Industry ("Big Pharma") including List of pharmaceutical companies
TOP: US Pharmaceutical Pricing: An Overview
BOTTOM: Why do Americans spend so much on pharmaceuticals?
- YouTube Video: Top 10 Largest Pharmaceutical Companies
- YouTube Video: The Facts on America's Opioid Epidemic (NY Times)
- YouTube Video: How Does the FDA Approve a Drug?
Click here for a List of Pharmaceutical Companies that comprise "Big Pharma".
The pharmaceutical industry discovers, develops, produces, and markets drugs for use as medications. Pharmaceutical companies may deal in generic or brand medications and medical devices. They are subject to a variety of laws and regulations that govern the patenting, testing, safety, efficacy, and marketing of drugs.
Research and development:
Main articles: Drug discovery and Drug development
Drug discovery is the process by which potential drugs are discovered or designed. In the past most drugs have been discovered either by isolating the active ingredient from traditional remedies or by serendipitous discovery. Modern biotechnology often focuses on understanding the metabolic pathways related to a disease state or pathogen, and manipulating these pathways using molecular biology or biochemistry.
A great deal of early-stage drug discovery has traditionally been carried out by universities and research institutions.
Drug development refers to activities undertaken after a compound is identified as a potential drug in order to establish its suitability as a medication. Objectives of drug development are to determine appropriate formulation and dosing, as well as to establish safety. Research in these areas generally includes a combination of in vitro studies, in vivo studies, and clinical trials. The cost of late stage development has meant it is usually done by the larger pharmaceutical companies.
Often, large multinational corporations exhibit vertical integration, participating in a broad range of drug discovery and development, manufacturing and quality control, marketing, sales, and distribution.
Smaller organizations, on the other hand, often focus on a specific aspect such as discovering drug candidates or developing formulations. Often, collaborative agreements between research organizations and large pharmaceutical companies are formed to explore the potential of new drug substances.
More recently, multi-nationals are increasingly relying on contract research organizations to manage drug development.
The cost of innovation:
Drug discovery and development are very expensive; of all the compounds investigated for use in humans, only a small fraction are eventually approved in most nations by government-appointed medical institutions or boards, which must approve new drugs before they can be marketed in those countries.
In 2010, the FDA approved 18 new molecular entities (NMEs) and three biologics, or 21 in total, down from 26 in 2009 and 24 in 2008. On the other hand, there were only 18 approvals in total in 2007 and 22 back in 2006.
Since 2001, the Center for Drug Evaluation and Research has averaged 22.9 approvals a year. This approval comes only after heavy investment in pre-clinical development and clinical trials, as well as a commitment to ongoing safety monitoring.
Drugs which fail part-way through this process often incur large costs while generating no revenue in return. If the cost of these failed drugs is taken into account, the cost of developing a successful new drug (new chemical entity, or NCE) has been estimated at about US$1.3 billion, not including marketing expenses. Professors Light and Lexchin reported in 2012, however, that the rate of approval for new drugs has held at a relatively stable average of 15 to 25 per year for decades.
Industry-wide research and investment reached a record $65.3 billion in 2009. While the cost of research in the U.S. grew by about $34.2 billion between 1995 and 2010, revenues grew faster, rising by $200.4 billion in that time.
A study by the consulting firm Bain & Company reported that the cost for discovering, developing and launching (which factored in marketing and other business expenses) a new drug (along with the prospective drugs that fail) rose over a five-year period to nearly $1.7 billion in 2003. According to Forbes, by 2010 development costs were between $4 billion and $11 billion per drug.
Some of these estimates also take into account the opportunity cost of investing capital many years before revenues are realized (see Time-value of money). Because of the very long time needed for discovery, development, and approval of pharmaceuticals, these costs can accumulate to nearly half the total expense.
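The time-value effect can be made concrete with a short compounding calculation: each year's out-of-pocket spend is carried forward at the cost of capital until launch. The spend level, timeline, and 11% rate below are illustrative assumptions, not industry figures:

```python
# Illustrative only: how out-of-pocket R&D spending grows once each
# year's outlay is compounded at the cost of capital until launch.
# The annual spend, timeline, and rate are assumptions for the sketch.

def capitalized_cost(annual_spend: float, years: int, rate: float) -> float:
    """Compound each year's spend forward to the launch year."""
    return sum(annual_spend * (1 + rate) ** (years - y)
               for y in range(1, years + 1))

out_of_pocket = 100.0 * 10                       # e.g. $100M/year for 10 years
capitalized = capitalized_cost(100.0, 10, 0.11)  # roughly 1,672 at an 11% rate
# The financing component (capitalized - out_of_pocket, roughly 672 here)
# is about 40% of the capitalized total, which is how opportunity cost
# can approach half of the headline "cost per drug".
```

Lengthening the timeline or raising the assumed cost of capital makes the financing share larger still, which is why estimates that capitalize costs differ so sharply from out-of-pocket tallies.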
A direct consequence within the pharmaceutical industry value chain is that major pharmaceutical multinationals tend to increasingly outsource risks related to fundamental research, which somewhat reshapes the industry ecosystem with biotechnology companies playing an increasingly important role, and overall strategies being redefined accordingly.
Some approved drugs, such as those based on re-formulation of an existing active ingredient (also referred to as line extensions), are much less expensive to develop.
Controversies:
Due to repeated accusations and findings that some clinical trials conducted or funded by pharmaceutical companies may report only positive results for the preferred medication, the industry has been looked at much more closely by independent groups and government agencies.
In response to specific cases in which unfavorable data from pharmaceutical company-sponsored research was not published, the Pharmaceutical Research and Manufacturers of America have published new guidelines urging companies to report all findings and limit the financial involvement in drug companies of researchers.
The US Congress passed a law requiring phase II and phase III clinical trials to be registered by the sponsor on the ClinicalTrials.gov website run by the NIH.
Drug researchers not directly employed by pharmaceutical companies often look to companies for grants, and companies often look to researchers for studies that will make their products look favorable. Sponsored researchers are rewarded by drug companies, for example with support for their conference/symposium costs. Lecture scripts and even journal articles presented by academic researchers may actually be "ghost-written" by pharmaceutical companies.
An investigation by ProPublica found that at least 21 doctors have been paid more than $500,000 for speeches and consulting by drugs manufacturers since 2009, with half of the top earners working in psychiatry, and about $2 billion in total paid to doctors for such services. AstraZeneca, Johnson & Johnson and Eli Lilly have paid billions of dollars in federal settlements over allegations that they paid doctors to promote drugs for unapproved uses. Some prominent medical schools have since tightened rules on faculty acceptance of such payments by drug companies.
In contrast to this viewpoint, an article and associated editorial in the New England Journal of Medicine in May 2015 emphasized the importance of pharmaceutical industry-physician interactions for the development of novel treatments, and argued that moral outrage over industry malfeasance had unjustifiably led many to overemphasize the problems created by financial conflicts of interest.
The article noted that major healthcare organizations such as National Center for Advancing Translational Sciences of the National Institutes of Health, the President's Council of Advisors on Science and Technology, the World Economic Forum, the Gates Foundation, the Wellcome Trust, and the Food and Drug Administration had encouraged greater interactions between physicians and industry in order to bring greater benefits to patients.
Product approval:
In the United States, new pharmaceutical products must be approved by the Food and Drug Administration (FDA) as being both safe and effective. This process generally involves submission of an Investigational New Drug filing with sufficient pre-clinical data to support proceeding with human trials. Following IND approval, three phases of progressively larger human clinical trials may be conducted.
Phase I generally studies toxicity using healthy volunteers. Phase II can include pharmacokinetics and dosing in patients, and Phase III is a very large study of efficacy in the intended patient population.
Following the successful completion of phase III testing, a New Drug Application is submitted to the FDA. The FDA reviews the data and, if the product is seen as having a positive benefit-risk assessment, grants approval to market the product in the US.
A fourth phase of post-approval surveillance is also often required, because even the largest clinical trials cannot effectively predict the prevalence of rare side effects.
Postmarketing surveillance ensures that after marketing the safety of a drug is monitored closely. In certain instances, its indication may need to be limited to particular patient groups, and in others the substance is withdrawn from the market completely. The FDA provides information about approved drugs at the Orange Book site.
In the UK, the Medicines and Healthcare Products Regulatory Agency approves drugs for use, though the evaluation is done by the European Medicines Agency, an agency of the European Union based in London. Normally an approval in the UK and other European countries comes later than one in the USA. Then it is the National Institute for Health and Care Excellence (NICE), for England and Wales, who decides if and how the National Health Service (NHS) will allow (in the sense of paying for) their use. The British National Formulary is the core guide for pharmacists and clinicians.
In many non-US western countries a 'fourth hurdle' of cost effectiveness analysis has developed before new technologies can be provided. This focuses on the efficiency (in terms of the cost per QALY) of the technologies in question rather than their efficacy. In England and Wales NICE decides whether and in what circumstances drugs and technologies will be made available by the NHS, whilst similar arrangements exist with the Scottish Medicines Consortium in Scotland, and the Pharmaceutical Benefits Advisory Committee in Australia. A product must pass the threshold for cost-effectiveness if it is to be approved.
Treatments must represent 'value for money' and a net benefit to society.
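The cost-per-QALY comparison described above reduces to an incremental cost-effectiveness ratio (ICER) checked against a willingness-to-pay threshold. A minimal sketch of that screen (the threshold value and example figures are hypothetical, not taken from any actual NICE appraisal):

```python
# Illustrative cost-per-QALY screen of the kind used in
# "fourth hurdle" appraisals. All numbers are hypothetical.

def icer(extra_cost: float, extra_qalys: float) -> float:
    """Incremental cost-effectiveness ratio: cost per QALY gained
    versus the existing standard of care."""
    return extra_cost / extra_qalys

def passes_threshold(extra_cost: float, extra_qalys: float,
                     threshold_per_qaly: float = 30_000.0) -> bool:
    """A technology clears the hurdle if its cost per QALY is at or
    below the payer's willingness-to-pay threshold."""
    return icer(extra_cost, extra_qalys) <= threshold_per_qaly
```

For instance, a drug costing 40,000 more than standard care while adding 2 QALYs works out at 20,000 per QALY and clears a 30,000 threshold; the same cost for 1 extra QALY does not. Note the screen measures efficiency (cost per unit of benefit), not efficacy itself.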
Orphan drugs:
Main article: Orphan drug
There are special rules for certain rare diseases ("orphan diseases") in several major drug regulatory territories. For example, diseases involving fewer than 200,000 patients in the United States (or larger populations in certain circumstances) are subject to the Orphan Drug Act.
Because medical research and development of drugs to treat such diseases is financially disadvantageous, companies that do so are rewarded with tax reductions, fee waivers, and market exclusivity on that drug for a limited time (seven years), regardless of whether the drug is protected by patents.
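The prevalence test in the Orphan Drug Act can be sketched as a simple eligibility check. This ignores the Act's alternative pathways (such as larger populations where development costs cannot reasonably be recovered) and is only an illustration:

```python
# Simplified sketch of the US orphan-drug prevalence test.
# The real statute has additional qualifying pathways this ignores.

ORPHAN_PREVALENCE_LIMIT = 200_000   # US patients
MARKET_EXCLUSIVITY_YEARS = 7        # granted on approval, independent of patents

def qualifies_as_orphan(us_patient_count: int) -> bool:
    """Prevalence-based test: fewer than 200,000 affected US patients."""
    return us_patient_count < ORPHAN_PREVALENCE_LIMIT
```

The seven-year exclusivity is the key incentive: it applies even when the drug has no remaining patent protection, which is what makes development for tiny patient populations commercially viable.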
Click on any of the following blue hyperlinks for more about the Pharmaceutical Industry:
The pharmaceutical industry discovers, develops, produces, and markets drugs or pharmaceutical drugs for use as medications. Pharmaceutical companies may deal in generic or brand medications and medical devices. They are subject to a variety of laws and regulations that govern the patenting, testing, safety, efficacy and marketing of drugs.
Research and development:
Main articles: Drug discovery and Drug development
Drug discovery is the process by which potential drugs are discovered or designed. In the past most drugs have been discovered either by isolating the active ingredient from traditional remedies or by serendipitous discovery. Modern biotechnology often focuses on understanding the metabolic pathways related to a disease state or pathogen, and manipulating these pathways using molecular biology or biochemistry.
A great deal of early-stage drug discovery has traditionally been carried out by universities and research institutions.
Drug development refers to activities undertaken after a compound is identified as a potential drug in order to establish its suitability as a medication. Objectives of drug development are to determine appropriate formulation and dosing, as well as to establish safety. Research in these areas generally includes a combination of in vitro studies, in vivo studies, and clinical trials. The cost of late stage development has meant it is usually done by the larger pharmaceutical companies.
Often, large multinational corporations exhibit vertical integration, participating in a broad range of drug discovery and development, manufacturing and quality control, marketing, sales, and distribution.
Smaller organizations, on the other hand, often focus on a specific aspect such as discovering drug candidates or developing formulations. Often, collaborative agreements between research organizations and large pharmaceutical companies are formed to explore the potential of new drug substances.
More recently, multi-nationals are increasingly relying on contract research organizations to manage drug development.
The cost of innovation:
Drug discovery and development is very expensive; of all compounds investigated for use in humans only a small fraction are eventually approved in most nations by government appointed medical institutions or boards, who have to approve new drugs before they can be marketed in those countries.
In 2010 18 NMEs (New Molecular Entities) were approved and three biologics by the FDA, or 21 in total, which is down from 26 in 2009 and 24 in 2008. On the other hand, there were only 18 approvals in total in 2007 and 22 back in 2006.
Since 2001, the Center for Drug Evaluation and Research has averaged 22.9 approvals a year. This approval comes only after heavy investment in pre-clinical development and clinical trials, as well as a commitment to ongoing safety monitoring.
Drugs which fail part-way through this process often incur large costs, while generating no revenue in return. If the cost of these failed drugs is taken into account, the cost of developing a successful new drug (new chemical entity, or NCE), has been estimated at about US$1.3 billion, not including marketing expenses). Professors Light and Lexchin reported in 2012, however, that the rate of approval for new drugs has been a relatively stable average rate of 15 to 25 for decades.
Industry-wide research and investment reached a record $65.3 billion in 2009.[78] While the cost of research in the U.S. was about $34.2 billion between 1995 and 2010, revenues rose faster (revenues rose by $200.4 billion in that time).
A study by the consulting firm Bain & Company reported that the cost for discovering, developing and launching (which factored in marketing and other business expenses) a new drug (along with the prospective drugs that fail) rose over a five-year period to nearly $1.7 billion in 2003. According to Forbes, by 2010 development costs were between $4 billion to $11 billion per drug.
Some of these estimates also take into account the opportunity cost of investing capital many years before revenues are realized (see Time-value of money). Because of the very long time needed for discovery, development, and approval of pharmaceuticals, these costs can accumulate to nearly half the total expense.
A direct consequence within the pharmaceutical industry value chain is that major pharmaceutical multinationals tend to increasingly outsource risks related to fundamental research, which somewhat reshapes the industry ecosystem with biotechnology companies playing an increasingly important role, and overall strategies being redefined accordingly.
Some approved drugs, such as those based on re-formulation of an existing active ingredient (also referred to as Line-extensions) are much less expensive to develop.
Controversies:
Due to repeated accusations and findings that some clinical trials conducted or funded by pharmaceutical companies may report only positive results for the preferred medication, the industry has been looked at much more closely by independent groups and government agencies.
In response to specific cases in which unfavorable data from pharmaceutical company-sponsored research was not published, the Pharmaceutical Research and Manufacturers of America have published new guidelines urging companies to report all findings and limit the financial involvement in drug companies of researchers.
US congress signed into law a bill which requires phase II and phase III clinical trials to be registered by the sponsor on the clinicaltrials.gov website run by the NIH.
Drug researchers not directly employed by pharmaceutical companies often look to companies for grants, and companies often look to researchers for studies that will make their products look favorable. Sponsored researchers are rewarded by drug companies, for example with support for their conference/symposium costs. Lecture scripts and even journal articles presented by academic researchers may actually be "ghost-written" by pharmaceutical companies.
An investigation by ProPublica found that at least 21 doctors have been paid more than $500,000 for speeches and consulting by drugs manufacturers since 2009, with half of the top earners working in psychiatry, and about $2 billion in total paid to doctors for such services. AstraZeneca, Johnson & Johnson and Eli Lilly have paid billions of dollars in federal settlements over allegations that they paid doctors to promote drugs for unapproved uses. Some prominent medical schools have since tightened rules on faculty acceptance of such payments by drug companies.
In contrast to this viewpoint, an article and associated editorial in the New England Journal of Medicine in May 2015 emphasized the importance of pharmaceutical industry-physician interactions for the development of novel treatments, and argued that moral outrage over industry malfeasance had unjustifiably led many to overemphasize the problems created by financial conflicts of interest.
The article noted that major healthcare organizations such as National Center for Advancing Translational Sciences of the National Institutes of Health, the President's Council of Advisors on Science and Technology, the World Economic Forum, the Gates Foundation, the Wellcome Trust, and the Food and Drug Administration had encouraged greater interactions between physicians and industry in order to bring greater benefits to patients.
Product approval:
In the United States, new pharmaceutical products must be approved by the Food and Drug Administration (FDA) as being both safe and effective. This process generally involves submission of an Investigational New Drug filing with sufficient pre-clinical data to support proceeding with human trials. Following IND approval, three phases of progressively larger human clinical trials may be conducted.
Phase I generally studies toxicity using healthy volunteers. Phase II can include pharmacokinetics and dosing in patients, and Phase III is a very large study of efficacy in the intended patient population.
Following the successful completion of phase III testing, a New Drug Application is submitted to the FDA. The FDA reviews the data and, if the product is seen as having a positive benefit-risk assessment, grants approval to market the product in the US.
A fourth phase of post-approval surveillance is also often required, because even the largest clinical trials cannot effectively predict the prevalence of rare side effects.
Postmarketing surveillance ensures that the safety of a drug is monitored closely after it reaches the market. In certain instances, its indication may need to be limited to particular patient groups, and in others the substance is withdrawn from the market completely. The FDA provides information about approved drugs at the Orange Book site.
In the UK, the Medicines and Healthcare products Regulatory Agency approves drugs for use, though the evaluation is done by the European Medicines Agency, an agency of the European Union based in London. Normally an approval in the UK and other European countries comes later than one in the USA. The National Institute for Health and Care Excellence (NICE) then decides, for England and Wales, whether and how the National Health Service (NHS) will allow (in the sense of paying for) their use. The British National Formulary is the core guide for pharmacists and clinicians.
In many non-US western countries a 'fourth hurdle' of cost effectiveness analysis has developed before new technologies can be provided. This focuses on the efficiency (in terms of the cost per QALY) of the technologies in question rather than their efficacy. In England and Wales NICE decides whether and in what circumstances drugs and technologies will be made available by the NHS, whilst similar arrangements exist with the Scottish Medicines Consortium in Scotland, and the Pharmaceutical Benefits Advisory Committee in Australia. A product must pass the threshold for cost-effectiveness if it is to be approved.
Treatments must represent 'value for money' and a net benefit to society.
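The cost-per-QALY comparison rests on a simple ratio: the incremental cost-effectiveness ratio (ICER), the extra cost of a new treatment divided by the extra QALYs it gains over the existing one, which appraisal bodies then weigh against a threshold. A minimal sketch; the figures and the `icer` helper are hypothetical, and real appraisals involve detailed modelling and discounting:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# A hypothetical new therapy costs 40,000 and yields 3.5 QALYs;
# the comparator costs 10,000 and yields 2.0 QALYs:
ratio = icer(40000, 3.5, 10000, 2.0)
print(ratio)  # 20000.0, i.e. 20,000 per additional QALY
```

A technology whose ICER falls below the decision threshold is judged cost-effective; one above it usually is not, regardless of how clinically effective it is.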
Orphan drugs:
Main article: Orphan drug
There are special rules for certain rare diseases ("orphan diseases") in several major drug regulatory territories. For example, diseases involving fewer than 200,000 patients in the United States, or larger populations in certain circumstances, are subject to the Orphan Drug Act.
Because medical research and development of drugs to treat such diseases is financially disadvantageous, companies that do so are rewarded with tax reductions, fee waivers, and market exclusivity on that drug for a limited time (seven years), regardless of whether the drug is protected by patents.
Click on any of the following blue hyperlinks for more about the Pharmaceutical Industry:
- History
- Global sales
- Marketing
- Controversy about drug marketing and lobbying
- Developing world
- See also:
Generic Medications and their Manufacturers
YouTube Video: Meet Blue: Generic Medications PSA
Pictured: 2017 Generic Drug Access and Savings in the U.S. Report
Click here for an alphabetical listing of generic drug manufacturers.
A generic drug is a pharmaceutical drug that is equivalent to a brand-name product in dosage, strength, route of administration, quality, performance and intended use. The term may also refer to any drug marketed under its chemical name without advertising, or to the chemical makeup of a drug rather than the brand name under which the drug is sold.
Although they may not be associated with a particular company, generic drugs are usually subject to government regulations in the countries where they are dispensed. They are labeled with the name of the manufacturer and a generic nonproprietary name such as the United States Adopted Name or international non-proprietary name of the drug.
A generic drug must contain the same active ingredients as the original brand-name formulation. The U.S. Food and Drug Administration (FDA) requires that generics be identical to, or within an acceptable bioequivalent range of, their brand-name counterparts with respect to pharmacokinetic and pharmacodynamic properties. (The FDA's use of the word "identical" is a legal interpretation, not literal.)
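The "acceptable bioequivalent range" is commonly operationalized by requiring the 90% confidence interval for the geometric mean test/reference ratio of exposure measures (such as AUC and Cmax) to lie within 80–125%. The sketch below illustrates that check under simplifying assumptions (per-subject log-ratios from a crossover study, and a caller-supplied t critical value); the `bioequivalent` helper is illustrative, not a regulatory tool:

```python
import math
import statistics

def bioequivalent(log_ratios, t_crit):
    """Check the 80-125% rule on a 90% CI for the geometric mean
    test/reference ratio, given per-subject log(test/reference) values
    and the appropriate one-sided 95% t critical value."""
    n = len(log_ratios)
    mean = statistics.mean(log_ratios)
    se = statistics.stdev(log_ratios) / math.sqrt(n)
    lo = math.exp(mean - t_crit * se)
    hi = math.exp(mean + t_crit * se)
    return lo, hi, (lo >= 0.80 and hi <= 1.25)
```

Note that two generics can each be bioequivalent to the brand-name product while differing slightly from one another, because each is compared only against the reference drug.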
Biopharmaceuticals such as monoclonal antibodies differ biologically from small molecule drugs. Generic versions of these drugs, known as biosimilars, are typically regulated under an extended set of rules.
In most cases, generic products become available after the patent protections afforded to a drug's original developer expire. Once generic drugs enter the market, competition often leads to substantially lower prices for both the original brand-name product and its generic equivalents. In most countries, patents give 20 years of protection.
However, many countries and regions, such as the European Union and the United States, may grant up to five years of additional protection ("patent term restoration") if manufacturers meet specific goals, such as conducting clinical trials for pediatric patients. Manufacturers, wholesalers, insurers, and drugstores can each increase prices at various stages of production and distribution.
In 2014, according to an analysis by the Generic Pharmaceutical Association, generic drugs accounted for 88% of the 4.3 billion prescriptions filled in the United States.
Click on any of the blue hyperlinks for more about Generic Drugs:
- Nomenclature
- Economics
- Regulation in the United States
- See also:
- Anti-Counterfeiting Trade Agreement#Criminalizing generic medicine (ACTA)
- Bayh–Dole Act
- Biosimilars
- Chemical patents
- Evergreening
- Generic brand
- International Nonproprietary Name
- Inverse benefit law
- Prescription costs
- Research exemption
- SOPA#Protection against counterfeit drugs
- Trans-Pacific Partnership Intellectual Property Provisions
- Transatlantic Trade and Investment Partnership
- United States Adopted Names Program, generic drug naming process, lists of adopted names
- The Medical Letter on Drugs and Therapeutics
- Collection of national and international Guidelines
- GPhA Generic Pharmaceutical Association
Surgery, including a List of Surgical Procedures
YouTube Video: Laparoscopic Cholecystectomy (Gallbladder Removal)
Pictured: (L-R) Surgeons repairing a ruptured Achilles tendon on a man; Stereotactic Surgery
Click Here for a List of Surgical Procedures.
Surgery is a medical specialty that uses operative manual and instrumental techniques on a patient to investigate or treat a pathological condition such as a disease or injury, to help improve bodily function or appearance, or to repair ruptured or damaged areas.
An act of performing surgery may be called a "surgical procedure", "operation", or simply "surgery". In this context, the verb "operate" means to perform surgery. The adjective "surgical" means pertaining to surgery; e.g. surgical instruments or surgical nurse. The patient or subject on which the surgery is performed can be a person or an animal.
A surgeon is a person who practices surgery and a surgeon's assistant is a person who practices surgical assistance. A surgical team is made up of a surgeon, a surgeon's assistant, an anesthesia provider, a circulating nurse, and a surgical technologist. Surgery usually spans minutes to hours, but it is typically not an ongoing or periodic type of treatment. The term "surgery" can also refer to the place where surgery is performed, or simply the office of a physician, dentist, or veterinarian.
Click on the following blue hyperlinks for more about Surgery:
- Definitions
- Description of surgical procedure
- Epidemiology
- Special populations
- In low- and middle-income countries
- History
- Modern surgery
- Surgical specialties
- National societies
- See also:
- Anesthesia
- ASA physical status classification system
- Biomaterial
- Cardiac surgery
- Current Procedural Terminology (CPT; for outpatient surgical procedures medical coding)
- Surgical drain
- Endoscopy
- Fluorescence image-guided surgery
- Hypnosurgery
- ICD-10-PCS (International Classification of Diseases, 10th edition, Procedural Coding System; inpatient surgical procedures medical coding)
- Jet ventilation
- Minimally invasive procedure
- Operative report
- Perioperative mortality
- Remote surgery
- Robotic surgery
- Surgeon's assistant
- Surgical Outcomes Analysis and Research
- Surgical Sieve
- Trauma surgery
- Reconstructive surgery
- Rheumasurgery
- WHO Surgical Safety Checklist
Cardiac Surgery
YouTube Video: Open Heart Surgery Performed at Christian Hospital in St. Louis, Missouri
Pictured: Two cardiac surgeons performing coronary artery bypass surgery. Note the use of a steel retractor to forcefully maintain the exposure of the heart.
Cardiac surgery, or cardiovascular surgery, is surgery on the heart or great vessels performed by cardiac surgeons. It is often used to treat complications of ischemic heart disease (for example, with coronary artery bypass grafting); to correct congenital heart disease; or to treat valvular heart disease from various causes, including endocarditis, rheumatic heart disease, and atherosclerosis. It also includes heart transplantation.
Types of Cardiac Surgery:
Open heart surgery:
In open heart surgery, the patient's heart is opened and surgery is performed on its internal structures.
Dr. Wilfred G. Bigelow of the University of Toronto found that such procedures could be performed better in a bloodless and motionless environment. Therefore, during open heart surgery, the heart is temporarily stopped and the patient is placed on cardiopulmonary bypass, meaning a machine pumps and oxygenates their blood.
Because the machine cannot function the same way as the heart, surgeons try to minimize the time a patient spends on it.
Cardiopulmonary bypass was developed after surgeons realized the limitations of hypothermia in cardiac surgery: Complex intracardiac repairs take time, and the patient needs blood flow to the body (particularly to the brain), as well as heart and lung function.
In 1953, Dr. John Heysham Gibbon of Jefferson Medical School in Philadelphia reported the first successful use of extracorporeal circulation by means of an oxygenator, but he abandoned the method after subsequent failures.
In 1954, Dr. C. Walton Lillehei performed a series of successful operations with the controlled cross-circulation technique, in which the patient's mother or father was used as a "heart-lung machine". Dr. John W. Kirklin at the Mayo Clinic was the first to use a Gibbon-type pump-oxygenator.
Nazih Zuhdi performed the first total intentional hemodilution open heart surgery on Terry Gene Nix, age 7, on 25 February 1960 at Mercy Hospital in Oklahoma City. The operation was a success; however, Nix died three years later. In March 1961, Zuhdi, Carey, and Greer performed open heart surgery on a child, age 3 1⁄2, using the total intentional hemodilution machine.
Modern beating-heart surgery:
In the early 1990s, surgeons began to perform off-pump coronary artery bypass, done without cardiopulmonary bypass. In these operations, the heart continues beating during surgery, but is stabilized to provide an almost still work area in which to connect a conduit vessel that bypasses a blockage; the conduit vessel is often harvested through small incisions using a technique known as endoscopic vessel harvesting (EVH).
Heart transplant:
In 1945, the Soviet pathologist Nikolai Sinitsyn successfully transplanted a heart from one frog to another frog and from one dog to another dog.
Norman Shumway is widely regarded as the father of human heart transplantation, although the world's first adult heart transplant was performed by a South African cardiac surgeon, Christiaan Barnard, using techniques developed by Shumway and Richard Lower.
Barnard performed the first transplant on Louis Washkansky on 3 December 1967 at Groote Schuur Hospital in Cape Town. Adrian Kantrowitz performed the first pediatric heart transplant on 6 December 1967 at Maimonides Hospital (now Maimonides Medical Center) in Brooklyn, New York, barely three days later. Shumway performed the first adult heart transplant in the United States on 6 January 1968 at Stanford University Hospital.
Coronary artery bypass grafting:
Coronary artery bypass grafting (CABG), a form of revascularization, is a common surgical procedure that creates an alternative path for blood to reach the heart muscle, bypassing a narrowed or blocked coronary artery. This can be done in several ways, and the vessels used can be taken from several areas of the body. Graft vessels are typically harvested from the chest, arm, or wrist and then attached to the coronary artery beyond the blockage, restoring blood flow to that area of the heart.
The procedure is typically performed because of coronary artery disease (CAD), in which a plaque-like substance builds up in the coronary artery, the main pathway carrying oxygen-rich blood to the heart. This can cause a blockage and/or a rupture, which can lead to a heart attack.
Minimally invasive surgery:
As an alternative to open heart surgery, which involves a five- to eight-inch incision in the chest wall, a surgeon may perform an endoscopic procedure by making very small incisions through which a camera and specialized tools are inserted.
In robot-assisted heart surgery, a machine controlled by a cardiac surgeon is used to perform a procedure. The main advantage to this is the size of the incision required: three small holes instead of an incision big enough for the surgeon's hands.
Post-surgical procedures:
As with any surgical procedure, cardiac surgery requires postoperative precautions to avoid complications. Incision care is needed to avoid infection and minimize scarring. Swelling and loss of appetite are common.
Recovery from open heart surgery begins with about 48 hours in an intensive care unit, where heart rate, blood pressure, and oxygen levels are closely monitored. Chest tubes are inserted to drain blood around the heart and lungs. After discharge from the hospital, compression socks may be recommended in order to regulate blood flow.
Risks:
The advancement of cardiac surgery and cardiopulmonary bypass techniques has greatly reduced the mortality rates of these procedures. For instance, repairs of congenital heart defects are currently estimated to have 4–6% mortality rates.
A major concern with cardiac surgery is neurological damage. Stroke occurs in 2–3% of all people undergoing cardiac surgery, and the rate is higher in patients with other risk factors for stroke.
A more subtle complication attributed to cardiopulmonary bypass is postperfusion syndrome, sometimes called "pumphead". The neurocognitive symptoms of postperfusion syndrome were initially thought to be permanent, but turned out to be transient, with no permanent neurological impairment.
In order to assess the performance of surgical units and individual surgeons, a widely used risk model called the EuroSCORE has been created. It takes a number of health factors from a patient and, using precalculated logistic regression coefficients, estimates the probability that the patient will survive to discharge.
Within the United Kingdom, the EuroSCORE was used to give a breakdown of all cardiothoracic surgery centers and to indicate whether the units and their individual surgeons performed within an acceptable range. The results are available on the Care Quality Commission website.
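The logistic form of such a risk model converts a weighted sum of patient risk factors into a predicted probability via the logistic function. A sketch of the arithmetic, using an illustrative intercept and weights that are hypothetical placeholders rather than the published EuroSCORE coefficients:

```python
import math

# Hypothetical intercept and weights (NOT the published EuroSCORE
# coefficients): each present risk factor adds to the log-odds of death.
INTERCEPT = -4.79
WEIGHTS = {
    "age_per_5yr_over_60": 0.07,
    "female": 0.33,
    "low_ejection_fraction": 0.42,
}

def predicted_mortality(factors):
    """Probability of in-hospital death from a logistic risk model."""
    logit = INTERCEPT + sum(WEIGHTS[k] * v for k, v in factors.items())
    return 1.0 / (1.0 + math.exp(-logit))

# A patient with no risk factors has a low baseline probability;
# each additional factor raises the log-odds and hence the probability.
```

The predicted probability of surviving to discharge is then 1 minus the predicted mortality, which is what is compared against a unit's observed outcomes.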
Another important source of complications is the neuropsychological and psychopathologic changes following open heart surgery. One example is Skumin syndrome, described by Victor Skumin in 1978, which is a "cardioprosthetic psychopathological syndrome" associated with mechanical heart valve implants and characterized by irrational fear, anxiety, depression, sleep disorder, and weakness.
Risk reduction:
A 2012 Cochrane systematic review found evidence that preoperative physical therapy reduced postoperative pulmonary complications, such as pneumonia and atelectasis, in patients undergoing elective cardiac surgery. In addition, the researchers found that preoperative physical therapy decreased the length of hospital stay by more than three days on average.
A 2013 Cochrane review showed that both pharmacological and non-pharmacological prevention reduced the risk of atrial fibrillation after an operation and reduced the length of hospital stays. No difference in mortality could be shown.
There is evidence that quitting smoking at least four weeks before surgery may reduce the risk of postoperative complications.
Click on any of the following blue hyperlinks for more about Cardiac Surgery:
- History
- See also:
- Cardioplegia
- Eagle score
- Chest tube
- Overview at American Heart Association
- "Congenital Heart Disease Surgical Corrective Procedures", a list of surgical procedures at learningradiology.com
- "Cardiac Surgery in Adult", Specialized doctors at Narayahahealth.org
- What to expect before, during and after heart surgery from Children's Hospital and Regional Medical Center (Seattle)
- MedlinePlus Encyclopedia Minimally invasive heart surgery
Implant Surgery
YouTube Video: Surgeons Implant Heart Stent Into Awake Patient by KETV Omaha
Pictured: L-R: Orthopedic implants to repair fractures to the radius and ulna. Note the visible break in the ulna. (right forearm); A coronary stent — in this case a drug-eluting stent — is another common item implanted in humans.
An implant is a medical device manufactured to replace a missing biological structure, support a damaged biological structure, or enhance an existing biological structure. Medical implants are man-made devices, in contrast to a transplant, which is a transplanted biomedical tissue.
The surfaces of implants that contact the body might be made of a biomedical material such as titanium, silicone, or apatite, depending on what is most functional. In some cases implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.
Applications:
Implants can roughly be categorized into groups by application:
Sensory and Neurological:
Sensory and neurological implants are used for disorders affecting the major senses and the brain, as well as other neurological disorders. They are predominately used in the treatment of conditions such as the following:
Examples include the:
Cardiovascular:
Cardiovascular medical devices are implanted in cases where the heart, its valves, and the rest of the circulatory system is in disorder. They are used to treat conditions such as the following:
Examples include
Orthopaedic:
Orthopaedic implants help alleviate issues with the bones and joints of the body. They're used to treat bone fractures, osteoarthritis, scoliosis, spinal stenosis, and chronic pain. Examples include a wide variety of pins, rods, screws, and plates used to anchor fractured bones while they heal.
Metallic glasses based on magnesium with zinc and calcium addition are tested as the potential metallic biomaterials for biodegradable medical implants.
Contraception:
Contraceptive implants are primarily used to prevent unintended pregnancy and treat conditions such as non-pathological forms of menorrhagia. Examples include copper-and hormone-based intrauterine devices.
Cosmetic:
Cosmetic implants — often prosthetics — attempt to bring some portion of the body back to an acceptable aesthetic norm. They are used as a follow-up to mastectomy due to breast cancer, for correcting some forms of disfigurement, and modifying aspects of the body (as in buttock augmentation and chin augmentation). Examples include the breast implant, nose prosthesis, ocular prosthesis, and injectable filler.
Other organs and systems:
Other types of organ dysfunction can occur in the systems of the body, including the gastrointestinal, respiratory, and urological systems. Implants are used in those and other locations to treat conditions such as the following:
Examples include the following:
Click on any of the following blue hyperlinks for more about Implant Surgery:
The surfaces of implants that contact the body may be made of a biomedical material such as titanium, silicone, or apatite, depending on which is the most functional. Some implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug-delivery devices in the form of implantable pills or drug-eluting stents.
Applications:
Implants can roughly be categorized into groups by application:
Sensory and Neurological:
Sensory and neurological implants are used for disorders affecting the major senses and the brain, as well as other neurological disorders. They are predominantly used in the treatment of conditions such as the following:
- cataract,
- glaucoma,
- keratoconus,
- and other visual impairments;
- otosclerosis and other hearing loss issues, as well as middle ear diseases such as otitis media;
- and neurological diseases such as epilepsy, Parkinson's disease, and treatment-resistant depression.
Examples include the:
- intraocular lens,
- intrastromal corneal ring segment,
- cochlear implant,
- tympanostomy tube,
- and neurostimulator.
Cardiovascular:
Cardiovascular medical devices are implanted in cases where the heart, its valves, or the rest of the circulatory system is in disorder. They are used to treat conditions such as the following:
- heart failure,
- cardiac arrhythmia,
- ventricular tachycardia,
- valvular heart disease,
- angina pectoris,
- and atherosclerosis.
Examples include:
- the artificial heart,
- artificial heart valve,
- implantable cardioverter-defibrillator,
- cardiac pacemaker,
- and coronary stent.
Orthopaedic:
Orthopaedic implants help alleviate issues with the bones and joints of the body. They are used to treat bone fractures, osteoarthritis, scoliosis, spinal stenosis, and chronic pain. Examples include a wide variety of pins, rods, screws, and plates used to anchor fractured bones while they heal.
Metallic glasses based on magnesium with zinc and calcium additions are being tested as potential metallic biomaterials for biodegradable medical implants.
Contraception:
Contraceptive implants are primarily used to prevent unintended pregnancy and to treat conditions such as non-pathological forms of menorrhagia. Examples include copper- and hormone-based intrauterine devices.
Cosmetic:
Cosmetic implants, often prosthetics, attempt to bring some portion of the body back to an acceptable aesthetic norm. They are used as a follow-up to mastectomy due to breast cancer, for correcting some forms of disfigurement, and for modifying aspects of the body (as in buttock augmentation and chin augmentation). Examples include the breast implant, nose prosthesis, ocular prosthesis, and injectable filler.
Other organs and systems:
Other types of organ dysfunction can occur in the systems of the body, including the gastrointestinal, respiratory, and urological systems. Implants are used in those and other locations to treat conditions such as the following:
- gastroesophageal reflux disease,
- gastroparesis,
- respiratory failure,
- sleep apnea,
- urinary and fecal incontinence,
- and erectile dysfunction.
Examples include the following:
- LINX,
- implantable gastric stimulator,
- diaphragmatic/phrenic nerve stimulator,
- neurostimulator,
- surgical mesh,
- and penile prosthesis.
Click on any of the following blue hyperlinks for more about Implant Surgery:
- Classification
- Complications
- Failures
- See also:
Joint Replacement Surgery
YouTube Video: How Does Joint Replacement Surgery Work?
Replacement arthroplasty (from Greek arthron, joint, limb, articulate, + plassein, to form, mould, forge, feign, make an image of), or joint replacement surgery, is a procedure of orthopedic surgery in which an arthritic or dysfunctional joint surface is replaced with an orthopedic prosthesis.
Joint replacement is considered a treatment when severe joint pain or dysfunction is not alleviated by less-invasive therapies. During the latter half of the 20th century, rheumasurgery developed as a sub-specialty focused on these and a few other procedures in patients with rheumatic diseases.
Joint replacement surgery is becoming more common, with knees and hips replaced most often. About 773,000 Americans had a hip or knee replaced in 2009.
Click on any of the following blue hyperlinks for more about Joint Replacement Surgery:
- Background
- Procedural timeline
- Materials
- Risks and complications
- Prosthesis replacement
- See also:
- Specific joint replacements
- Related treatments
- Rheumasurgery
- Arthroplasty
- Orthopedic surgery
- Joint replacement registry
- Patient Information from the American Academy of Orthopedic Surgeons
Dental Surgery
YouTube Video of Famous People with Disabilities Slideshow: Cleft Palate
Pictured: Tom Cruise (L) Before and (R) After Dental Surgery
Dental surgery is any of a number of medical procedures that involve artificially modifying dentition; in other words, surgery of the teeth and jaw bones.
Types of Dental Surgery:
Some of the more common types of dental surgery are,
- Endodontic (surgery involving the pulp or root of the tooth)
- Root canal
- Pulpotomy — the opening of the pulp chamber of the tooth to allow an infection to drain; usually a precursor to a root canal
- Pulpectomy — the removal of the pulp from the pulp chamber to temporarily relieve pain; usually a precursor to a root canal
- Apicoectomy — a root-end resection. Occasionally a root canal alone is not enough to relieve pain, and the end of the tooth, called the apex, is removed by entering through the gingiva and surgically extracting the diseased material.
- Prosthodontics (dental prosthetics)
- Crown (caps) — artificial covering of a tooth made from a variety of biocompatible materials, including CMC/PMC (ceramic/porcelain metal composite), gold or a tin/aluminum mixture. The underlying tooth must be reshaped to accommodate these fixed restorations
- Veneers — artificial coverings similar to above, except that they only cover the forward (labial or buccal) surface of the tooth. Usually for aesthetic purposes only.
- Bridge — a fixed prosthesis in which two or more crowns are connected together to replace a missing tooth or teeth. Typically used after an extraction.
- Implant — a procedure in which a titanium implant is surgically placed in the bone (mandible or maxilla), allowed to heal, and 4–6 months later an artificial tooth is connected to the implant by cement or retained by a screw.
- Dentures (false teeth) — a partial or complete set of dentition which either attach to neighboring teeth by use of metal or plastic clasps or to the gingival or palatal surface by use of adhesive.
- Implant-supported prosthesis — a combination of dentures and implants; bases are placed into the bone, allowed to heal, and metal appliances are fixed to the gingival surface, following which dentures are placed atop and fixed into place.
- Orthodontic treatment
- Implants and implant-supported prosthesis — also an orthodontic treatment as it involves bones
- Apicoectomy — also an orthodontic treatment as part of the underlying bone structure must be removed
- Extraction — a procedure in which a diseased, redundant, or problematic tooth is removed, either by pulling or cutting out. This procedure can be done under local or general anesthesia and is very common — many people have their wisdom teeth removed before they become problematic.
- Fiberotomy — a procedure to sever the fibers around a tooth, preventing it from relapsing
- Periodontics
- Oral and maxillofacial surgery
Professional Dental Care:
See also: Oral hygiene
Regular tooth cleaning by a dental professional is recommended to remove tartar (mineralized plaque) that may develop even with careful brushing and flossing, especially in areas of the mouth that are difficult to clean.
Professional cleaning includes tooth scaling and tooth polishing, as well as debridement if too much tartar has accumulated. This involves the use of various instruments and/or devices to loosen and remove tartar from the teeth.
Most dental hygienists recommend having the teeth professionally cleaned at least every six months.
More frequent cleaning and examination may be necessary during the treatment of many different dental/oral disorders or due to recent surgical procedures such as dental implants. Routine examination of the teeth by a dental professional is recommended at least every year. This may include yearly, select dental X-rays. See also dental plaque identification procedure and removal.
Dental instruments and restorative materials:
Main articles: Dental instruments and Dental restorative materials
Dental anesthesia:
Main article: Dental anesthesia
Dentists inject anesthetic to block sensory transmission by the alveolar nerves. The superior alveolar nerves are not usually anesthetized directly because they are difficult to approach with a needle.
For this reason, the maxillary teeth are usually anesthetized locally by inserting the needle beneath the oral mucosa surrounding the teeth. The inferior alveolar nerve is probably anesthetized more often than any other nerve in the body. To anesthetize this nerve, the dentist inserts the needle somewhat posterior to the patient’s last molar.
See also:
Laparoscopic Surgery
YouTube Video: Laparoscopic Hysterectomy Procedure Animation
Pictured: (L-R) Surgeons perform laparoscopic stomach surgery; Laparoscopic instruments (Courtesy of Wikipedia)
Laparoscopic surgery, also called minimally invasive surgery (MIS), bandaid surgery, or keyhole surgery, is a modern surgical technique in which operations are performed through small incisions (usually 0.5–1.5 cm) made at a distance from the site being treated.
There are a number of advantages to the patient with laparoscopic surgery versus the more common, open procedure. Pain and hemorrhaging are reduced due to smaller incisions and recovery times are shorter.
The key element in laparoscopic surgery is the use of a laparoscope, a long fiber optic cable system which allows viewing of the affected area by snaking the cable from a more distant, but more easily accessible location.
There are two types of laparoscope: (1) a telescopic rod lens system, that is usually connected to a video camera (single chip or three chip), or (2) a digital laparoscope where the charge-coupled device is placed at the end of the laparoscope.
The laparoscope is inserted through a 5 mm or 10 mm cannula or trocar. Also attached is a fiber optic cable system connected to a "cold" light source (halogen or xenon) to illuminate the operative field.
The abdomen is usually insufflated with carbon dioxide gas. This elevates the abdominal wall above the internal organs to create a working and viewing space. CO2 is used because it is common to the human body and can be absorbed by tissue and removed by the respiratory system. It is also non-flammable, which is important because electrosurgical devices are commonly used in laparoscopic procedures.
Laparoscopic surgery includes operations within the abdominal or pelvic cavities, whereas keyhole surgery performed on the thoracic or chest cavity is called thoracoscopic surgery. Specific surgical instruments used in a laparoscopic surgery include: forceps, scissors, probes, dissectors, hooks, retractors and more. Laparoscopic and thoracoscopic surgery belong to the broader field of endoscopy.
Procedures:
Laparoscopic cholecystectomy is the most common laparoscopic procedure performed. In this procedure, 5–10 mm diameter instruments (graspers, scissors, clip applier) can be introduced by the surgeon into the abdomen through trocars (hollow tubes with a seal to keep the CO2 from leaking).
Over one million cholecystectomies are performed in the U.S. annually, with over 96% of those being performed laparoscopically.
There are two different formats for laparoscopic surgery. Multiple incisions are required for technology such as the da Vinci Surgical System, which uses a console located away from the patient, with the surgeon controlling a camera, vacuum pump, saline cleansing solution, cutting tools, etc. each located within its own incision site, but oriented toward the surgical objective.
In contrast, requiring only a single small incision, the "Bonati system" (invented by Dr. Alfred Bonati), uses a single 5-function control, so that a saline solution and the vacuum pump operate together when the laser cutter is activated. A camera and light provide feedback to the surgeon, who sees the enlarged surgical elements on a TV monitor. The Bonati system was designed for spinal surgery and has been promoted only for that purpose.
Rather than a minimum 20 cm incision as in traditional (open) cholecystectomy, four incisions of 0.5–1.0 cm, or more recently a single incision of 1.5–2.0 cm, will be sufficient to perform a laparoscopic removal of a gallbladder. Since the gallbladder is similar to a small balloon that stores and releases bile, it can usually be removed from the abdomen by suctioning out the bile and then removing the deflated gallbladder through the 1 cm incision at the patient's navel. The length of postoperative stay in the hospital is minimal, and same-day discharges are possible in cases of early morning procedures.
In certain advanced laparoscopic procedures, where the size of the specimen being removed would be too large to pull out through a trocar site (as would be done with a gallbladder), an incision larger than 10 mm must be made.
The most common of these procedures are removal of all or part of the colon (colectomy), or removal of the kidney (nephrectomy). Some surgeons perform these procedures completely laparoscopically, making the larger incision toward the end of the procedure for specimen removal, or, in the case of a colectomy, to also prepare the remaining healthy bowel to be reconnected (create an anastomosis).
Many other surgeons feel that since they will have to make a larger incision for specimen removal anyway, they might as well use this incision to have their hand in the operative field during the procedure to aid as a retractor, dissector, and to be able to feel differing tissue densities (palpate), as they would in open surgery. This technique is called hand-assist laparoscopy.
Since they will still be working with scopes and other laparoscopic instruments, CO2 will have to be maintained in the patient's abdomen, so a device known as a hand access port (a sleeve with a seal that allows passage of the hand) must be used.
Surgeons who choose this hand-assist technique feel it reduces operative time significantly versus the straight laparoscopic approach. It also gives them more options in dealing with unexpected adverse events (e.g. uncontrolled bleeding) that may otherwise require creating a much larger incision and converting to a fully open surgical procedure.
Conceptually, the laparoscopic approach is intended to minimize post-operative pain and speed up recovery times, while maintaining an enhanced visual field for surgeons.
Due to improved patient outcomes, in the last two decades, laparoscopic surgery has been adopted by various surgical sub-specialties including gastrointestinal surgery (including bariatric procedures for morbid obesity), gynecologic surgery and urology.
Based on numerous prospective randomized controlled trials, the approach has proven to be beneficial in reducing post-operative morbidities such as wound infections and incisional hernias (especially in morbidly obese patients), and is now deemed safe when applied to surgery for cancers such as cancer of colon.
The restricted vision, the difficulty in handling of the instruments (new hand-eye coordination skills are needed), the lack of tactile perception and the limited working area are factors which add to the technical complexity of this surgical approach. For these reasons, minimally invasive surgery has emerged as a highly competitive new sub-specialty within various fields of surgery.
Surgical residents who wish to focus on this area of surgery gain additional laparoscopic surgery training during one or two years of fellowship after completing their basic surgical residency. In OB-GYN residency programs, the average laparoscopy-to-laparotomy quotient (LPQ) is 0.55.
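The laparoscopy-to-laparotomy quotient cited above is a simple ratio. A minimal sketch of the calculation, assuming the LPQ is defined as the number of laparoscopic procedures divided by the number of open (laparotomy) procedures, with made-up case counts for illustration:

```python
def lpq(laparoscopies: int, laparotomies: int) -> float:
    """Laparoscopy-to-laparotomy quotient: laparoscopic cases
    divided by open (laparotomy) cases over the same period."""
    if laparotomies == 0:
        raise ValueError("laparotomy count must be non-zero")
    return laparoscopies / laparotomies

# Hypothetical program logbook: 55 laparoscopic vs. 100 open procedures
# gives an LPQ of 0.55, matching the average quoted above.
print(lpq(55, 100))  # 0.55
```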
The first transatlantic surgery (Lindbergh operation) ever performed was a laparoscopic gallbladder removal.
Laparoscopic techniques have also been developed in the field of veterinary medicine. Due to the relatively high cost of the equipment required, however, it has not become commonplace in most traditional practices and remains limited to specialty-type practices. Many of the same surgeries performed in humans can be applied to animal cases: everything from an egg-bound tortoise to a German Shepherd can benefit from MIS.
A paper published in JAVMA (Journal of the American Veterinary Medical Association) in 2005 showed that dogs spayed laparoscopically experienced significantly less pain (65% less) than those spayed with traditional "open" methods.
Arthroscopy, thoracoscopy, and cystoscopy are all performed in veterinary medicine today. The University of Georgia School of Veterinary Medicine and Colorado State University's School of Veterinary Medicine are two of the main centers where veterinary laparoscopy got started, and both have excellent training programs for veterinarians interested in getting started in MIS.
Robotic laparoscopic surgery:
Main article: Robotic surgery
The process of minimally invasive surgery has been augmented by specialized tools for decades. For example, TransEnterix of Durham, North Carolina received U.S. Food and Drug Administration approval in October 2009 for its SPIDER Surgical System using flexible instruments and one incision in the navel area instead of several, allowing quicker healing for patients. Dr. Richard Stac of Duke University developed the process.
In recent years, electronic tools have been developed to aid surgeons. Some of the features include:
There has been a distinct lack of disclosure regarding nano-scale developments in keyhole surgery and remote medicine, a "disparity of disclosure" which does not correlate with the rapid advancements in both the medical and nanotechnology fields over the last two decades.
Robotic surgery has been touted as a solution to underdeveloped nations, whereby a single central hospital can operate several remote machines at distant locations. The potential for robotic surgery has had strong military interest as well, with the intention of providing mobile medical care while keeping trained doctors safe from battle.
Non-robotic hand guided assistance systems:
There are also user-friendly non-robotic assistance systems: single-hand-guided devices with a high potential to save time and money. These assistance devices are not bound by the restrictions of common medical robotic systems; they enhance the manual capabilities of the surgeon and his or her team by replacing the static holding force needed during an intervention.
Some of the features are:
Advantages:
There are a number of advantages to the patient with laparoscopic surgery versus an open procedure. These include:
- Reduced hemorrhaging, which reduces the chance of needing a blood transfusion.
- Smaller incision, which reduces pain and shortens recovery time, as well as resulting in less post-operative scarring.
- Less pain, leading to less pain medication needed.
- Although procedure times are usually slightly longer, hospital stay is less, and often with a same day discharge which leads to a faster return to everyday living.
- Reduced exposure of internal organs to possible external contaminants thereby reduced risk of acquiring infections.
- There are more indications for laparoscopic surgery in gastrointestinal emergencies as the field develops.
Although laparoscopy is widely accepted in the adult age group, its advantages in the pediatric age group are questioned. The benefits of laparoscopy appear to recede with younger age. Its efficacy is inferior to open surgery in certain conditions, such as pyloromyotomy for infantile hypertrophic pyloric stenosis. Although laparoscopic appendectomy causes fewer wound problems than open surgery, it is associated with more intra-abdominal abscesses.
Disadvantages:
While laparoscopic surgery is clearly advantageous in terms of patient outcomes, the procedure is more difficult from the surgeon's perspective when compared to traditional, open surgery:
- The surgeon has a limited range of motion at the surgical site, resulting in a loss of dexterity.
- Poor depth perception.
- Surgeons must use tools to interact with tissue rather than manipulate it directly with their hands. This results in an inability to accurately judge how much force is being applied to tissue as well as a risk of damaging tissue by applying more force than necessary. This limitation also reduces tactile sensation, making it more difficult for the surgeon to feel tissue (sometimes an important diagnostic tool, such as when palpating for tumors) and making delicate operations such as tying sutures more difficult.
- The tool endpoints move in the opposite direction to the surgeon's hands due to the pivot point, making laparoscopic surgery a non-intuitive motor skill that is difficult to learn. This is called the Fulcrum effect.
- Some surgeries (carpal tunnel for instance) generally turn out better for the patient when the area can be opened up, allowing the surgeon to see "the whole picture" surrounding physiology, to better address the issue at hand. In this regard, keyhole surgery can be a disadvantage.
Risks:
Some of the risks are briefly described below:
- The most significant risks are from trocar injuries during insertion into the abdominal cavity, as the trocar is typically inserted blindly. Injuries include abdominal wall hematoma, umbilical hernias, umbilical wound infection, and penetration of blood vessels or small or large bowel.
The risk of such injuries is increased in patients who have a low body mass index or have a history of prior abdominal surgery. While these injuries are rare, significant complications can occur, and they are primarily related to the umbilical insertion site. Vascular injuries can result in hemorrhage that may be life-threatening. Injuries to the bowel can cause a delayed peritonitis. It is very important that these injuries be recognized as early as possible.
- Some patients have sustained electrical burns unseen by surgeons who are working with electrodes that leak current into surrounding tissue. The resulting injuries can result in perforated organs and can also lead to peritonitis. This risk is eliminated by utilizing active electrode monitoring.
- There may be an increased risk of hypothermia and peritoneal trauma due to increased exposure to cold, dry gases during insufflation. The use of surgical humidification therapy, i.e. heated and humidified CO2 for insufflation, has been shown to reduce this risk.
- Many patients with existing pulmonary disorders may not tolerate pneumoperitoneum (gas in the abdominal cavity), resulting in a need for conversion to open surgery after the initial attempt at laparoscopic approach.
- Not all of the CO2 introduced into the abdominal cavity is removed through the incisions during surgery. Gas tends to rise, and when a pocket of CO2 rises in the abdomen, it pushes against the diaphragm (the muscle that separates the abdominal from the thoracic cavities and facilitates breathing), and can exert pressure on the phrenic nerve. This produces a sensation of pain that may extend to the patient's shoulders. For an appendectomy, the right shoulder can be particularly painful. In some cases this can also cause considerable pain when breathing. In all cases, however, the pain is transient, as the body tissues will absorb the CO2 and eliminate it through respiration.
- Coagulation disorders and dense adhesions (scar tissue) from previous abdominal surgery may pose added risk for laparoscopic surgery and are considered relative contra-indications for this approach.
- Intra-abdominal adhesion formation is a risk associated with both laparoscopic and open surgery and remains a significant, unresolved problem. Adhesions are fibrous deposits that connect tissue to organ post surgery. Generally, they occur in 50-100% of all abdominal surgeries, with the risk of developing adhesions being the same for both procedures. Complications of adhesions include chronic pelvic pain, bowel obstruction, and female infertility. In particular, small bowel obstruction poses the most significant problem. The use of surgical humidification therapy during laparoscopic surgery may minimize the incidence of adhesion formation. Other techniques to reduce adhesion formation include the use of physical barriers such as films or gels, or broad-coverage fluid agents to separate tissues during healing following surgery.
Robotic laparoscopic surgery:
Main article: Robotic surgery
The process of minimally invasive surgery has been augmented by specialized tools for decades. For example, TransEnterix of Durham, North Carolina received U.S. Food and Drug Administration approval in October 2009 for its SPIDER Surgical System using flexible instruments and one incision in the navel area instead of several, allowing quicker healing for patients. Dr. Richard Stac of Duke University developed the process.
In recent years, electronic tools have been developed to aid surgeons. Some of the features include:
- Visual magnification — use of a large viewing screen improves visibility
- Stabilization — Electromechanical damping of vibrations, due to machinery or shaky human hands
- Simulators — use of specialized virtual reality training tools to improve physicians' proficiency in surgery
- Reduced number of incisions
There has been a distinct lack of disclosure regarding nano-scale developments in keyhole surgery and remote medicine, a "disparity of disclosure" which does not correlate with the rapid advancements in both the medical and nanotechnology fields over the last two decades.
Robotic surgery has been touted as a solution for underdeveloped nations, where a single central hospital could operate several remote machines at distant locations. The potential for robotic surgery has attracted strong military interest as well, with the intention of providing mobile medical care while keeping trained doctors safe from battle.
Non-robotic hand guided assistance systems:
There are also user-friendly, non-robotic assistance systems: single-hand-guided devices with a high potential to save time and money. These devices are not bound by the restrictions of common medical robotic systems. They enhance the manual capabilities of the surgeon and the surgical team by taking over the static holding forces required during the intervention.
Some of the features are:
- Stabilization of the camera picture, because the whole static workload is borne by the assistance system.
- Fast repositioning, with fixation at the desired position in less than 0.02 seconds in some systems. Some systems are lightweight constructions (18 kg) and can withstand a force of 20 N in any position and direction.
- A physically relaxed intervention team that can concentrate on the main goals of the intervention.
- Enhanced possibilities for mobile medical care with these lightweight assistance systems, which meet the demands of true solo-surgery assistance and are robust, versatile, and easy to use.
Click on any of the following blue hyperlinks for more about Laparoscopic Surgery:
- History
- See also:
- Arthroscopic surgery
- Percutaneous (surgery)
- Invasiveness of surgical procedures
- Natural orifice translumenal endoscopic surgery (NOTES)
- Revision weight loss surgery
- Single port laparoscopy
- Feder, Barnaby J., "Surgical Device Poses a Rare but Serious Peril" The New York Times, March 17, 2006
- Laparoscopy web information
- World Association of Laparoscopic Surgeons
- World Journal of Laparoscopic Surgery
Eye Care Professional
YouTube Video about Eye Health: What Are Cataracts?
Pictured Below: Illustration of the eye and its parts: 1. vitreous body; 2. ora serrata; 3. ciliary muscle; 4. ciliary zonules; 5. Schlemm's canal; 6. pupil; 7. anterior chamber; 8. cornea; 9. iris; 10. lens cortex; 11. lens nucleus; 12. ciliary process; 13. conjunctiva; 14. inferior oblique muscle; 15. inferior rectus muscle; 16. medial rectus muscle; 17. retinal arteries and veins; 18. optic disc; 19. dura mater; 20. central retinal artery 21. central retinal vein; 22. optic nerve; 23. vorticose vein; 24. bulbar sheath; 25. macula; 26. fovea; 27. sclera; 28. choroid 29. superior rectus muscle
An eye care professional (ECP) is an individual who provides a service related to the eyes or vision. It is any healthcare worker involved in eye care, from one with a small amount of post-secondary training to practitioners with a doctoral level of education.
Optometrist:
Main article: Optometry
The World Council of Optometry, a member of the World Health Organisation, defines optometrists as “…the primary healthcare practitioners of the eye and visual system who provide comprehensive eye and vision care, which includes refraction and dispensing, detection/diagnosis and management of disease in the eye, and the rehabilitation of conditions of the visual system.”
A Doctor of Optometry (OD) attends four years of college, four years of optometry school, and then an optional one-year residency. Optometrists undergo extensive and intensive refractive and medical training pertaining mainly to the eye, and admission to optometry school is highly competitive. An OD is fully qualified to treat eye diseases and disorders and specializes in optics and vision correction. Permissions granted by an optometric license vary by location.
- In the United States and Canada, the standard education is four years of college and four years of optometry school at an accredited Doctor of Optometry (OD) program. An additional one to two years of residency, fellowship and/or specialty training is required to qualify for certain positions. All optometry colleges in the U.S. currently provide training in the diagnosis and treatment of eye diseases and level 1 in office surgical procedures.
- In the United States, optometrists are defined as physicians under Medicare, but laws pertaining to optometry vary by state.
- All states allow treatment of eye diseases, including the use of topical pharmaceuticals (by properly licensed optometrists)
- 48/50 states allow prescription of oral medications to treat eye diseases
- Many states allow optometrists to perform injections in and around the eye
- Oklahoma, Kentucky, and Louisiana allow optometrists to perform certain laser surgeries.
- Outside of the United States, Canada, the United Kingdom, Australia, and the Philippines, optometrists are often limited in their use of pharmaceuticals. In most of these countries, optometry is a four- or five-year college degree, and optometrists are not classified as doctors.
Ophthalmologist:
Main article: Ophthalmology
Ophthalmologists are “…medical and osteopathic doctors who provide comprehensive eye care, including medical, surgical and optical care.” In the US, this requires four years of college, four years of medical school, one year general internship, three years of residency, then optional fellowship for 1 to 2 years (typically 12–14 years of education after high school).
An ophthalmologist can perform all the tests an optometrist can and, in addition, is a fully qualified medical doctor and surgeon. Ophthalmologists undergo extensive and intensive medical and surgical examinations to qualify, and entry into a training program is highly competitive. Some ophthalmologists receive additional advanced training (a fellowship) in specific areas of ophthalmology, such as retina, cornea, glaucoma, laser vision correction, pediatric ophthalmology, uveitis, pathology, or neuro-ophthalmology.
Click on any of the following blue hyperlinks for more about Eye Care Professionals:
- Other Types
- Distinction between ophthalmologists, optometrists and orthoptists
- International organizations
- European Council of Optometry and Optics
- International Agency for the Prevention of Blindness
- International Council of Ophthalmology
- International Orthoptic Association
- World Council of Optometry
- World Optometry Foundation
- See also:
- American Optometric Association
- American Academy of Optometry
- American Association for Pediatric Ophthalmology and Strabismus
- American Academy of Ophthalmology
- College of Optometrists
- College of Optometrists in Vision Development
- Joint Commission on Allied Health Personnel in Ophthalmology
- Optometric Extension Program
- The Institute of Optometry
- Worshipful Company of Spectacle Makers
Eye surgery, also known as ocular surgery, is surgery performed on the eye or its adnexa, typically by an ophthalmologist.
The eye is a fragile organ, and requires extreme care before, during, and after a surgical procedure. An expert eye surgeon is responsible for selecting the appropriate surgical procedure for the patient, and for taking the necessary safety precautions. Mentions of eye surgery can be found in several ancient texts. Today it continues to be a widely practiced type of surgery, having developed various techniques for treating eye problems.
Preparation and precautions:
Main article: Anaesthesia for ocular surgery
Since the eye is heavily supplied by nerves, anesthesia is essential. Local anesthesia is most commonly used. Topical anesthesia using lidocaine gel is often used for quick procedures. Since topical anesthesia requires cooperation from the patient, general anesthesia is often used for children, traumatic eye injuries, major orbitotomies, and apprehensive patients.
The physician administering anesthesia, or a nurse anesthetist or anesthetist assistant with expertise in anesthesia of the eye, monitors the patient's cardiovascular status. Sterile precautions are taken to prepare the area for surgery and lower the risk of infection. These precautions include the use of antiseptics, such as povidone-iodine, and sterile drapes, gowns and gloves.
Laser eye surgery:
Although the terms laser eye surgery and refractive surgery are commonly used as if they were interchangeable, this is not the case. Lasers may be used to treat nonrefractive conditions (e.g. to seal a retinal tear).
Laser eye surgery or laser corneal surgery is a medical procedure that uses a laser to reshape the surface of the eye. This is done to correct myopia (short-sightedness), hypermetropia (long-sightedness), and astigmatism (uneven curvature of the eye's surface). Refractive surgery is not suitable for everyone, and some people find that eyewear is still needed after surgery.
Recent developments also include procedures that can change eye color from brown to blue.
Cataract surgery:
Main article: Cataract surgery
A cataract is an opacification or cloudiness of the eye's crystalline lens due to aging, disease, or trauma that typically prevents light from forming a clear image on the retina. If visual loss is significant, surgical removal of the lens may be warranted, with lost optical power usually replaced with a plastic intraocular lens (IOL). Owing to the high prevalence of cataracts, cataract extraction is the most common eye surgery. Rest after surgery is recommended.
Glaucoma surgery:
Main article: Glaucoma surgery
Glaucoma is a group of diseases affecting the optic nerve that results in vision loss and is frequently characterized by raised intraocular pressure (IOP).
There are many types of glaucoma surgery, and variations or combinations of those types, that facilitate the escape of excess aqueous humor from the eye to lower intraocular pressure, and a few that lower IOP by decreasing the production of aqueous humor.
Canaloplasty:
Canaloplasty is an advanced, nonpenetrating procedure designed to enhance drainage through the eye’s natural drainage system to provide sustained reduction of IOP. Canaloplasty utilizes microcatheter technology in a simple and minimally invasive procedure. To perform a canaloplasty, an ophthalmologist creates a tiny incision to gain access to a canal in the eye.
A microcatheter circumnavigates the canal around the iris, enlarging the main drainage channel and its smaller collector channels through the injection of a sterile, gel-like material called viscoelastic. The catheter is then removed and a suture is placed within the canal and tightened. By opening up the canal, the pressure inside the eye can be reduced.
Refractive surgery:
Main article: Refractive surgery
Refractive surgery aims to correct errors of refraction in the eye, reducing or eliminating the need for corrective lenses.
- Keratomileusis is a method of reshaping the cornea surface to change its optical power. A disc of cornea is shaved off, quickly frozen, lathe-ground, then returned to its original position.
- Automated lamellar keratoplasty (ALK)
- Laser assisted in-situ keratomileusis (LASIK)
- Laser assisted sub-epithelial keratomileusis (LASEK), a.k.a. Epi-LASIK
- Photorefractive keratectomy (PRK)
- Laser thermal keratoplasty (LTK)
- Conductive keratoplasty (CK) uses radio frequency waves to shrink corneal collagen. It is used to treat mild to moderate hyperopia.
- Limbal relaxing incisions (LRI) to correct minor astigmatism
- Astigmatic keratotomy (AK), a.k.a. Arcuate keratotomy or Transverse keratotomy
- Radial keratotomy (RK)
- Hexagonal keratotomy (HK)
- Epikeratophakia is the removal of the corneal epithelium and replacement with a lathe cut corneal button.
- Intracorneal rings (ICRs), or corneal ring segments
- Implantable contact lenses
- Presbyopia reversal
- Anterior ciliary sclerotomy (ACS)
- Scleral reinforcement surgery for the mitigation of degenerative myopia
Corneal surgery:
Corneal surgery includes most refractive surgery as well as the following:
- Corneal transplant surgery, is used to remove a cloudy/diseased cornea and replace it with a clear donor cornea.
- Penetrating keratoplasty (PK)
- Keratoprosthesis (KPro)
- Phototherapeutic keratectomy (PTK)
- Pterygium excision
- Corneal tattooing
- Osteo-Odonto-Keratoprosthesis (OOKP), in which support for an artificial cornea is created from a tooth and its surrounding jawbone. This is a still-experimental procedure used for patients with severely damaged eyes, generally from burns.
- Eye color change surgery through an iris implant, known as Brightocular, or stripping away the top layer of eye pigment, known as the Stroma procedure.
Vitreo-retinal surgery:
Vitreo-retinal surgery includes the following:
- Vitrectomy
- Anterior vitrectomy is the removal of the front portion of vitreous tissue. It is used for preventing or treating vitreous loss during cataract or corneal surgery, or to remove misplaced vitreous in conditions such as aphakia pupillary block glaucoma.
- Pars plana vitrectomy (PPV), or trans pars plana vitrectomy (TPPV), is a procedure to remove vitreous opacities and membranes through a pars plana incision. It is frequently combined with other intraocular procedures for the treatment of giant retinal tears, tractional retinal detachments, and posterior vitreous detachments.
- Pan retinal photocoagulation (PRP) is a type of photocoagulation therapy used in the treatment of diabetic retinopathy.
- Retinal detachment repair
- Ignipuncture is an obsolete procedure that involves cauterization of the retina with a very hot pointed instrument.
- A scleral buckle is used in the repair of a retinal detachment to indent or "buckle" the sclera inward, usually by sewing a piece of preserved sclera or silicone rubber to its surface.
- Laser photocoagulation, or photocoagulation therapy, is the use of a laser to seal a retinal tear.
- Pneumatic retinopexy
- Retinal cryopexy, or retinal cryotherapy, is a procedure that uses intense cold to induce a chorioretinal scar and to destroy retinal or choroidal tissue.
- Macular hole repair
- Partial lamellar sclerouvectomy
- Partial lamellar sclerocyclochoroidectomy
- Partial lamellar sclerochoroidectomy
- Posterior sclerotomy is an opening made into the vitreous through the sclera, as for detached retina or the removal of a foreign body.
- Radial optic neurotomy
- Macular translocation surgery
- through 360-degree retinotomy
- through scleral imbrication technique
Eye muscle surgery:
Main article: Strabismus surgery
With approximately 1.2 million procedures each year, extraocular muscle surgery is the third most common eye surgery in the United States.
Eye muscle surgery typically corrects strabismus and includes the following:
- Loosening / weakening procedures
- Recession involves moving the insertion of a muscle posteriorly towards its origin.
- Myectomy
- Myotomy
- Tenectomy
- Tenotomy
- Tightening / strengthening procedures
- Resection
- Tucking
- Advancement is the movement of an eye muscle from its original place of attachment on the eyeball to a more forward position.
- Transposition / repositioning procedures
- Adjustable suture surgery is a method of reattaching an extraocular muscle by means of a stitch that can be shortened or lengthened within the first post-operative day, to obtain better ocular alignment
Click on any of the following blue hyperlinks for more about Eye Surgery:
- Oculoplastic surgery
- Eyelid surgery
- Orbital surgery
- Other oculoplastic surgery
- Surgery involving the lacrimal apparatus
- Eye removal
- Other surgery
LASIK Eye Surgery
YouTube Video of LASIK Eye Surgery Procedure
Pictured Below: LASIK eye surgery: What you can expect (by the Mayo Clinic)
LASIK or Lasik (laser-assisted in situ keratomileusis), commonly referred to as laser eye surgery or laser vision correction, is a type of refractive surgery for the correction of myopia, hyperopia, and astigmatism.
The LASIK surgery is performed by an ophthalmologist who uses a laser or microkeratome to reshape the eye's cornea in order to improve visual acuity. For most people, LASIK provides a long-lasting alternative to eyeglasses or contact lenses.
LASIK is most similar to another surgical corrective procedure, photorefractive keratectomy (PRK), and both represent advances over radial keratotomy in the surgical treatment of refractive errors of vision.
For patients with moderate to high myopia or thin corneas that cannot be treated with LASIK or PRK, the phakic intraocular lens is an alternative. As of 2011, over 11 million LASIK procedures had been performed in the United States, and as of 2009, over 28 million had been performed worldwide.
Click on the following blue hyperlinks for more about LASIK Eye Surgery:
- Effectiveness
- Risks
- Process
- Wavefront-guided
- Topography-assisted
- History
- Further research
- Comparison to photorefractive keratectomy
Macular Degeneration and its Treatment
YouTube Video Animation: Detecting age-related macular degeneration through a dilated eye exam by the National Eye Institute NIH
Pictured (L) Normal vision; (R) The same view with age-related macular degeneration (National Eye Institute)
Macular degeneration, also known as age-related macular degeneration (AMD or ARMD), is a medical condition which may result in blurred or no vision in the center of the visual field. Early on there are often no symptoms. Over time, however, some people experience a gradual worsening of vision that may affect one or both eyes.
While it does not result in complete blindness, loss of central vision can make it hard to recognize faces, drive, read, or perform other activities of daily life. Visual hallucinations may also occur and these do not represent a mental illness.
Macular degeneration typically occurs in older people. Genetic factors and smoking also play a role. It is due to damage to the macula of the retina. Diagnosis is by a complete eye exam. The severity is divided into early, intermediate, and late types. The late type is additionally divided into "dry" and "wet" forms with the dry form making up 90% of cases.
Prevention includes exercising, eating well, and not smoking. Antioxidant vitamins and minerals do not appear to be useful for prevention. There is no cure or treatment that returns vision already lost. In the wet form, anti-VEGF medication injected into the eye or less commonly laser coagulation or photodynamic therapy may slow worsening. Supplements in those who already have the disease may slow progression.
In 2015 it affected 6.2 million people globally. In 2013 it was the fourth most common cause of blindness after cataracts, preterm birth, and glaucoma. It most commonly occurs in people over the age of fifty and in the United States is the most common cause of vision loss in this age group. About 0.4% of people between 50 and 60 have the disease, while it occurs in 0.7% of people 60 to 70, 2.3% of those 70 to 80, and nearly 12% of people over 80 years old.
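The age-stratified prevalence figures above can be summarized as a simple lookup table. The sketch below is purely illustrative, using only the percentages quoted in the text; the bracket boundaries and the function name are assumptions for illustration, not part of any clinical source.

```python
# Illustrative only: AMD prevalence by age bracket, from the figures above
# (0.4% for ages 50-60, 0.7% for 60-70, 2.3% for 70-80, nearly 12% over 80).
# Bracket boundaries and names are assumptions, not a clinical tool.

AMD_PREVALENCE_PCT = [
    (50, 60, 0.4),
    (60, 70, 0.7),
    (70, 80, 2.3),
    (80, 150, 12.0),  # "nearly 12%" of people over 80
]

def prevalence_for_age(age):
    """Return the approximate AMD prevalence (%) for an age, or None if under 50."""
    for low, high, pct in AMD_PREVALENCE_PCT:
        if low <= age < high:
            return pct
    return None
```

Note how steeply the figures climb: the over-80 bracket is roughly thirty times the 50-60 bracket, consistent with the disease being strongly age-related.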
Signs and symptoms of macular degeneration include:
Visual symptoms:
Macular degeneration by itself will not lead to total blindness. For that matter, only a very small number of people with visual impairment are totally blind. In almost all cases, some vision remains, mainly peripheral. Other complicating conditions may possibly lead to such an acute condition (severe stroke or trauma, untreated glaucoma, etc.), but few macular degeneration patients experience total visual loss.
The area of the macula comprises only about 2.1% of the retina, and the remaining 97.9% (the peripheral field) remains unaffected by the disease. Even though the macula provides such a small fraction of the visual field, almost half of the visual cortex is devoted to processing macular information.
The loss of central vision profoundly affects visual functioning. It is quite difficult, for example, to read without central vision. Pictures that attempt to depict the central visual loss of macular degeneration with a black spot do not really do justice to the devastating nature of the visual loss. This can be demonstrated by printing letters six inches high on a piece of paper and attempting to identify them while looking straight ahead and holding the paper slightly to the side. Most people find this difficult to do.
Risk Factors:
Environment and lifestyle:
Genetics:
Recurrence ratios for siblings of an affected individual are three- to sixfold higher than in the general population. Genetic linkage analysis has identified 5 sets of gene variants at three locations on different chromosomes (1, 6 and 10) as explaining at least 50% of the risk.
These genes have roles regulating immune response, inflammatory processes and homeostasis of the retina. Variants of these genes give rise to different kinds of dysfunction in these processes. Over time, this results in accumulation of intracellular and extracellular metabolic debris. This can cause scarring of the retina or breakdown of its vascularization.
Genetic tests are available for some of these gene variations. However, pathogenesis of macular degeneration is a complex interaction between genetics, environment and lifestyle, and presence of unfavorable genetic factors doesn't necessarily predict progression to disease.
The three loci where identified gene variants are found are designated:
Specific genes:
Mitochondrial related gene polymorphisms such as that in the MT-ND2 molecule, predicts wet AMD.
Click on any of the following blue hyperlinks for more about Macular Degeneration:
While it does not result in complete blindness, loss of central vision can make it hard to recognize faces, drive, read, or perform other activities of daily life. Visual hallucinations may also occur and these do not represent a mental illness.
Macular degeneration typically occurs in older people. Genetic factors and smoking also play a role. It is due to damage to the macula of the retina. Diagnosis is by a complete eye exam. The severity is divided into early, intermediate, and late types. The late type is additionally divided into "dry" and "wet" forms with the dry form making up 90% of cases.
Prevention includes exercising, eating well, and not smoking. Antioxidant vitamins and minerals do not appear to be useful for prevention. There is no cure or treatment that returns vision already lost. In the wet form, anti-VEGF medication injected into the eye or less commonly laser coagulation or photodynamic therapy may slow worsening. Supplements in those who already have the disease may slow progression.
In 2015 it affected 6.2 million people globally. In 2013 it was the fourth most common cause of blindness after cataracts, preterm birth, and glaucoma. It most commonly occurs in people over the age of fifty and in the United States is the most common cause of vision loss in this age group. About 0.4% of people between 50 and 60 have the disease, while it occurs in 0.7% of people 60 to 70, 2.3% of those 70 to 80, and nearly 12% of people over 80 years old.
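The age-band prevalence figures quoted above can be expressed as a simple lookup. This is purely an illustrative sketch; the function name, table, and banding are our own, taken from the percentages in the paragraph rather than from any clinical source:

```python
# Approximate AMD prevalence by age band, as fractions, from the figures above:
# 0.4% (50-60), 0.7% (60-70), 2.3% (70-80), ~12% (over 80).
PREVALENCE = [(50, 0.004), (60, 0.007), (70, 0.023), (80, 0.12)]

def amd_prevalence(age: int) -> float:
    """Return the quoted prevalence for the band containing `age` (0.0 below 50)."""
    rate = 0.0
    for lower_bound, p in PREVALENCE:
        if age >= lower_bound:
            rate = p
    return rate

print(amd_prevalence(75))  # 0.023
```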
Signs and symptoms of macular degeneration include:
Visual symptoms:
- Distorted vision in the form of metamorphopsia, in which a grid of straight lines appears wavy and parts of the grid may appear blank: Patients often first notice this when looking at things like miniblinds in their home or telephone poles while driving. There may also be central scotomas, shadows or missing areas of vision
- Slow recovery of visual function after exposure to bright light (photostress test)
- Visual acuity drastically decreasing (two levels or more), e.g.: 20/20 to 20/80
- Blurred vision: Those with nonexudative macular degeneration may be asymptomatic or notice a gradual loss of central vision, whereas those with exudative macular degeneration often notice a rapid onset of vision loss (often caused by leakage and bleeding of abnormal blood vessels).
- Trouble discerning colors, specifically distinguishing dark colors from one another and light colors from one another
- A loss in contrast sensitivity
Macular degeneration by itself will not lead to total blindness. For that matter, only a very small number of people with visual impairment are totally blind. In almost all cases, some vision remains, mainly peripheral. Other complicating conditions may possibly lead to such an acute condition (severe stroke or trauma, untreated glaucoma, etc.), but few macular degeneration patients experience total visual loss.
The area of the macula comprises only about 2.1% of the retina, and the remaining 97.9% (the peripheral field) remains unaffected by the disease. Even though the macula provides such a small fraction of the visual field, almost half of the visual cortex is devoted to processing macular information.
The loss of central vision profoundly affects visual functioning. It is quite difficult, for example, to read without central vision. Pictures that attempt to depict the central visual loss of macular degeneration with a black spot do not really do justice to the devastating nature of the visual loss. This can be demonstrated by printing letters six inches high on a piece of paper and attempting to identify them while looking straight ahead and holding the paper slightly to the side. Most people find this difficult to do.
Risk Factors:
- Aging: Advanced age is the strongest predictor of AMD, particularly over 50.[9]
- Family history:
Environment and lifestyle:
- Smoking: Smoking tobacco increases the risk of AMD by two to three times that of someone who has never smoked, and may be the most important modifiable factor in its prevention. A review of previous studies found "a strong association between current smoking and AMD. ... Cigarette smoking is likely to have toxic effects on the retina."
- Hypertension (high blood pressure): In the 2013 ALIENOR study, early and late AMD were not significantly associated with systolic or diastolic BP, hypertension, or use of antihypertensive medications, but elevated pulse pressure (PP, systolic BP minus diastolic BP) was significantly associated with an increased risk of late AMD.
- Atherosclerosis:
- High cholesterol: Elevated cholesterol may increase the risk of AMD.
- Obesity: Abdominal obesity is a risk factor, especially among men.
- Fat intake: Consuming high amounts of certain fats including saturated fats, trans fats and omega-6 fatty acids likely contributes to AMD, while monounsaturated fats are potentially protective. In particular, ω-3 fatty acids may decrease the risk of AMD.
- Exposure to sunlight, especially blue light: Evidence is conflicting as to whether exposure to sunlight contributes to the development of macular degeneration. A recent study on 446 subjects found it does not. Other research, however, has shown high-energy visible light may contribute to AMD.
Genetics:
Recurrence ratios for siblings of an affected individual are three- to sixfold higher than in the general population. Genetic linkage analysis has identified 5 sets of gene variants at three locations on different chromosomes (1, 6 and 10) as explaining at least 50% of the risk.
These genes have roles regulating immune response, inflammatory processes and homeostasis of the retina. Variants of these genes give rise to different kinds of dysfunction in these processes. Over time, this results in accumulation of intracellular and extracellular metabolic debris. This can cause scarring of the retina or breakdown of its vascularization.
Genetic tests are available for some of these gene variations. However, pathogenesis of macular degeneration is a complex interaction between genetics, environment and lifestyle, and presence of unfavorable genetic factors doesn't necessarily predict progression to disease.
The three loci where identified gene variants are found are designated:
- Complement Factor H (CFH) on chromosome 1 at location 1q31.3
- HTRA serine peptidase 1/Age Related Maculopathy Susceptibility 2 (HTRA1/ARMS2) on chromosome 10 at location 10q26
- Complement Factor B/Complement Component 2 (CFB/CC2) on chromosome 6 at 6p21.3
Specific genes:
- Polymorphisms in genes for complement system proteins: The genes for the complement system proteins factor H (CFH), factor B (CFB) and factor 3 (C3) are strongly associated with a person's risk for developing AMD. CFH is involved in inhibiting the inflammatory response. The mutation in CFH (Y402H) results in reduced ability of CFH to regulate complement on critical surfaces such as the retina and leads to increased inflammatory response within the macula. Absence of the complement factor H-related genes R3 and R1 protects against AMD. Two independent studies in 2007 showed a certain common mutation Arg80Gly in the C3 gene, which is a central protein of the complement system, is strongly associated with the occurrence of AMD. The authors of both papers consider their study to underscore the influence of the complement pathway in the pathogenesis of this disease.
- In two 2006 studies, another gene that has implications for the disease, called HTRA1 (encoding a secreted serine protease), was identified.
- Six mutations of the gene SERPING1 (Serpin Peptidase Inhibitor, Clade G (C1 Inhibitor), Member 1) are associated with AMD. Mutations in this gene can also cause hereditary angioedema.
- Fibulin-5 mutation: Rare forms of the disease are caused by genetic defects in fibulin-5, in an autosomal dominant manner. In 2004, Stone et al. performed a screen on 402 AMD patients and revealed a statistically significant correlation between mutations in fibulin-5 and incidence of the disease.
Mitochondrial-related gene polymorphisms, such as that in the MT-ND2 molecule, predict wet AMD.
Click on any of the following blue hyperlinks for more about Macular Degeneration:
- Pathophysiology (including oxidative stress)
- Stages
- Diagnosis
- Prevention
- Management (Dry AMD, Wet AMD, and adaptive devices)
- Epidemiology
- Research directions
- Other types
- Notable cases
- See also:
Gastric Bypass Surgery as a surgical means for reducing hunger for curing obesity
YouTube Video: Gastric Bypass Surgery Procedure (Animation)
Pictured: a Diagram of a Gastric Bypass, Courtesy of Lina Wolf - Own work, CC BY-SA 3.0
Gastric bypass surgery refers to a surgical procedure in which the stomach is divided into a small upper pouch and a much larger lower "remnant" pouch and then the small intestine is rearranged to connect to both.
Surgeons have developed several different ways to reconnect the intestine, thus leading to several different gastric bypass (GBP) procedures. Any GBP leads to a marked reduction in the functional volume of the stomach, accompanied by an altered physiological and physical response to food.
The operation is prescribed to treat morbid obesity (defined as a body mass index greater than 40), type 2 diabetes, hypertension, sleep apnea, and other comorbid conditions.
Bariatric surgery is the term encompassing all of the surgical treatments for morbid obesity, not just gastric bypasses, which make up only one class of such operations. The resulting weight loss, typically dramatic, markedly reduces comorbidities.
The long-term mortality rate of gastric bypass patients has been shown to be reduced by up to 40%. As with all surgery, complications may occur. A study from 2005 to 2006 revealed that 15% of patients experience complications as a result of gastric bypass, and 0.5% of patients died within six months of surgery due to complications.
Uses:
Gastric bypass is indicated for the surgical treatment of morbid obesity, a diagnosis which is made when the patient is seriously obese, has been unable to achieve satisfactory and sustained weight loss by dietary efforts, and suffers from comorbid conditions which are either life-threatening or a serious impairment to the quality of life.
Prior to 1991, clinicians interpreted serious obesity as weighing at least 100 pounds (45 kg) more than the "ideal body weight", an actuarially determined body weight at which one was estimated to be likely to live the longest, as determined by the life-insurance industry. This criterion failed for persons of short stature.
In 1991, the National Institutes of Health (NIH) sponsored a consensus panel whose recommendations have set the current standard for consideration of surgical treatment, the body mass index (BMI). The BMI is defined as the body weight (in kilograms), divided by the square of the height (in meters). The result is expressed as a number in units of kilograms per square meter.
In healthy adults, BMI ranges from 18.5 to 24.9, with a BMI above 30 being considered obese, and a BMI less than 18.5 considered underweight. (BMI is by itself not a reliable index of obesity: serious bodybuilders or strength athletes have BMIs in the obesity range while having relatively little body fat.)
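The BMI formula described above (weight in kilograms divided by the square of height in meters) is simple enough to express directly; a minimal sketch, with the function name `bmi` being our own choice:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

# A 100 kg person who is 1.75 m tall:
print(round(bmi(100, 1.75), 1))  # 32.7 -- above 30, in the "obese" range
```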
The Consensus Panel of the National Institutes of Health (NIH) recommended the following criteria for consideration of bariatric surgery, including gastric bypass procedures:
- people who have a BMI of 40 or higher
- people with a BMI of 35 or higher with one or more related comorbid conditions
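The two NIH criteria above reduce to a simple boolean check. The sketch below is illustrative only (the function name and parameters are our own, and actual candidacy also depends on the multidisciplinary evaluation the panel emphasized):

```python
def meets_nih_criteria(bmi: float, has_comorbidity: bool) -> bool:
    """1991 NIH consensus criteria for considering bariatric surgery:
    BMI of 40 or higher, or BMI of 35 or higher with at least one
    related comorbid condition."""
    return bmi >= 40 or (bmi >= 35 and has_comorbidity)

print(meets_nih_criteria(42.0, False))  # True  (BMI alone qualifies)
print(meets_nih_criteria(36.5, True))   # True  (BMI 35+ with comorbidity)
print(meets_nih_criteria(36.5, False))  # False
```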
The Consensus Panel also emphasized the necessity of multidisciplinary care of the bariatric surgical patient by a team of physicians and therapists to manage associated comorbidities and nutrition, physical activity, behavior, and psychological needs. The surgical procedure is best regarded as a tool which enables the patient to alter lifestyle and eating habits, and to achieve effective and permanent management of obesity and eating behavior.
Since 1991, major developments in the field of bariatric surgery, particularly laparoscopy, have outdated some of the conclusions of the NIH panel. In 2004 the American Society for Bariatric Surgery (ASBS) sponsored a consensus conference which updated the evidence and the conclusions of the NIH panel. This conference, composed of physicians and scientists of both surgical and non-surgical disciplines, reached several conclusions, including:
- bariatric surgery is the most effective treatment for morbid obesity
- gastric bypass is one of four types of operations for morbid obesity
- laparoscopic surgery is equally effective and as safe as open surgery
- patients should undergo comprehensive preoperative evaluation and have multi-disciplinary support for optimum outcome
Click on any of the following blue hyperlinks for more about Gastric Bypass Surgery:
- Surgical techniques
- Physiology
- Complications
- Results and health benefits of gastric bypass
- Cost of gastric bypass
- Living with gastric bypass
- Surgeon accreditation
- See also:
- Adjustable gastric banding surgery
- Duodenal Switch surgery
- Roux-en-Y anastomosis
- Vagotomy—Cutting of the vagus nerve to reduce the feeling of hunger
- StomaphyX—Revisional, natural orifice procedure for patients who have regained weight after gastric bypass
- American Society for Bariatric Surgery
- ASBS Consensus Conference Statement – 2004
- NIH – Gastrointestinal Surgery for Obesity
- NIH Medline Plus – Multiple Links to articles, videos about bariatric surgery
- Metabolic & Weight Loss Surgical Procedures Gallery - Including information on bariatric surgery
Diseases that have been eliminated from the United States
YouTube Video: Eradicating Polio: Reaching the Last Child by Harvard University
Pictured: Impact of diseases which have since been eradicated from the United States: (L) Smallpox, (C) Yellow Fever and (R) Malaria
This is a list of diseases known (or declared) to have been eliminated from the United States, either permanently or at one time.
Most of the diseases listed were eliminated after coordinated public health campaigns. (Since some diseases can be eliminated and then reintroduced at a later time, such diseases are still eligible for the list, but with the fact of reintroduction noted.) Some entries are based on formal public health declarations, others are based on reliable information in the medical or public health literature.
Since some diseases can be eliminated, but subsequently reimported without transmitting additional endemic cases, these are noted in a dedicated column. Although no fixed rule always applies, many infectious diseases (e.g., measles) are considered eliminated when no cases have been reported to public health authorities for at least 12 months.
(NB: In recent years, "elimination" is the preferred term for "regional eradication" of a disease; the term "eradication" is reserved for the reduction of an infectious disease's global prevalence to zero.)
Click on any of the following blue hyperlinks for a list of diseases eradicated in the United States:
- Yellow fever
- Smallpox
- Babesia bovis babesiosis (Cattle disease; occasionally infects humans)
- Malaria (See National Malaria Eradication Program)
- Poliomyelitis (see Poliomyelitis eradication)
- Measles (After widespread national vaccination efforts.)
- Diphtheria (After widespread national vaccination efforts.)
Possible Future Eradications:
Various public health projects are ongoing with a goal of eliminating diseases from the country. Note that several infectious diseases in the United States, not on the above list, are considered close to elimination (98-99% reductions): e.g.,
Other disease pathogens (e.g., those of anthrax and rabies) have been almost entirely eliminated from humans in the US, but remain as hazards in the environment and so cannot accurately be described as eliminated.
The stated goal of "eradication" of hookworm from the southeast US (1915-20) was not achieved, although the hookworm-infection rate of that region did drop by more than half:
- In 1954, Congressional funds were first approved for a Cooperative State-Federal Brucellosis Eradication Program to eliminate the disease from the country. (Brucellosis is a problem mainly in livestock. In 1956, there were 124,000 affected herds found by testing in the US. By 1992, the number had dropped to 700 herds and the number of affected, domestic herds has declined to single digits since then.)
- The CDC Division of TB Elimination has a goal of controlling tuberculosis and eliminating it from the United States by minimizing the likelihood of Mycobacterium tuberculosis transmission, which will prevent the occurrence of new cases.
- The Oral Rabies Vaccine (ORV) Program has a goal of preventing the spread of raccoon variant rabies and eventually eliminating it from the United States.
- There has been an effort for several years to eliminate syphilis from the US. The rate of infection decreased through the 1990s, and in 2000 it was the lowest since reporting began in 1941, leading the US Surgeon General to issue a plan to eliminate the disease from the country. It has been staging a comeback, however, increasing each year since 2001.
Jonas Salk
YouTube Video about Jonas Salk and the Polio Vaccine
Pictured: Jonas Salk on the cover of Time Magazine, March 29, 1954
Jonas Edward Salk (October 28, 1914 – June 23, 1995) was an American medical researcher and virologist. He discovered and developed one of the first successful polio vaccines.
Born in New York City, he attended New York University School of Medicine, later choosing to do medical research instead of becoming a practicing physician. In 1939, after earning his medical degree, Salk began an internship as a physician scientist at Mount Sinai Hospital.
Two years later he was granted a fellowship at the University of Michigan, where he would study flu viruses with his mentor Thomas Francis, Jr.
Until 1955, when the Salk vaccine was introduced, polio was considered one of the most frightening public health problems in the world. In the postwar United States, annual epidemics were increasingly devastating.
The 1952 U.S. epidemic was the worst outbreak in the nation's history. Of nearly 58,000 cases reported that year, 3,145 people died and 21,269 were left with mild to disabling paralysis, with most of its victims being children. The "public reaction was to a plague", said historian William L. O'Neill. "Citizens of urban areas were to be terrified every summer when this frightful visitor returned."
According to a 2009 PBS documentary, "Apart from the atomic bomb, America's greatest fear was polio." As a result, scientists were in a frantic race to find a way to prevent or cure the disease. In 1938, U.S. President Franklin D. Roosevelt, the world's most recognized victim of the disease, had founded the National Foundation for Infantile Paralysis (known as March of Dimes Foundation since 2007), an organization that would fund the development of a vaccine.
In 1947, Salk accepted an appointment to the University of Pittsburgh School of Medicine. In 1948, he undertook a project funded by the National Foundation for Infantile Paralysis to determine the number of different types of polio virus. Salk saw an opportunity to extend this project towards developing a vaccine against polio, and, together with the skilled research team he assembled, devoted himself to this work for the next seven years.
The field trial set up to test the Salk vaccine was, according to O'Neill, "the most elaborate program of its kind in history, involving 20,000 physicians and public health officers, 64,000 school personnel, and 220,000 volunteers." Over 1,800,000 school children took part in the trial.
When news of the vaccine's success was made public on April 12, 1955, Salk was hailed as a "miracle worker" and the day almost became a national holiday. Around the world, an immediate rush to vaccinate began, with countries including Canada, Sweden, Denmark, Norway, West Germany, the Netherlands, Switzerland, and Belgium planning to begin polio immunization campaigns using Salk's vaccine.
Salk campaigned for mandatory vaccination, claiming that public health should be considered a "moral commitment." His sole focus had been to develop a safe and effective vaccine as rapidly as possible, with no interest in personal profit. When asked who owned the patent to it, Salk said, "Well, the people I would say. There is no patent. Could you patent the sun?"
In 1960, he founded the Salk Institute for Biological Studies in La Jolla, California, which is today a center for medical and scientific research. He continued to conduct research and publish books, including Man Unfolding (1972), The Survival of the Wisest (1973), World Population and Human Values: A New Reality (1981), and Anatomy of Reality: Merging of Intuition and Reason (1983). Salk's last years were spent searching for a vaccine against HIV. His personal papers are stored at the University of California, San Diego Library.
Click here for more about Jonas Salk.
Nanomedicine
YouTube Video: Nanomedicines -- The way of the future?
Pictured: What are the differences between nanomedicine and conventional medicine?
Nanomedicine is the medical application of nanotechnology. Nanomedicine ranges from the medical applications of nanomaterials and biological devices, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology such as biological machines.
Current problems for nanomedicine involve understanding the issues related to toxicity and environmental impact of nanoscale materials (materials whose structure is on the scale of nanometers, i.e. billionths of a meter).
Functionalities can be added to nanomaterials by interfacing them with biological molecules or structures. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications.
Thus far, the integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles.
Nanomedicine seeks to deliver a valuable set of research tools and clinically useful devices in the near future. The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging. Nanomedicine research is receiving funding from the US National Institutes of Health Common Fund program, supporting four nanomedicine development centers.
Nanomedicine sales reached $16 billion in 2015, with a minimum of $3.8 billion in nanotechnology R&D being invested every year. Global funding for emerging nanotechnology increased by 45% per year in recent years, with product sales exceeding $1 trillion in 2013. As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy.
Drug Delivery:
Nanotechnology has provided the possibility of delivering drugs to specific cells using nanoparticles. The overall drug consumption and side-effects may be lowered significantly by depositing the active agent in the morbid region only and in no higher dose than needed.
Targeted drug delivery is intended to reduce the side effects of drugs with concomitant decreases in consumption and treatment expenses. Drug delivery focuses on maximizing bioavailability, both at specific places in the body and over a period of time.
This can potentially be achieved by molecular targeting by nanoengineered devices. A benefit of using the nanoscale for medical technologies is that smaller devices are less invasive and can possibly be implanted inside the body; in addition, biochemical reaction times are much shorter.
These devices are faster and more sensitive than conventional drug delivery. The efficacy of drug delivery through nanomedicine is largely based upon: a) efficient encapsulation of the drug, b) successful delivery of the drug to the targeted region of the body, and c) successful release of the drug there.
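To make the dependence on those three factors concrete, here is a toy calculation (all numbers hypothetical) treating overall efficacy as the product of the three stage efficiencies, assuming the stages are independent:

```python
# Hypothetical sketch: treat overall delivery efficacy as the product of the
# three stages named above (encapsulation, targeted delivery, release),
# assuming the stages are independent. Small per-stage losses compound.
def overall_efficacy(encapsulation: float, delivery: float, release: float) -> float:
    """Fraction of the administered dose released at the target."""
    for stage in (encapsulation, delivery, release):
        if not 0.0 <= stage <= 1.0:
            raise ValueError("each stage efficiency must lie in [0, 1]")
    return encapsulation * delivery * release

# Made-up stage efficiencies: 90% encapsulated, 50% reach the target,
# 80% released there -> only 36% of the dose is effective.
print(round(overall_efficacy(0.9, 0.5, 0.8), 2))
```

Even generous per-stage numbers leave only about a third of the dose doing useful work, which is why each stage is a research target in its own right.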
Drug delivery systems, such as lipid- or polymer-based nanoparticles, can be designed to improve the pharmacokinetics and biodistribution of a drug. However, the pharmacokinetics and pharmacodynamics of nanomedicines are highly variable among different patients. When designed to avoid the body's defence mechanisms, nanoparticles have beneficial properties that can be used to improve drug delivery.
Complex drug delivery mechanisms are being developed, including the ability to get drugs through cell membranes and into cell cytoplasm. Triggered response is one way for drug molecules to be used more efficiently. Drugs are placed in the body and only activate on encountering a particular signal. For example, a drug with poor solubility will be replaced by a drug delivery system where both hydrophilic and hydrophobic environments exist, improving the solubility.
Drug delivery systems may also be able to prevent tissue damage through regulated drug release; reduce drug clearance rates; or lower the volume of distribution and reduce the effect on non-target tissue. However, the biodistribution of these nanoparticles is still imperfect, owing to the host's complex reactions to nano- and microsized materials and the difficulty of targeting specific organs in the body.
Nevertheless, much work is ongoing to optimize and better understand the potential and limitations of nanoparticulate systems. While research demonstrates that nanoparticles can improve targeting and distribution, understanding the dangers of nanotoxicity is an important next step in establishing their medical uses.
Nanoparticles are under research for their potential to decrease antibiotic resistance or for various antimicrobial uses. Nanoparticles might also be used to circumvent multidrug resistance (MDR) mechanisms.
Systems under research:
Two forms of nanomedicine that have already been tested in mice and are awaiting human testing are gold nanoshells, used to help diagnose and treat cancer, and liposomes, used as vaccine adjuvants and drug-transport vehicles.
Drug detoxification is another application of nanomedicine that has shown promising results in rats. Advances in lipid nanotechnology have also been instrumental in engineering medical nanodevices and novel drug delivery systems, as well as in developing sensing applications. Further examples include dendrimers and nanoporous materials, and block co-polymers, which form micelles for drug encapsulation.
Polymeric nanoparticles are a competing technology to lipidic nanoparticles (based mainly on phospholipids). Polymers carry an additional risk of toxicity where they are not widely studied or understood. The major advantages of polymers are stability, lower cost, and predictable characterization; however, in the patient's body this very stability (slow degradation) is a negative factor.
Phospholipids, on the other hand, are membrane lipids (already present in the body and surrounding each cell), have GRAS (Generally Recognised As Safe) status from the FDA, and are derived from natural sources without any complex chemistry involved. They are not metabolised but rather absorbed by the body, and their degradation products are themselves nutrients (fats or micronutrients).
Proteins and peptides exert multiple biological actions in the human body and have been identified as showing great promise for the treatment of various diseases and disorders. These macromolecules are called biopharmaceuticals. Targeted and/or controlled delivery of biopharmaceuticals using nanomaterials such as nanoparticles and dendrimers is an emerging field called nanobiopharmaceutics, and the resulting products are called nanobiopharmaceuticals.
Another highly efficient delivery system, for microRNA for example, is nanoparticles formed by the self-assembly of two different microRNAs deregulated in cancer.
Another vision is based on small electromechanical systems; nanoelectromechanical systems are being investigated for the active release of drugs. Some potentially important applications include cancer treatment with iron nanoparticles or gold shells.
Nanotechnology is also opening up new opportunities in implantable delivery systems, which are often preferable to the use of injectable drugs, because the latter frequently display first-order kinetics (the blood concentration goes up rapidly, but drops exponentially over time).
This rapid rise may cause difficulties with toxicity, and drug efficacy can diminish as the drug concentration falls below the targeted range.
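The first-order kinetics described above can be sketched numerically. This is a minimal illustration with made-up numbers (the dose, half-life, and therapeutic threshold are assumptions, not values from the text):

```python
import math

# First-order elimination after an injection: C(t) = C0 * exp(-k * t),
# where k = ln(2) / half-life. The concentration spikes at t = 0 and then
# decays exponentially, eventually falling below the therapeutic minimum.
def concentration(c0: float, half_life_h: float, t_h: float) -> float:
    k = math.log(2.0) / half_life_h     # elimination rate constant (1/h)
    return c0 * math.exp(-k * t_h)

c0, half_life = 8.0, 3.0                # mg/L and hours (hypothetical)
therapeutic_min = 1.0                   # mg/L (hypothetical)
for t in (0, 4, 8, 12):
    c = concentration(c0, half_life, t)
    note = "" if c >= therapeutic_min else "  <- below therapeutic range"
    print(f"t = {t:2d} h   C = {c:5.2f} mg/L{note}")
```

An implantable or sustained-release system aims to flatten this curve so that the concentration stays inside the therapeutic window for longer.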
Cancer:
Existing and potential drug nanocarriers have been reviewed.
Nanoparticles have a high surface area to volume ratio. This allows many functional groups to be attached to a nanoparticle, which can seek out and bind to certain tumor cells. Additionally, the small size of nanoparticles (10 to 100 nanometers) allows them to preferentially accumulate at tumor sites (because tumors lack an effective lymphatic drainage system). Limitations of conventional cancer chemotherapy include drug resistance, lack of selectivity, and lack of solubility. Nanoparticles have the potential to overcome these problems.
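The surface-area-to-volume advantage is pure geometry: for a sphere the ratio is 3/r, so it grows as the particle shrinks. A quick check:

```python
import math

# For a sphere of radius r: area = 4*pi*r^2, volume = (4/3)*pi*r^3,
# so area/volume = 3/r. Shrinking the radius 100-fold (1 um -> 10 nm)
# multiplies the ratio, and hence the room for surface functional
# groups per unit volume, by 100.
def surface_to_volume(radius_nm: float) -> float:
    area = 4.0 * math.pi * radius_nm ** 2
    volume = (4.0 / 3.0) * math.pi * radius_nm ** 3
    return area / volume

for r in (1000.0, 100.0, 10.0):         # radii in nanometers
    print(f"r = {r:6.0f} nm   SA/V = {surface_to_volume(r):.3f} nm^-1")
```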
In photodynamic therapy, a particle is placed within the body and is illuminated with light from the outside. The light gets absorbed by the particle and if the particle is metal, energy from the light will heat the particle and surrounding tissue.
Light may also be used to produce high-energy oxygen molecules, which chemically react with and destroy most organic molecules next to them (such as tumors). This therapy is appealing for many reasons: unlike chemotherapy, it does not leave a "toxic trail" of reactive molecules throughout the body, because it acts only where the light is shined and the particles are present.
Photodynamic therapy has potential as a noninvasive procedure for dealing with diseases, growths, and tumors. Kanzius RF therapy (nanoparticle hyperthermia) is one example of such a therapy. Gold nanoparticles also have the potential to join numerous therapeutic functions into a single platform by targeting specific tumor cells, tissues, and organs.
Imaging:
In vivo imaging is another area where tools and devices are being developed. Using nanoparticle contrast agents, images such as ultrasound and MRI have a favorable distribution and improved contrast. In cardiovascular imaging, nanoparticles have potential to aid visualization of blood pooling, ischemia, angiogenesis, atherosclerosis, and focal areas where inflammation is present.
The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites.
Nanoparticles of cadmium selenide (quantum dots) glow when exposed to ultraviolet light. When injected, they seep into cancer tumors. The surgeon can see the glowing tumor and use it as a guide for more accurate tumor removal. These nanoparticles are much brighter than organic dyes and need only one light source for excitation.
This means that the use of fluorescent quantum dots could produce a higher contrast image and at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements, but this concern may be addressed by use of fluorescent dopants.
Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. These dyes needed to be excited by light of a certain wavelength in order to light up.
While different-colored dyes absorb different frequencies of light, there was a need for as many light sources as there were types of cells. A way around this problem is luminescent tags: quantum dots attached to proteins that penetrate cell membranes. The dots can vary in size, can be made of bio-inert material, and demonstrate the nanoscale property that color is size-dependent.
Sizes are selected so that the frequency of light used to make one group of quantum dots fluoresce is an even multiple of the frequency required to make another group fluoresce; both groups can then be lit with a single light source. Researchers have also found a way to insert nanoparticles into affected parts of the body so that those parts glow, showing tumor growth or shrinkage, or organ trouble.
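The size-dependent color of quantum dots comes from quantum confinement. As a rough sketch, the simplest particle-in-a-box model adds a confinement term scaling as 1/L² to the bulk band gap; the effective mass and band gap below are assumptions chosen for illustration, not material data from the text (a real calculation would use the full Brus equation):

```python
# Particle-in-a-box sketch of quantum confinement: emission energy rises
# as the dot shrinks, so smaller dots emit bluer light. Constants below
# are illustrative assumptions (0.1 electron masses, 1.74 eV bulk gap).
H = 6.626e-34                 # Planck constant, J*s
M_EFF = 0.1 * 9.109e-31       # assumed effective mass, kg
EV = 1.602e-19                # joules per electronvolt

def emission_energy_ev(size_nm: float, bulk_gap_ev: float = 1.74) -> float:
    l = size_nm * 1e-9                               # box size in meters
    confinement = H ** 2 / (8.0 * M_EFF * l ** 2)    # ground-state energy
    return bulk_gap_ev + confinement / EV

for size in (8.0, 4.0, 2.0):   # nm: halving the size quadruples the shift
    print(f"{size:3.0f} nm dot -> ~{emission_energy_ev(size):.2f} eV")
```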
Sensing:
Main article: Nanosensor
Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures, or microorganisms. Gold nanoparticles tagged with short segments of DNA can be used for the detection of genetic sequences in a sample.
Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures.
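A rough sense of why multicolor coding is powerful: with m spectrally distinguishable dot colors, each at one of n distinguishable intensity levels, a bead can theoretically carry n^m − 1 distinct codes (excluding the all-off bead). The particular m and n below are illustrative assumptions:

```python
# Theoretical code space of multicolor, multi-intensity optical barcodes.
def code_space(colors: int, intensity_levels: int) -> int:
    """Number of nonzero (color, intensity) combinations per bead."""
    return intensity_levels ** colors - 1

print(code_space(3, 10))   # 3 colors x 10 levels -> 999 codes
print(code_space(6, 10))   # 6 colors x 10 levels -> 999999 codes
```

In practice, spectral overlap and intensity noise make the usable code space much smaller than this upper bound.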
Sensor test chips containing thousands of nanowires, able to detect proteins and other biomarkers left behind by cancer cells, could enable the detection and diagnosis of cancer in its early stages from a few drops of a patient's blood. Nanotechnology is also advancing the use of arthroscopes: pencil-sized devices with lights and cameras that let surgeons operate through smaller incisions. The smaller the incision, the faster the healing time, which is better for patients. Work is also underway to make an arthroscope smaller than a strand of hair.
Research on nanoelectronics-based cancer diagnostics could lead to tests that can be done in pharmacies. The results promise to be highly accurate and the product promises to be inexpensive. Such a test could take a very small amount of blood and detect cancer anywhere in the body in about five minutes, with a sensitivity a thousand times better than that of a conventional laboratory test.
These devices are built with nanowires that detect cancer proteins; each nanowire detector is primed to be sensitive to a different cancer marker. The biggest advantage of the nanowire detectors is that they could test for anywhere from ten to one hundred similar medical conditions without adding cost to the testing device.
Nanotechnology has also helped to personalize oncology for the detection, diagnosis, and treatment of cancer. Treatment can now be tailored to each individual's tumor for better performance, and researchers have found ways to target the specific part of the body affected by cancer.
Blood purification:
Magnetic micro particles are proven research instruments for the separation of cells and proteins from complex media. The technology is available under the name Magnetic-activated cell sorting or Dynabeads among others.
More recently it was shown in animal models that magnetic nanoparticles can be used for the removal of various noxious compounds including toxins, pathogens, and proteins from whole blood in an extracorporeal circuit similar to dialysis.
In contrast to dialysis, which works on the principle of size-related diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane, purification with nanoparticles allows specific targeting of substances. Additionally, larger compounds, which are commonly not dialyzable, can be removed.
The purification process is based on functionalized iron oxide or carbon coated metal nanoparticles with ferromagnetic or superparamagnetic properties. Binding agents such as proteins, antibodies, antibiotics, or synthetic ligands are covalently linked to the particle surface. These binding agents are able to interact with target species forming an agglomerate.
Applying an external magnetic field gradient allows exerting a force on the nanoparticles. Hence the particles can be separated from the bulk fluid, thereby cleaning it from the contaminants.
The small size (< 100 nm) and large surface area of functionalized nanomagnets lead to advantageous properties compared to hemoperfusion, a clinically used technique for the purification of blood that is based on surface adsorption.
These advantages are high loading and accessibility of the binding agents, high selectivity towards the target compound, fast diffusion, small hydrodynamic resistance, and low dosage.
This approach offers new therapeutic possibilities for the treatment of systemic infections such as sepsis by directly removing the pathogen. It can also be used to selectively remove cytokines or endotoxins, or for the dialysis of compounds that are not accessible by traditional dialysis methods. However, the technology is still in a preclinical phase, and the first clinical trials are not expected before 2017.
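The physics of the magnetic separation step can be sketched with a simple force balance: the field gradient pulls on each particle, and in a viscous fluid the particle settles at the terminal velocity where that pull equals Stokes drag. Every numeric value below is an assumption chosen for illustration:

```python
import math

# Magnetophoretic force on a small particle of volume V and effective
# susceptibility chi in field B with gradient dB/dx:
#   F = V * chi * B * (dB/dx) / mu0
# Balancing against Stokes drag F = 6 * pi * eta * r * v gives the
# terminal velocity v. Note v scales as r^2: bigger beads separate faster.
MU0 = 4.0e-7 * math.pi      # vacuum permeability, T*m/A
ETA = 3.5e-3                # viscosity of blood, Pa*s (assumed)

def terminal_velocity(radius_m: float, chi: float,
                      b: float, grad_b: float) -> float:
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    f_mag = volume * chi * b * grad_b / MU0
    return f_mag / (6.0 * math.pi * ETA * radius_m)

# 50 nm-radius particle, chi = 1, 0.5 T field with a 100 T/m gradient:
print(f"v = {terminal_velocity(50e-9, 1.0, 0.5, 100.0):.1e} m/s")
```

The resulting velocity is tiny for a single 50 nm particle, which is one reason the binding step described above, which forms larger agglomerates, helps the magnetic capture.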
Tissue engineering:
Nanotechnology may be used as part of tissue engineering to help reproduce, repair, or reshape damaged tissue using suitable nanomaterial-based scaffolds and growth factors. If successful, tissue engineering may replace conventional treatments such as organ transplants or artificial implants.
Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide, and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong, biodegradable polymeric nanocomposites for bone tissue engineering applications. Adding these nanoparticles to the polymer matrix at low concentrations (~0.2 weight %) leads to significant improvements in the compressive and flexural mechanical properties of the nanocomposites.
Potentially, these nanocomposites may be used as novel, mechanically strong, lightweight bone implants.
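For scale, the ~0.2 weight-percent loading quoted above corresponds to a very small mass of filler; a quick arithmetic check (the batch size is an arbitrary example):

```python
# Mass of nanoparticle filler needed for a given composite batch at a
# given weight-percent loading. Pure arithmetic; 50 g is an arbitrary batch.
def filler_mass_g(total_mass_g: float, weight_percent: float) -> float:
    return total_mass_g * weight_percent / 100.0

total = 50.0
filler = filler_mass_g(total, 0.2)
print(f"{filler:.2f} g filler + {total - filler:.2f} g polymer")
```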
For example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold-coated nanoshells activated by an infrared laser. This could be used to weld arteries during surgery. Another example is nanonephrology, the use of nanomedicine on the kidney.
Medical devices:
Neuro-electronic interfacing is a visionary goal dealing with the construction of nanodevices that will permit computers to be joined and linked to the nervous system. This idea requires the building of a molecular structure that will permit control and detection of nerve impulses by an external computer.
A refuelable strategy implies energy is refilled continuously or periodically with external sonic, chemical, tethered, magnetic, or biological electrical sources, while a nonrefuelable strategy implies that all power is drawn from internal energy storage which would stop when all energy is drained.
A nanoscale enzymatic biofuel cell for self-powered nanodevices has been developed that uses glucose from biofluids, including human blood and watermelons. One limitation of this innovation is that electrical interference, leakage, or overheating from power consumption is possible.
The wiring of the structure is extremely difficult, because the components must be positioned precisely in the nervous system. The structures that provide the interface must also be compatible with the body's immune system.
Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, machines which could re-order matter at a molecular or atomic scale.
Nanomedicine would make use of these nanorobots, introduced into the body, to detect or repair damage and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry.
The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots, are far beyond current capabilities. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging.
K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular machines, in his 1986 book Engines of Creation, with the first technical discussion of medical nanorobots by Robert Freitas appearing in 1999.
Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030.
According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines (see nanotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.
Applications:
Some nanotechnology-based drugs that are commercially available or in human clinical trials include:
- Abraxane, approved by the U.S. Food and Drug Administration (FDA) to treat breast cancer, non-small-cell lung cancer (NSCLC), and pancreatic cancer, is nanoparticle albumin-bound paclitaxel.
- Doxil was originally approved by the FDA for use against HIV-related Kaposi's sarcoma. It is now also used to treat ovarian cancer and multiple myeloma. The drug is encased in liposomes, which helps to extend the life of the drug in circulation. Liposomes are self-assembling, spherical, closed colloidal structures composed of lipid bilayers that surround an aqueous space. The liposomes also increase the drug's functionality and, in particular, decrease the damage the drug does to the heart muscle.
- Onivyde, liposome-encapsulated irinotecan to treat metastatic pancreatic cancer, was approved by the FDA in October 2015.
- C-dots (Cornell dots) are the smallest silica-based nanoparticles, measuring less than 10 nm. The particles are infused with an organic dye that lights up with fluorescence. A clinical trial has been underway since 2011 to use C-dots as a diagnostic tool to help surgeons identify the location of tumor cells.
- An early-phase clinical trial using the 'minicell' nanoparticle platform for drug delivery has been conducted on patients with advanced and untreatable cancer. Built from the membranes of mutant bacteria, the minicells were loaded with paclitaxel and coated with cetuximab, an antibody that binds the epidermal growth factor receptor (EGFR), which is often overexpressed in a number of cancers, as a 'homing' device to the tumor cells. The tumor cells recognize the bacteria from which the minicells have been derived, regard them as invading microorganisms, and engulf them. Once inside, the payload of anti-cancer drug kills the tumor cells. Measured at 400 nanometers, the minicell is bigger than synthetic particles developed for drug delivery. The researchers indicated that this larger size gives the minicells a better side-effect profile, because minicells preferentially leak out of the porous blood vessels around tumor cells and do not reach the liver, digestive system, or skin. This Phase 1 clinical trial demonstrated that the treatment is well tolerated by patients. As a platform technology, the minicell drug delivery system can be used to treat a number of different cancers with different anti-cancer drugs, with the benefit of lower doses and fewer side-effects.
- In 2014, a Phase 3 clinical trial for treating inflammation and pain after cataract surgery, and a Phase 2 trial for treating dry eye disease, were initiated using nanoparticle loteprednol etabonate. In 2015, the product, KPI-121, was found to produce statistically significant positive results for the post-surgery treatment.
Cancer:
Existing and potential drug nanocarriers have been reviewed.
Nanoparticles have high surface area to volume ratio. This allows for many functional groups to be attached to a nanoparticle, which can seek out and bind to certain tumor cells. Additionally, the small size of nanoparticles (10 to 100 nanometers), allows them to preferentially accumulate at tumor sites (because tumors lack an effective lymphatic drainage system). Limitations to conventional cancer chemotherapy include drug resistance, lack of selectivity, and lack of solubility. Nanoparticles have the potential to overcome these problems.
In photodynamic therapy, a particle is placed within the body and is illuminated with light from the outside. The light gets absorbed by the particle and if the particle is metal, energy from the light will heat the particle and surrounding tissue.
Light may also be used to produce high energy oxygen molecules which will chemically react with and destroy most organic molecules that are next to them (like tumors). This therapy is appealing for many reasons. It does not leave a "toxic trail" of reactive molecules throughout the body (chemotherapy) because it is directed where only the light is shined and the particles exist.
Photodynamic therapy has potential for a noninvasive procedure for dealing with diseases, growth and tumors. Kanzius RF therapy is one example of such therapy (nanoparticle hyperthermia) . Also, gold nanoparticles have the potential to join numerous therapeutic functions into a single platform, by targeting specific tumor cells, tissues and organs.
Imaging:
In vivo imaging is another area where tools and devices are being developed. Using nanoparticle contrast agents, images such as ultrasound and MRI have a favorable distribution and improved contrast. In cardiovascular imaging, nanoparticles have potential to aid visualization of blood pooling, ischemia, angiogenesis, atherosclerosis, and focal areas where inflammation is present.
The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites.
Nanoparticles of cadmium selenide (quantum dots) glow when exposed to ultraviolet light. When injected, they seep into cancer tumors. The surgeon can see the glowing tumor and use it as a guide for more accurate tumor removal. These nanoparticles are much brighter than organic dyes and only need one light source for excitation.
This means that the use of fluorescent quantum dots could produce a higher contrast image and at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements, but this concern may be addressed by use of fluorescent dopants.
Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. These dyes needed to be excited by light of a certain wavelength in order for them to light up.
While different-color dyes absorb different frequencies of light, this meant as many light sources were needed as there were dyes. A way around this problem is luminescent tags: quantum dots attached to proteins that penetrate cell membranes. The dots come in a range of sizes, can be made of bio-inert material, and demonstrate the nanoscale property that color is size-dependent.
As a result, sizes are selected so that the frequency of light used to make one group of quantum dots fluoresce is an even multiple of the frequency required to make another group fluoresce; both groups can then be lit with a single light source. Researchers have also found ways to insert nanoparticles into the affected parts of the body so that those parts glow, showing tumor growth or shrinkage, or organ trouble.
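The size dependence of quantum-dot color can be sketched with the Brus effective-mass approximation: confinement energy grows as the dot shrinks, widening the effective band gap. The CdSe material constants below are textbook approximations, not values from this article:

```python
import math

H = 6.626e-34        # Planck constant, J*s
M0 = 9.109e-31       # electron rest mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C
EPS0 = 8.854e-12     # vacuum permittivity, F/m

def cdse_emission_ev(radius_m: float) -> float:
    """Approximate emission energy (eV) of a CdSe quantum dot (Brus equation)."""
    e_bulk = 1.74                      # bulk CdSe band gap, eV (textbook value)
    m_e, m_h = 0.13 * M0, 0.45 * M0    # effective electron / hole masses
    eps_r = 10.6                       # relative permittivity of CdSe
    confinement = (H ** 2 / (8 * radius_m ** 2)) * (1 / m_e + 1 / m_h)
    coulomb = 1.8 * E_CHARGE ** 2 / (4 * math.pi * EPS0 * eps_r * radius_m)
    return e_bulk + (confinement - coulomb) / E_CHARGE

def emission_wavelength_nm(radius_m: float) -> float:
    """Convert the gap energy to an emission wavelength via 1240 / E(eV)."""
    return 1240.0 / cdse_emission_ev(radius_m)
```

Because the confinement term scales as 1/r², smaller dots emit bluer light, which is why a single short-wavelength source can excite dots of several sizes at once.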
Sensing:
Main article: Nanosensor
Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures or microorganisms. Gold nanoparticles tagged with short segments of DNA can be used for detection of genetic sequence in a sample.
Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures.
Sensor test chips containing thousands of nanowires, able to detect proteins and other biomarkers left behind by cancer cells, could enable the detection and diagnosis of cancer in its early stages from a few drops of a patient's blood. Nanotechnology is also helping to advance the use of arthroscopes: pencil-sized devices with lights and cameras that let surgeons operate through smaller incisions. The smaller the incisions, the faster the healing time, which is better for patients. Researchers are also working toward an arthroscope smaller than a strand of hair.
Research on nanoelectronics-based cancer diagnostics could lead to tests that can be done in pharmacies. The results promise to be highly accurate and the product promises to be inexpensive. They could take a very small amount of blood and detect cancer anywhere in the body in about five minutes, with a sensitivity that is a thousand times better than in a conventional laboratory test.
These devices are built with nanowires that detect cancer proteins; each nanowire detector is primed to be sensitive to a different cancer marker. The biggest advantage of the nanowire detectors is that they could test for anywhere from ten to one hundred similar medical conditions without adding cost to the testing device.
Nanotechnology has also helped to personalize oncology for the detection, diagnosis, and treatment of cancer. Treatment can now be tailored to each individual's tumor for better performance, and researchers have found ways to target the specific part of the body affected by the cancer.
Blood purification:
Magnetic microparticles are proven research instruments for the separation of cells and proteins from complex media. The technology is available under names such as magnetic-activated cell sorting (MACS) and Dynabeads, among others.
More recently, it was shown in animal models that magnetic nanoparticles can be used for the removal of various noxious compounds, including toxins, pathogens, and proteins, from whole blood in an extracorporeal circuit similar to dialysis.
In contrast to dialysis, which works on the principle of size-related diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane, purification with nanoparticles allows specific targeting of substances. Additionally, larger compounds that are not normally dialyzable can be removed.
The purification process is based on functionalized iron oxide or carbon coated metal nanoparticles with ferromagnetic or superparamagnetic properties. Binding agents such as proteins, antibodies, antibiotics, or synthetic ligands are covalently linked to the particle surface. These binding agents are able to interact with target species forming an agglomerate.
Applying an external magnetic field gradient allows exerting a force on the nanoparticles. Hence the particles can be separated from the bulk fluid, thereby cleaning it from the contaminants.
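The pulling force in this separation step can be estimated with a common first-order model, F ≈ V · (χ/μ0) · B · dB/dx, for a small superparamagnetic bead below saturation. The bead size, susceptibility, and field values below are illustrative assumptions, not parameters from the article:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetic_force(radius_m: float, chi: float, b_t: float,
                   grad_t_per_m: float) -> float:
    """Approximate force (N) on a spherical superparamagnetic bead:
    F = V * (chi / mu0) * B * dB/dx."""
    volume = (4 / 3) * math.pi * radius_m ** 3
    return volume * (chi / MU0) * b_t * grad_t_per_m

# e.g. a 50 nm iron-oxide bead (chi ~ 1) in a 0.5 T field with a 100 T/m gradient
f = magnetic_force(50e-9, 1.0, 0.5, 100.0)
```

Note the force scales with bead volume (r³), one reason functionalized beads are engineered large enough to be pulled out of the flow yet small enough to keep a large binding surface.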
The small size (< 100 nm) and large surface area of functionalized nanomagnets leads to advantageous properties compared to hemoperfusion, which is a clinically used technique for the purification of blood and is based on surface adsorption.
These advantages are high loading and accessibility of the binding agents, high selectivity towards the target compound, fast diffusion, small hydrodynamic resistance, and low dosage.
This approach offers new therapeutic possibilities for the treatment of systemic infections such as sepsis by directly removing the pathogen. It can also be used to selectively remove cytokines or endotoxins, or for the dialysis of compounds that are not accessible by traditional dialysis methods. However, the technology is still in a preclinical phase, and the first clinical trials are not expected before 2017.
Tissue engineering:
Nanotechnology may be used as part of tissue engineering to help reproduce, repair, or reshape damaged tissue using suitable nanomaterial-based scaffolds and growth factors. Tissue engineering, if successful, may replace conventional treatments like organ transplants or artificial implants.
Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles in the polymer matrix at low concentrations (~0.2 weight %) leads to significant improvements in the compressive and flexural mechanical properties of polymeric nanocomposites.
Potentially, these nanocomposites may be used as novel, mechanically strong, lightweight composites for bone implants.
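In absolute terms, a loading of ~0.2 weight % is tiny; a quick calculation (the batch size is a made-up example, not from the article):

```python
def filler_mass_g(total_composite_g: float, weight_percent: float) -> float:
    """Mass of nanoparticle filler (g) at a given weight-percent loading."""
    return total_composite_g * weight_percent / 100.0

print(filler_mass_g(500.0, 0.2))  # about 1 g of filler in a 500 g batch
```

That such a small addition measurably stiffens the polymer is a consequence of the nanoparticles' enormous specific surface area, which maximizes filler-matrix interaction.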
For example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold-coated nanoshells activated by an infrared laser. This could be used to weld arteries during surgery. Another example is nanonephrology, the use of nanomedicine on the kidney.
Medical devices:
Neuro-electronic interfacing is a visionary goal dealing with the construction of nanodevices that will permit computers to be joined and linked to the nervous system. This idea requires the building of a molecular structure that will permit control and detection of nerve impulses by an external computer.
A refuelable strategy implies energy is refilled continuously or periodically with external sonic, chemical, tethered, magnetic, or biological electrical sources, while a nonrefuelable strategy implies that all power is drawn from internal energy storage which would stop when all energy is drained.
A nanoscale enzymatic biofuel cell for self-powered nanodevices has been developed that uses glucose from biofluids, including human blood and watermelons. One limitation of this innovation is the possibility of electrical interference, leakage, or overheating from power consumption.
The wiring of the structure is extremely difficult, because the components must be positioned precisely within the nervous system. The structures that provide the interface must also be compatible with the body's immune system.
Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, machines which could re-order matter at a molecular or atomic scale.
Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry.
The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots, are far beyond current capabilities. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging.
K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular machines, in his 1986 book Engines of Creation, with the first technical discussion of medical nanorobots by Robert Freitas appearing in 1999.
Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030.
According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines (see nanotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.
See Also:
- Colloidal gold
- Gold nanobeacon
- Heart nanotechnology
- IEEE P1906.1 – Recommended Practice for Nanoscale and Molecular Communication Framework
- Impalefection
- Monitoring (medicine)
- Nanobiotechnology
- Nanoparticle–biomolecule conjugate
- Nanosensor
- Nanozymes
- Nanotechnology in fiction
- Photodynamic therapy
- Top-down and bottom-up design
Medical Research and Funding
YouTube Video: National Institute of Health Mission - It's About Life
Pictured: R&D in the FY 2014 Omnibus: The National Institutes of Health
Biomedical research (or experimental medicine) encompasses a wide array of research from "basic research" (also called bench science or bench research), involving the elucidation of more fundamental scientific principles, to clinical research, which is distinguished by the involvement of patients.
Within this spectrum is applied research, or translational research conducted to aid and support the development of knowledge in the field of medicine, and pre-clinical research, for example involving animals.
Both clinical and pre-clinical research phases exist in the pharmaceutical industry's drug development pipelines, where the clinical phase is denoted by the term clinical trial.
However, only part of the clinical or pre-clinical research is oriented towards a specific pharmaceutical purpose. The need for fundamental and mechanistic understanding, diagnostics, medical devices and non-pharmaceutical therapies means that pharmaceutical research is only a small part of medical research.
The increased longevity of humans over the past century can be significantly attributed to advances resulting from medical research. Among the major benefits of medical research have been the following:
- vaccines for measles and polio,
- insulin treatment for diabetes,
- classes of antibiotics for treating a host of maladies,
- medication for high blood pressure,
- improved treatments for AIDS,
- statins and other treatments for atherosclerosis,
- new surgical techniques such as microsurgery,
- and increasingly successful treatments for cancer.
New, beneficial tests and treatments are expected as a result of the Human Genome Project. Many challenges remain, however, including the appearance of antibiotic resistance and the obesity epidemic.
Most of the research in the field is pursued by biomedical scientists, but significant contributions are made by other biologists, as well as chemists and physicists. Medical research on humans must strictly follow the medical ethics sanctioned in the Declaration of Helsinki and elsewhere. In all cases, research ethics must be respected.
Phases of Medical Research:
Basic Medical Research:
Areas tackled in the most fundamental parts of medical research include cellular and molecular biology, medical genetics, immunology, neuroscience, and psychology.
Researchers, mainly in universities or government-funded research institutes, aim to establish an understanding of the cellular, molecular and physiological mechanisms underpinning human health and disease.
Since many organisms share a common evolutionary history with humans and hence common features and systems, the basic end of medical research these days shades into basic biology.
Preclinical research:
Pre-clinical research covers research that prepares the ground for clinical research with patients. Typically the work requires no ethical approval (though work with animals does), is supervised by scientists rather than medical doctors, and is carried out in a university or company rather than a hospital.
Clinical research:
Clinical research is carried out with patients. It is generally supervised by doctors in a medical setting such as a hospital and requires ethical approval.
Click on any of the following for more about Medical Research:
Francis Collins, Lead Scientist of the Genome Project
YouTube Video: Francis Collins - The Language of God: A Scientist Presents Evidence of Belief
Francis Sellers Collins (born April 14, 1950) is an American physician-geneticist noted for his discoveries of disease genes and his leadership of the Human Genome Project. He is director of the National Institutes of Health (NIH) in Bethesda, Maryland, USA.
Before being appointed director of the NIH, Collins led the Human Genome Project and other genomics research initiatives as director of the National Human Genome Research Institute (NHGRI), one of the 27 institutes and centers at NIH.
Before joining NHGRI, he earned a reputation as a gene hunter at the University of Michigan.
He has been elected to the Institute of Medicine and the National Academy of Sciences, and has received the Presidential Medal of Freedom and the National Medal of Science.
Collins also has written a number of books on science, medicine, and religion, including the New York Times bestseller, The Language of God: A Scientist Presents Evidence for Belief.
After leaving the helm of NHGRI and before becoming director of the NIH, he founded and served as president of The BioLogos Foundation, which promotes discourse on the relationship between science and religion and advocates the perspective that belief in Christianity can be reconciled with acceptance of evolution and science, especially through the advancement of evolutionary creation.
In 2009, Pope Benedict XVI appointed Collins to the Pontifical Academy of Sciences.
Click on any of the following blue hyperlinks for more about Francis Collins:
Vaccines, including a List of Vaccine-preventable Diseases
YouTube Video: How do vaccines prevent disease? by Harvard University
Pictured: Avian flu vaccine development by reverse genetics techniques.
Click here for a List of Vaccine-preventable Diseases.
A vaccine is a biological preparation that provides active acquired immunity to a particular disease. A vaccine typically contains an agent that resembles a disease-causing microorganism and is often made from weakened or killed forms of the microbe, its toxins, or one of its surface proteins.
The agent stimulates the body's immune system to recognize it as a threat, destroy it, and recognize and destroy any of these microorganisms that it later encounters. Vaccines can be prophylactic (e.g., to prevent or ameliorate the effects of a future infection by a natural or "wild" pathogen) or therapeutic (e.g., vaccines against cancer are being investigated).
The effectiveness of vaccination has been widely studied and verified; for example, the influenza vaccine, the HPV vaccine, and the chicken pox vaccine. The World Health Organization (WHO) reports that licensed vaccines are currently available for twenty-five different preventable infections.
The administration of vaccines is called vaccination. Vaccination is the most effective method of preventing infectious diseases; widespread immunity due to vaccination is largely responsible for the worldwide eradication of smallpox and the restriction of diseases such as polio, measles, and tetanus from much of the world.
The terms vaccine and vaccination are derived from Variolae vaccinae (smallpox of the cow), the term devised by Edward Jenner to denote cowpox. He used it in 1798 in the long title of his Inquiry into the Variolae vaccinae known as the Cow Pox, in which he described the protective effect of cowpox against smallpox.
In 1881, to honor Jenner, Louis Pasteur proposed that the terms should be extended to cover the new protective inoculations then being developed.
How Vaccines Work:
Generically, the process of artificial induction of immunity, in an effort to protect against infectious disease, works by 'priming' the immune system with an 'immunogen'. Stimulating immune responses with an infectious agent is known as immunization. Vaccination includes various ways of administering immunogens.
Some vaccines are administered after the patient already has contracted a disease. Vaccines given after exposure to smallpox, within the first three days, are reported to attenuate the disease considerably, and vaccination up to a week after exposure probably offers some protection from disease or may reduce the severity of disease.
The first rabies immunization was given by Louis Pasteur to a child after he was bitten by a rabid dog. Since then, it has been found that, in people with healthy immune systems, four doses of rabies vaccine over 14 days, wound care, and treatment of the bite with rabies immune globulin, commenced as soon as possible after exposure, is effective in preventing rabies in humans.
Other examples include experimental AIDS, cancer and Alzheimer's disease vaccines. Such immunizations aim to trigger an immune response more rapidly and with less harm than natural infection.
Most vaccines are given by hypodermic injection as they are not absorbed reliably through the intestines. Live attenuated polio, some typhoid, and some cholera vaccines are given orally to produce immunity in the bowel. While vaccination provides a lasting effect, it usually takes several weeks to develop, while passive immunity (the transfer of antibodies) has immediate effect.
Vaccination versus inoculation:
The term inoculation is often used interchangeably with vaccination. However, some argue that the terms are not synonymous. Dr Byron Plant explains: "Vaccination is the more commonly used term, which actually consists of a 'safe' injection of a sample taken from a cow suffering from cowpox... Inoculation, a practice probably as old as the disease itself, is the injection of the variola virus taken from a pustule or scab of a smallpox sufferer into the superficial layers of the skin, commonly on the upper arm of the subject.
Often inoculation was done 'arm to arm' or less effectively 'scab to arm'..." Inoculation oftentimes caused the patient to become infected with smallpox, and in some cases the infection turned into a severe case.
Vaccinations began in the 18th century with the work of Edward Jenner and the smallpox vaccine.
Effectiveness:
Vaccines have historically been the most effective means to fight and eradicate infectious diseases. Limitations to their effectiveness, nevertheless, exist. Sometimes, protection fails because the host's immune system simply does not respond adequately or at all. Lack of response commonly results from clinical factors such as diabetes, steroid use, HIV infection or age. It also might fail for genetic reasons if the host's immune system includes no strains of B cells that can generate antibodies suited to reacting effectively and binding to the antigens associated with the pathogen.
Even if the host does develop antibodies, protection might not be adequate; immunity might develop too slowly to be effective in time, the antibodies might not disable the pathogen completely, or there might be multiple strains of the pathogen, not all of which are equally susceptible to the immune reaction.
However, even partial, late, or weak immunity, such as one resulting from cross-immunity to a strain other than the target strain, may mitigate an infection, resulting in a lower mortality rate, lower morbidity, and faster recovery.
Adjuvants commonly are used to boost immune response, particularly for older people (50–75 years and up), whose immune response to a simple vaccine may have weakened.
The efficacy or performance of a vaccine depends on a number of factors.
If a vaccinated individual does develop the disease vaccinated against (breakthrough infection), the disease is likely to be less virulent than in unvaccinated victims.
Several considerations are important to the effectiveness of a vaccination program.
In 1958, there were 763,094 cases of measles in the United States; 552 deaths resulted. After the introduction of new vaccines, the number of cases dropped to fewer than 150 per year (median of 56). In early 2008, there were 64 suspected cases of measles. Fifty-four of those infections were associated with importation from another country, although only 13% were actually acquired outside the United States; 63 of the 64 individuals either had never been vaccinated against measles or were uncertain whether they had been vaccinated.
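Expressed as a percentage, the drop in annual U.S. measles cases quoted above:

```python
# Figures from the text: 763,094 cases in 1958 vs. fewer than 150 per year
# after the introduction of new vaccines.
cases_1958 = 763_094
cases_after_vaccine = 150

reduction = 1 - cases_after_vaccine / cases_1958
print(f"{reduction:.2%}")  # -> 99.98%
```

That is a reduction of more than 99.9% in reported cases.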
Vaccines have contributed to the eradication of smallpox, one of the most contagious and deadly diseases in humans. Other diseases such as rubella, polio, measles, mumps, chickenpox, and typhoid are nowhere near as common as they were a hundred years ago.
As long as the vast majority of people are vaccinated, it is much more difficult for an outbreak of disease to occur, let alone spread. This effect is called herd immunity. Polio, which is transmitted only between humans, is targeted by an extensive eradication campaign that has seen endemic polio restricted to only parts of three countries (Afghanistan, Nigeria, and Pakistan).
However, the difficulty of reaching all children as well as cultural misunderstandings have caused the anticipated eradication date to be missed several times.
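The herd-immunity effect can be quantified with the standard epidemiological threshold 1 − 1/R0, where R0 is the basic reproduction number (this formula is textbook epidemiology, not stated in the article):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to block sustained spread,
    under the standard homogeneous-mixing approximation: 1 - 1/R0."""
    return 1 - 1 / r0

# Measles is highly contagious (R0 is often quoted in the range 12-18),
# so its threshold is far higher than for a mildly contagious disease.
print(herd_immunity_threshold(15))  # about 0.93 for a measles-like R0
print(herd_immunity_threshold(2))   # -> 0.5
```

This is why diseases like measles demand very high vaccination coverage before outbreaks stop propagating, while less contagious diseases are protected at much lower coverage.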
Vaccines also help prevent the development of antibiotic resistance. For example, by greatly reducing the incidence of pneumonia caused by Streptococcus pneumoniae, vaccine programs have greatly reduced the prevalence of infections resistant to penicillin or other first-line antibiotics.
Adverse Effects:
Vaccination given during childhood is generally safe. Adverse effects, if any, are generally mild. The rate of side effects depends on the vaccine in question. Some common side effects include fever, pain around the injection site, and muscle aches. Additionally, some individuals may be allergic to ingredients in the vaccine. The MMR vaccine is rarely associated with febrile seizures.
Severe side effects are extremely rare. Varicella vaccine is rarely associated with complications in immunodeficient individuals and rotavirus vaccines are moderately associated with intussusception.
Types of Vaccines:
Vaccines are dead or inactivated organisms or purified products derived from them.
There are several types of vaccines in use. These represent different strategies used to try to reduce the risk of illness while retaining the ability to induce a beneficial immune response.
Inactivated:
Main article: Inactivated vaccine
Some vaccines contain inactivated, but previously virulent, micro-organisms that have been destroyed with chemicals, heat, or radiation. Examples include the polio vaccine, hepatitis A vaccine, rabies vaccine and some influenza vaccines.
Attenuated:
Main article: Attenuated vaccine
Some vaccines contain live, attenuated microorganisms. Many of these are active viruses that have been cultivated under conditions that disable their virulent properties, or that use closely related but less dangerous organisms to produce a broad immune response. Although most attenuated vaccines are viral, some are bacterial in nature.
Examples include the viral diseases yellow fever, measles, mumps, and rubella, and the bacterial disease typhoid.
The live Mycobacterium tuberculosis vaccine developed by Calmette and Guérin is not made of a contagious strain; it contains an attenuated strain, called "BCG", used to elicit an immune response.
The live attenuated vaccine containing strain Yersinia pestis EV is used for plague immunization. Attenuated vaccines have some advantages and disadvantages. They typically provoke more durable immunological responses and are the preferred type for healthy adults. But they may not be safe for use in immunocompromised individuals, and may rarely mutate to a virulent form and cause disease.
Toxoid:
Toxoid vaccines are made from inactivated toxic compounds, for cases where it is the toxin rather than the micro-organism that causes illness. Examples of toxoid-based vaccines include tetanus and diphtheria. Toxoid vaccines are known for their efficacy. Not all toxoids are for micro-organisms; for example, Crotalus atrox toxoid is used to vaccinate dogs against rattlesnake bites.
Subunit:
Protein subunit – rather than introducing an inactivated or attenuated micro-organism to an immune system (which would constitute a "whole-agent" vaccine), a fragment of it can create an immune response.
Examples include the subunit vaccine against Hepatitis B virus that is composed of only the surface proteins of the virus (previously extracted from the blood serum of chronically infected patients, but now produced by recombination of the viral genes into yeast), the virus-like particle (VLP) vaccine against human papillomavirus (HPV) that is composed of the viral major capsid protein, and the hemagglutinin and neuraminidase subunits of the influenza virus. A subunit vaccine is being used for plague immunization.
Conjugate:
Conjugate – certain bacteria have polysaccharide outer coats that are poorly immunogenic. By linking these outer coats to proteins (e.g., toxins), the immune system can be led to recognize the polysaccharide as if it were a protein antigen. This approach is used in the Haemophilus influenzae type B vaccine.
Experimental:
A number of innovative vaccines are also in development and in use:
While most vaccines are created using inactivated or attenuated compounds from micro-organisms, synthetic vaccines are composed mainly or wholly of synthetic peptides, carbohydrates, or antigens.
Valence:
Vaccines may be monovalent (also called univalent) or multivalent (also called polyvalent). A monovalent vaccine is designed to immunize against a single antigen or single microorganism. A multivalent or polyvalent vaccine is designed to immunize against two or more strains of the same microorganism, or against two or more microorganisms. The valency of a multivalent vaccine may be denoted with a Greek or Latin prefix (e.g., tetravalent or quadrivalent). In certain cases, a monovalent vaccine may be preferable for rapidly developing a strong immune response.[
Heterotypic:
Also known as heterologous or "Jennerian" vaccines, these are vaccines that are pathogens of other animals that either do not cause disease or cause mild disease in the organism being treated. The classic example is Jenner's use of cowpox to protect against smallpox. A current example is the use of BCG vaccine made from Mycobacterium bovis to protect against human tuberculosis
Routes of administration:
See also: Vaccination schedule
A vaccine administration may be oral, by injection (intramuscular, intradermal, subcutaneous), by puncture, transdermal or intranasal. Several recent clinical trials have aimed to deliver the vaccines via mucosal surfaces to be up-taken by the common mucosal immunity system, thus avoiding the need for injections.
Click on any of the following blue hyperlinks for more about Vaccines:
A vaccine is a biological preparation that provides active acquired immunity to a particular disease. A vaccine typically contains an agent that resembles a disease-causing microorganism and is often made from weakened or killed forms of the microbe, its toxins, or one of its surface proteins.
The agent stimulates the body's immune system to recognize the agent as a threat, destroy it, and recognize and destroy any microorganisms associated with that agent that it later encounters. Vaccines can be prophylactic (e.g., to prevent or ameliorate the effects of a future infection by a natural or "wild" pathogen) or therapeutic (e.g., vaccines against cancer, which are being investigated).
The effectiveness of vaccination has been widely studied and verified; for example, the influenza vaccine, the HPV vaccine, and the chicken pox vaccine. The World Health Organization (WHO) reports that licensed vaccines are currently available for twenty-five different preventable infections.
The administration of vaccines is called vaccination. Vaccination is the most effective method of preventing infectious diseases; widespread immunity due to vaccination is largely responsible for the worldwide eradication of smallpox and the restriction of diseases such as polio, measles, and tetanus from much of the world.
The terms vaccine and vaccination are derived from Variolae vaccinae (smallpox of the cow), the term devised by Edward Jenner to denote cowpox. He used it in 1798 in the long title of his Inquiry into the Variolae vaccinae known as the Cow Pox, in which he described the protective effect of cowpox against smallpox.
In 1881, to honor Jenner, Louis Pasteur proposed that the terms should be extended to cover the new protective inoculations then being developed.
How Vaccines Work:
Generically, the process of artificial induction of immunity, in an effort to protect against infectious disease, works by 'priming' the immune system with an 'immunogen'. Stimulating immune responses with an infectious agent is known as immunization. Vaccination includes various ways of administering immunogens.
Some vaccines are administered after the patient already has contracted a disease. Vaccines given after exposure to smallpox, within the first three days, are reported to attenuate the disease considerably, and vaccination up to a week after exposure probably offers some protection from disease or may reduce the severity of disease.
The first rabies immunization was given by Louis Pasteur to a child after he was bitten by a rabid dog. Since then, it has been found that, in people with healthy immune systems, four doses of rabies vaccine over 14 days, wound care, and treatment of the bite with rabies immune globulin, commenced as soon as possible after exposure, are effective in preventing rabies in humans.
Other examples include experimental AIDS, cancer and Alzheimer's disease vaccines. Such immunizations aim to trigger an immune response more rapidly and with less harm than natural infection.
Most vaccines are given by hypodermic injection as they are not absorbed reliably through the intestines. Live attenuated polio, some typhoid, and some cholera vaccines are given orally to produce immunity in the bowel. While vaccination provides a lasting effect, it usually takes several weeks to develop, while passive immunity (the transfer of antibodies) has immediate effect.
Vaccination versus inoculation:
The term inoculation is often used interchangeably with vaccination. However, some argue that the terms are not synonymous. Dr Byron Plant explains: "Vaccination is the more commonly used term, which actually consists of a 'safe' injection of a sample taken from a cow suffering from cowpox... Inoculation, a practice probably as old as the disease itself, is the injection of the variola virus taken from a pustule or scab of a smallpox sufferer into the superficial layers of the skin, commonly on the upper arm of the subject.
Often inoculation was done 'arm to arm' or less effectively 'scab to arm'..." Inoculation oftentimes caused the patient to become infected with smallpox, and in some cases the infection turned into a severe case.
Vaccinations began in the 18th century with the work of Edward Jenner and the smallpox vaccine.
Effectiveness:
Vaccines have historically been the most effective means to fight and eradicate infectious diseases. Limitations to their effectiveness, nevertheless, exist. Sometimes, protection fails because the host's immune system simply does not respond adequately or at all. Lack of response commonly results from clinical factors such as diabetes, steroid use, HIV infection or age. It also might fail for genetic reasons if the host's immune system includes no strains of B cells that can generate antibodies suited to reacting effectively and binding to the antigens associated with the pathogen.
Even if the host does develop antibodies, protection might not be adequate; immunity might develop too slowly to be effective in time, the antibodies might not disable the pathogen completely, or there might be multiple strains of the pathogen, not all of which are equally susceptible to the immune reaction.
However, even a partial, late, or weak immunity, such as one resulting from cross-immunity to a strain other than the target strain, may mitigate an infection, resulting in a lower mortality rate, lower morbidity, and faster recovery.
Adjuvants commonly are used to boost immune response, particularly for older people (50–75 years and up), whose immune response to a simple vaccine may have weakened.
The efficacy or performance of the vaccine is dependent on a number of factors:
- the disease itself (for some diseases vaccination performs better than for others)
- the strain of vaccine (some vaccines are specific to, or at least most effective against, particular strains of the disease)
- whether the vaccination schedule has been properly observed.
- idiosyncratic response to vaccination; some individuals are "non-responders" to certain vaccines, meaning that they do not generate antibodies even after being vaccinated correctly.
- assorted factors such as ethnicity, age, or genetic predisposition.
If a vaccinated individual does develop the disease vaccinated against (breakthrough infection), the disease is likely to be less virulent than in unvaccinated victims.
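The efficacy factors above are usually summarized in a single number estimated by comparing attack rates in unvaccinated and vaccinated groups, using the standard formula VE = (ARU − ARV) / ARU. A minimal sketch; the attack-rate figures below are made up for illustration and do not come from the text:

```python
def vaccine_efficacy(attack_rate_unvaccinated: float,
                     attack_rate_vaccinated: float) -> float:
    """Standard vaccine-efficacy estimate:
    VE = (ARU - ARV) / ARU, expressed as a percentage."""
    aru, arv = attack_rate_unvaccinated, attack_rate_vaccinated
    return (aru - arv) / aru * 100

# Hypothetical attack rates: 4% among unvaccinated, 0.5% among vaccinated.
print(round(vaccine_efficacy(0.04, 0.005), 1))  # 87.5
```

A "non-responder" in the sense described above would show up in such data as a vaccinated individual who contributes to ARV despite correct vaccination.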
The following are important considerations in the effectiveness of a vaccination program:
- careful modeling to anticipate the impact that an immunization campaign will have on the epidemiology of the disease in the medium to long term
- ongoing surveillance for the relevant disease following introduction of a new vaccine
- maintenance of high immunization rates, even when a disease has become rare.
In 1958, there were 763,094 cases of measles in the United States; 552 deaths resulted. After the introduction of new vaccines, the number of cases dropped to fewer than 150 per year (median of 56). In early 2008, there were 64 suspected cases of measles. Fifty-four of those infections were associated with importation from another country, although only 13% were actually acquired outside the United States; 63 of the 64 individuals either had never been vaccinated against measles or were uncertain whether they had been vaccinated.
Vaccines have contributed to the eradication of smallpox, one of the most contagious and deadly diseases in humans. Other diseases such as rubella, polio, measles, mumps, chickenpox, and typhoid are nowhere near as common as they were a hundred years ago.
As long as the vast majority of people are vaccinated, it is much more difficult for an outbreak of disease to occur, let alone spread. This effect is called herd immunity. Polio, which is transmitted only between humans, is targeted by an extensive eradication campaign that has seen endemic polio restricted to only parts of three countries (Afghanistan, Nigeria, and Pakistan).
However, the difficulty of reaching all children as well as cultural misunderstandings have caused the anticipated eradication date to be missed several times.
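The herd-immunity effect described above is often quantified with the basic reproduction number R0: once the immune fraction of the population exceeds 1 − 1/R0, each case infects fewer than one other person on average and outbreaks tend to die out. A short sketch; the R0 values below are commonly cited rough figures, used here only for illustration:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Classic threshold: fraction of the population that must be
    immune so each case infects, on average, fewer than one other
    person: 1 - 1/R0."""
    return 1 - 1 / r0

# R0 values are illustrative; real estimates vary widely by setting.
for disease, r0 in [("measles", 15), ("polio", 6)]:
    print(disease, round(herd_immunity_threshold(r0), 2))
```

This is why highly contagious diseases such as measles require very high vaccination coverage before herd immunity takes hold.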
Vaccines also help prevent the development of antibiotic resistance. For example, by greatly reducing the incidence of pneumonia caused by Streptococcus pneumoniae, vaccine programs have greatly reduced the prevalence of infections resistant to penicillin or other first-line antibiotics.
Adverse Effects:
Vaccination given during childhood is generally safe. Adverse effects, if any, are generally mild. The rate of side effects depends on the vaccine in question. Some common side effects include fever, pain around the injection site, and muscle aches. Additionally, some individuals may be allergic to ingredients in the vaccine. MMR vaccine is rarely associated with febrile seizures.
Severe side effects are extremely rare. Varicella vaccine is rarely associated with complications in immunodeficient individuals and rotavirus vaccines are moderately associated with intussusception.
Types of Vaccines:
Vaccines are dead or inactivated organisms or purified products derived from them.
There are several types of vaccines in use. These represent different strategies used to try to reduce the risk of illness while retaining the ability to induce a beneficial immune response.
Inactivated:
Main article: Inactivated vaccine
Some vaccines contain inactivated, but previously virulent, micro-organisms that have been destroyed with chemicals, heat, or radiation. Examples include the polio vaccine, hepatitis A vaccine, rabies vaccine and some influenza vaccines.
Attenuated:
Main article: Attenuated vaccine
Some vaccines contain live, attenuated microorganisms. Many of these are active viruses that have been cultivated under conditions that disable their virulent properties, or that use closely related but less dangerous organisms to produce a broad immune response. Although most attenuated vaccines are viral, some are bacterial in nature.
Examples include the viral diseases yellow fever, measles, mumps, and rubella, and the bacterial disease typhoid.
The live tuberculosis vaccine developed by Calmette and Guérin is not made of a contagious strain of Mycobacterium tuberculosis but contains an attenuated strain called "BCG", used to elicit an immune response to the vaccine.
The live attenuated vaccine containing strain Yersinia pestis EV is used for plague immunization. Attenuated vaccines have some advantages and disadvantages. They typically provoke more durable immunological responses and are the preferred type for healthy adults. But they may not be safe for use in immunocompromised individuals, and may rarely mutate to a virulent form and cause disease.
Toxoid:
Toxoid vaccines are made from inactivated toxic compounds that cause illness rather than the micro-organism. Examples of toxoid-based vaccines include tetanus and diphtheria. Toxoid vaccines are known for their efficacy. Not all toxoids are for micro-organisms; for example, Crotalus atrox toxoid is used to vaccinate dogs against rattlesnake bites.
Subunit:
Protein subunit – rather than introducing an inactivated or attenuated micro-organism to an immune system (which would constitute a "whole-agent" vaccine), a fragment of it can create an immune response.
Examples include the subunit vaccine against Hepatitis B virus that is composed of only the surface proteins of the virus (previously extracted from the blood serum of chronically infected patients, but now produced by recombination of the viral genes into yeast), the virus-like particle (VLP) vaccine against human papillomavirus (HPV) that is composed of the viral major capsid protein, and the hemagglutinin and neuraminidase subunits of the influenza virus. A subunit vaccine is also being used for plague immunization.
Conjugate:
Conjugate – certain bacteria have polysaccharide outer coats that are poorly immunogenic. By linking these outer coats to proteins (e.g., toxins), the immune system can be led to recognize the polysaccharide as if it were a protein antigen. This approach is used in the Haemophilus influenzae type B vaccine.
Experimental:
A number of innovative vaccines are also in development and in use:
- Dendritic cell vaccines combine dendritic cells with antigens in order to present the antigens to the body's white blood cells, thus stimulating an immune reaction. These vaccines have shown some positive preliminary results for treating brain tumors and are also tested in malignant melanoma.
- Recombinant Vector – by combining the physiology of one micro-organism and the DNA of the other, immunity can be created against diseases that have complex infection processes
- DNA vaccination – an alternative, experimental approach to vaccination called DNA vaccination, created from an infectious agent's DNA, is under development. The proposed mechanism is the insertion (and expression, enhanced by the use of electroporation, triggering immune system recognition) of viral or bacterial DNA into human or animal cells. Some cells of the immune system that recognize the proteins expressed will mount an attack against these proteins and cells expressing them. Because these cells live for a very long time, if the pathogen that normally expresses these proteins is encountered at a later time, they will be attacked instantly by the immune system. One potential advantage of DNA vaccines is that they are very easy to produce and store. As of 2015, DNA vaccination is still experimental and is not approved for human use.
- T-cell receptor peptide vaccines are under development for several diseases using models of Valley Fever, stomatitis, and atopic dermatitis. These peptides have been shown to modulate cytokine production and improve cell-mediated immunity.
- Targeting of identified bacterial proteins that are involved in complement inhibition would neutralize the key bacterial virulence mechanism.
While most vaccines are created using inactivated or attenuated compounds from micro-organisms, synthetic vaccines are composed mainly or wholly of synthetic peptides, carbohydrates, or antigens.
Valence:
Vaccines may be monovalent (also called univalent) or multivalent (also called polyvalent). A monovalent vaccine is designed to immunize against a single antigen or single microorganism. A multivalent or polyvalent vaccine is designed to immunize against two or more strains of the same microorganism, or against two or more microorganisms. The valency of a multivalent vaccine may be denoted with a Greek or Latin prefix (e.g., tetravalent or quadrivalent). In certain cases, a monovalent vaccine may be preferable for rapidly developing a strong immune response.
Heterotypic:
Also known as heterologous or "Jennerian" vaccines, these are vaccines that are pathogens of other animals that either do not cause disease or cause mild disease in the organism being treated. The classic example is Jenner's use of cowpox to protect against smallpox. A current example is the use of BCG vaccine made from Mycobacterium bovis to protect against human tuberculosis.
Routes of administration:
See also: Vaccination schedule
A vaccine administration may be oral, by injection (intramuscular, intradermal, subcutaneous), by puncture, transdermal, or intranasal. Several recent clinical trials have aimed to deliver vaccines via mucosal surfaces so they are taken up by the common mucosal immune system, thus avoiding the need for injections.
Click on any of the following blue hyperlinks for more about Vaccines:
- Nomenclature
- Developing immunity
- Schedule
- History including Timeline
- Society and culture
- Opposition
- Economics of development
- Patents
- Production
- Excipients
- Role of preservatives
- Delivery systems including Plasmids
- Veterinary medicine
- DIVA vaccines
- First DIVA vaccines
- Use in practice
- Other DIVA vaccines (under development)
- Trends
- See also:
- Coalition for Epidemic Preparedness Innovations
- Flying syringe
- The Horse Named Jim
- Immunization registry
- Immunotherapy
- List of vaccine ingredients
- List of vaccine topics
- Non-specific effect of vaccines
- OPV AIDS hypothesis
- Reverse vaccinology
- TA-CD
- Virosome
- Vaccinov
- WHO Vaccine preventable diseases and immunization
- World Health Organization position papers on vaccines
- The History of Vaccines, from the College of Physicians of Philadelphia
Microsurgery including Robot-assisted Surgery
YouTube Video: Robotic Micro Surgery
Pictured: Robotic Micro Surgery
Microsurgery:
Microsurgery is a general term for surgery requiring an operating microscope. The most obvious developments have been procedures developed to allow anastomosis of successively smaller blood vessels and nerves (typically 1 mm in diameter) which have allowed transfer of tissue from one part of the body to another and re-attachment of severed parts.
Microsurgical techniques are utilized by several specialties today, such as:
- general surgery,
- ophthalmology,
- orthopedic surgery,
- gynecological surgery,
- otolaryngology,
- neurosurgery,
- oral and maxillofacial surgery,
- plastic surgery,
- podiatric surgery
- and pediatric surgery.
Free Tissue Transfer:
Main article: free flap
Free tissue transfer is a surgical reconstructive procedure using microsurgery. A region of "donor" tissue is selected that can be isolated on a feeding artery and vein; this tissue is usually a composite of several tissue types (e.g., skin, muscle, fat, bone).
Common donor regions include the rectus abdominis muscle, latissimus dorsi muscle, fibula, radial forearm bone and skin, and lateral arm skin. The composite tissue is transferred (moved as a free flap of tissue) to the region on the patient requiring reconstruction (e.g., mandible after oral cancer resection, breast after cancer resection, traumatic tissue loss, congenital tissue absence).
The vessels that supply the free flap are anastomosed with microsurgery to matching vessels (artery and vein) in the reconstructive site. The procedure was first done in the early 1970s and has become a popular "one-stage" (single operation) procedure for many surgical reconstructive applications.
Replantation:
Replantation is the reattachment of a completely detached body part.
Fingers and thumbs are the most common but the ear, scalp, nose, face, arm and penis have all been replanted.
Generally replantation involves restoring blood flow through arteries and veins, restoring the bony skeleton and connecting tendons and nerves as required.
Robert Malt and Charles McKhann reported the first replantations of two human upper extremities by microvascular means in 1964; the first arm was replanted in a child after a train injury in Boston in 1962.
Initially, when the techniques were developed to make replantation possible, success was defined in terms of a survival of the amputated part alone. However, as more experience was gained in this field, surgeons specializing in replantation began to understand that survival of the amputated piece was not enough to ensure success of the replant.
Thus, the functional demands of the amputated part became paramount in guiding which amputated pieces should and should not be replanted. Additional concerns about the patient's ability to tolerate the long rehabilitation process that is necessary after replantation, on both physical and psychological levels, also became important.
So, when fingers are amputated, for instance, a replantation surgeon must seriously consider the contribution of the finger to the overall function of the hand. In this way, every attempt will be made to salvage an amputated thumb, since a great deal of hand function is dependent on the thumb, while an index finger or small finger may not be replanted, depending on the individual needs of the patient and the ability of the patient to tolerate a long surgery and a long course of rehabilitation.
However, if an amputated specimen is not able to be replanted to its original location entirely, this does not mean that the specimen is unreplantable. In fact, replantation surgeons have learned that only a piece or a portion may be necessary to obtain a functional result, or especially in the case of multiple amputated fingers, a finger or fingers may be transposed to a more useful location to obtain a more functional result. This concept is called "spare parts" surgery.
Transplantation:
Microsurgical techniques have played a crucial role in the development of transplantation immunology research because they allowed the use of rodent models, which are more appropriate for transplantation research (there are more reagents, monoclonal antibodies, knockout animals, and other immunological tools for mice and rats than for other species).
Before microsurgery was introduced, transplant immunology was studied in rodents using the skin-transplantation model, which is limited by the fact that the graft is not vascularized. Thus, microsurgery represents the link between surgery and transplant immunology research.
The first microsurgical experiments (porto-caval anastomosis in the rat) were performed by Dr. Sun Lee, a pioneer of microsurgery, at the University of Pittsburgh in 1958. Soon afterward, many models of organ transplantation in rats and mice were established.
Today, virtually every rat or mouse organ can be transplanted with a relatively high success rate. Microsurgery was also important in developing new techniques of transplantation that would later be performed in humans. In addition, it allows reconstruction of small arteries in clinical organ transplantation (e.g., accessory arteries in cadaver liver transplantation, polar arteries in renal transplantation, and in living-donor liver transplantation).
Treatment of Infertility:
Microsurgery has been used to treat several pathologic conditions leading to infertility, such as tubal obstructions, vas deferens obstructions, and varicocele, which is one of the most frequent causes of male infertility.
Microsurgical drainage, by placing microvascular bypasses between the spermatic and inferior epigastric veins as proposed by Flati et al., has been successfully performed in treating male infertility due to varicocele. Microsurgical treatment has also been shown to significantly improve the fertility rate in patients with recurrent varicocele who had previously undergone non-microsurgical treatments.
Click on any of the following blue hyperlinks for more about Microsurgery:
___________________________________________________________________________
Robot-assisted Surgery:
Robotic surgery, computer-assisted surgery, and robotically-assisted surgery are terms for technological developments that use robotic systems to aid in surgical procedures. Robotically-assisted surgery was developed to overcome the limitations of pre-existing minimally-invasive surgical procedures and to enhance the capabilities of surgeons performing open surgery.
In the case of robotically-assisted minimally-invasive surgery, instead of directly moving the instruments, the surgeon uses one of two methods to control the instruments; either a direct telemanipulator or through computer control.
A telemanipulator is a remote manipulator that allows the surgeon to perform the normal movements associated with the surgery while the robotic arms carry out those movements using end-effectors and manipulators to perform the actual surgery on the patient.
In computer-controlled systems the surgeon uses a computer to control the robotic arms and its end-effectors, though these systems can also still use telemanipulators for their input. One advantage of using the computerised method is that the surgeon does not have to be present, but can be anywhere in the world, leading to the possibility for remote surgery.
In the case of enhanced open surgery, autonomous instruments (in familiar configurations) replace traditional steel tools, performing certain actions (such as rib spreading) with much smoother, feedback-controlled motions than could be achieved by a human hand.
The main object of such smart instruments is to reduce or eliminate the tissue trauma traditionally associated with open surgery without requiring more than a few minutes' training on the part of surgeons. This approach seeks to improve open surgeries, particularly cardio-thoracic, that have so far not benefited from minimally-invasive techniques.
Robotic surgery has been criticized for its expense, by one estimate costing $1,500 to $2,000 more per patient.
Comparison to Traditional Methods:
Major advances aided by surgical robots have been remote surgery, minimally invasive surgery, and unmanned surgery. Robotic assistance allows greater precision, miniaturization, and smaller incisions, with decreased blood loss, less pain, and quicker healing time. Articulation beyond normal manipulation and three-dimensional magnification also improve ergonomics.
Due to these techniques there is a reduced duration of hospital stays, blood loss, transfusions, and use of pain medication. The existing open surgery technique has many flaws like limited access to surgical area, long recovery time, long hours of operation, blood loss, surgical scars and marks.
The robot itself typically costs $1,390,000, and disposable supplies add about $1,500 per procedure, making the overall cost of each robotic procedure higher. Additional surgical training is needed to operate the system.
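These figures can be put on a per-procedure basis by amortizing the robot's capital cost over its service life. A rough sketch; only the $1,390,000 purchase price and $1,500 disposable cost come from the text, while the case volume, service life, and maintenance figures below are hypothetical:

```python
def cost_per_procedure(capital: float, disposables_per_case: float,
                       cases_per_year: int, service_years: int,
                       annual_maintenance: float = 0.0) -> float:
    """Amortize the robot's capital and maintenance costs over its
    service life, then add the per-case disposable cost."""
    total_cases = cases_per_year * service_years
    fixed_costs = capital + annual_maintenance * service_years
    return fixed_costs / total_cases + disposables_per_case

# $1,390,000 and $1,500 are from the text; 300 cases/year, a 7-year
# life, and $100,000/year maintenance are assumed for illustration.
print(round(cost_per_procedure(1_390_000, 1_500, 300, 7, 100_000), 2))
# 2495.24
```

Under these assumptions each case carries roughly $1,000 of fixed cost on top of disposables, which illustrates why per-procedure economics depend heavily on case volume.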
Numerous feasibility studies have been done to determine whether the purchase of such systems is worthwhile. As it stands, opinions differ dramatically. Surgeons report that, although the manufacturers of such systems provide training on this new technology, the learning phase is intensive and surgeons must operate on twelve to eighteen patients before they adapt.
During the training phase, minimally invasive operations can take up to twice as long as traditional surgery, leading to operating-room tie-ups and surgical staff keeping patients under anesthesia for longer periods. Patient surveys indicate they chose the procedure based on expectations of decreased morbidity, improved outcomes, reduced blood loss, and less pain. Higher expectations may explain higher rates of dissatisfaction and regret.
Compared with other minimally invasive surgery approaches, robot-assisted surgery gives the surgeon better control over the surgical instruments and a better view of the surgical site. In addition, surgeons no longer have to stand throughout the surgery and do not tire as quickly. Naturally occurring hand tremors are filtered out by the robot's computer software.
Finally, the surgical robot can continuously be used by rotating surgery teams.
Critics of the system, including the American Congress of Obstetricians and Gynecologists, say there is a steep learning curve for surgeons who adopt use of the system and that there's a lack of studies that indicate long-term results are superior to results following traditional laparoscopic surgery. Articles in the newly created Journal of Robotic Surgery tend to report on one surgeon's experience.
A Medicare study found that some procedures that have traditionally been performed with large incisions can be converted to "minimally invasive" endoscopic procedures with the use of the Da Vinci Surgical System, shortening length-of-stay in the hospital and reducing recovery times. But because of the hefty cost of the robotic system it is not clear that it is cost-effective for hospitals and physicians despite any benefits to patients since there is no additional reimbursement paid by the government or insurance companies when the system is used.
Robot-assisted pancreatectomies have been found to be associated with "longer operating time, lower estimated blood loss, a higher spleen-preservation rate, and shorter hospital stay[s]" than laparoscopic pancreatectomies; there was "no significant difference in transfusion, conversion to open surgery, overall complications, severe complications, pancreatic fistula, severe pancreatic fistula, ICU stay, total cost, and 30-day mortality between the two groups."
For surgical removal of the uterus and cervix for early cervical cancer, robotic and laparoscopic surgery resulted in similar outcomes with respect to the cancer.
Click on any of the following blue hyperlinks for more about Robot-assisted Surgery:
Microsurgery is a general term for surgery requiring an operating microscope. The most obvious developments have been procedures developed to allow anastomosis of successively smaller blood vessels and nerves (typically 1 mm in diameter) which have allowed transfer of tissue from one part of the body to another and re-attachment of severed parts.
Microsurgical techniques are utilized by several specialties today, such as:
- general surgery,
- ophthalmology,
- orthopedic surgery,
- gynecological surgery,
- otolaryngology,
- neurosurgery,
- oral and maxillofacial surgery,
- plastic surgery,
- podiatric surgery
- and pediatric surgery.
Free Tissue Transfer:
Main article: free flap
Free tissue transfer is a surgical reconstructive procedure using microsurgery. A region of "donor" tissue is selected that can be isolated on a feeding artery and vein; this tissue is usually a composite of several tissue types (e.g., skin, muscle, fat, bone).
Common donor regions include the rectus abdominis muscle, latissimus dorsi muscle, fibula, radial forearm bone and skin, and lateral arm skin. The composite tissue is transferred (moved as a free flap of tissue) to the region on the patient requiring reconstruction (e.g., mandible after oral cancer resection, breast after cancer resection, traumatic tissue loss, congenital tissue absence).
The vessels that supply the free flap are anastomosed with microsurgery to matching vessels (artery and vein) in the reconstructive site. The procedure was first done in the early 1970s and has become a popular "one-stage" (single operation) procedure for many surgical reconstructive applications.
Replantation:
Replantation is the reattachment of a completely detached body part.
Fingers and thumbs are the most common but the ear, scalp, nose, face, arm and penis have all been replanted.
Generally replantation involves restoring blood flow through arteries and veins, restoring the bony skeleton and connecting tendons and nerves as required.
Robert Malt and Charles Mckhann reported the first replantation two human upper extremities by microvascular means in 1964 with the first arm replanted in a child after a train injury in 1962 in Boston.
Initially, when the techniques were developed to make replantation possible, success was defined in terms of a survival of the amputated part alone. However, as more experience was gained in this field, surgeons specializing in replantation began to understand that survival of the amputated piece was not enough to ensure success of the replant.
In this way, the functional demands of the amputated specimen became paramount in guiding which amputated pieces should and should not be replanted. Additional concerns about the patient's ability to tolerate, both physically and psychologically, the long rehabilitation process that is necessary after replantation also became important.
So, when fingers are amputated, for instance, a replantation surgeon must seriously consider the contribution of the finger to the overall function of the hand. In this way, every attempt will be made to salvage an amputated thumb, since a great deal of hand function is dependent on the thumb, while an index finger or small finger may not be replanted, depending on the individual needs of the patient and the ability of the patient to tolerate a long surgery and a long course of rehabilitation.
However, if an amputated specimen is not able to be replanted to its original location entirely, this does not mean that the specimen is unreplantable. In fact, replantation surgeons have learned that only a piece or a portion may be necessary to obtain a functional result, or especially in the case of multiple amputated fingers, a finger or fingers may be transposed to a more useful location to obtain a more functional result. This concept is called "spare parts" surgery.
Transplantation:
Microsurgical techniques have played a crucial role in the development of transplantation immunological research because they allowed the use of rodent models, which are more appropriate for transplantation research (there are more reagents, monoclonal antibodies, knockout animals, and other immunological tools for mice and rats than for other species).
Before it was introduced, transplant immunology was studied in rodents using the skin transplantation model, which is limited by the fact that it is not vascularized. Thus, microsurgery represents the link between surgery and transplant immunological research.
The first microsurgical experiments (porto-caval anastomosis in the rat) were performed by Dr. Sun Lee, a pioneer of microsurgery, at the University of Pittsburgh in 1958. Soon afterward, many models of organ transplantation in rats and mice were established.
Today, virtually every rat or mouse organ can be transplanted with a relatively high success rate. Microsurgery was also important in developing new transplantation techniques that would later be performed in humans. In addition, it allows reconstruction of small arteries in clinical organ transplantation (e.g., accessory arteries in cadaver liver transplantation, polar arteries in renal transplantation and in living liver donor transplantation).
Treatment of Infertility:
Microsurgery has been used to treat several pathologic conditions leading to infertility, such as tubal obstructions, vas deferens obstructions, and varicocele, which is one of the most frequent causes of male infertility.
Microsurgical drainage, performed by placing microvascular bypasses between the spermatic and inferior epigastric veins as proposed by Flati et al., has been used successfully to treat male infertility due to varicocele. Microsurgical treatment has also been shown to significantly improve fertility rates in patients with recurrent varicocele who had previously undergone non-microsurgical treatments.
Click on any of the following blue hyperlinks for more about Microsurgery:
___________________________________________________________________________
Robot-assisted Surgery:
Robotic surgery, computer-assisted surgery, and robotically-assisted surgery are terms for technological developments that use robotic systems to aid in surgical procedures. Robotically-assisted surgery was developed to overcome the limitations of pre-existing minimally-invasive surgical procedures and to enhance the capabilities of surgeons performing open surgery.
In the case of robotically-assisted minimally-invasive surgery, instead of directly moving the instruments, the surgeon uses one of two methods to control them: a direct telemanipulator or computer control.
A telemanipulator is a remote manipulator that allows the surgeon to perform the normal movements associated with the surgery while the robotic arms carry out those movements using end-effectors and manipulators to perform the actual surgery on the patient.
In computer-controlled systems the surgeon uses a computer to control the robotic arms and their end-effectors, though these systems can also still use telemanipulators for their input. One advantage of the computerised method is that the surgeon does not have to be present, but can be anywhere in the world, leading to the possibility of remote surgery.
In the case of enhanced open surgery, autonomous instruments (in familiar configurations) replace traditional steel tools, performing certain actions (such as rib spreading) with much smoother, feedback-controlled motions than could be achieved by a human hand.
The main object of such smart instruments is to reduce or eliminate the tissue trauma traditionally associated with open surgery without requiring more than a few minutes' training on the part of surgeons. This approach seeks to improve open surgeries, particularly cardio-thoracic, that have so far not benefited from minimally-invasive techniques.
Robotic surgery has been criticized for its expense, by one estimate costing $1,500 to $2,000 more per patient.
Comparison to Traditional Methods:
Major advances aided by surgical robots have been remote surgery, minimally invasive surgery, and unmanned surgery. Robotic use allows surgery to be done with greater precision, miniaturization, and smaller incisions, with decreased blood loss, less pain, and quicker healing time. Articulation beyond normal manipulation and three-dimensional magnification also result in improved ergonomics.
Due to these techniques there is a reduced duration of hospital stays, blood loss, transfusions, and use of pain medication. The existing open surgery technique has many flaws like limited access to surgical area, long recovery time, long hours of operation, blood loss, surgical scars and marks.
The robot itself normally costs $1,390,000, and its disposable supplies normally add $1,500 per procedure, making the overall cost of a procedure higher. Additional surgical training is needed to operate the system.
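As a rough illustration of how the purchase and per-case figures above combine, the sketch below amortizes the fixed cost over a hypothetical lifespan and caseload; the maintenance cost, lifespan, and annual case volume are assumptions for illustration only, not figures from the text.

```python
# Rough per-procedure cost sketch for a surgical robot.
# Purchase price and disposable cost come from the text above;
# maintenance, lifespan, and caseload are hypothetical assumptions.

def per_procedure_cost(purchase_price, disposables_per_case,
                       annual_maintenance, lifespan_years, cases_per_year):
    """Amortize fixed costs over the robot's lifetime caseload,
    then add the per-case disposable cost."""
    total_cases = lifespan_years * cases_per_year
    fixed = purchase_price + annual_maintenance * lifespan_years
    return fixed / total_cases + disposables_per_case

cost = per_procedure_cost(
    purchase_price=1_390_000,    # from the text
    disposables_per_case=1_500,  # from the text
    annual_maintenance=100_000,  # assumed
    lifespan_years=7,            # assumed
    cases_per_year=300,          # assumed
)
print(f"${cost:,.0f} per procedure")  # → $2,495 per procedure
```

Under these assumptions the amortized cost is roughly $2,500 per case, which is consistent with the "$1,500 to $2,000 more per patient" estimate cited above once the assumed caseload varies.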
Numerous feasibility studies have been done to determine whether the purchase of such systems is worthwhile. As it stands, opinions differ dramatically. Surgeons report that, although the manufacturers of such systems provide training on this new technology, the learning phase is intensive and surgeons must operate on twelve to eighteen patients before they adapt.
During the training phase, minimally invasive operations can take up to twice as long as traditional surgery, leading to operating room tie ups and surgical staffs keeping patients under anesthesia for longer periods. Patient surveys indicate they chose the procedure based on expectations of decreased morbidity, improved outcomes, reduced blood loss and less pain. Higher expectations may explain higher rates of dissatisfaction and regret.
Compared with other minimally invasive surgery approaches, robot-assisted surgery gives the surgeon better control over the surgical instruments and a better view of the surgical site. In addition, surgeons no longer have to stand throughout the surgery and do not tire as quickly. Naturally occurring hand tremors are filtered out by the robot's computer software.
Finally, the surgical robot can continuously be used by rotating surgery teams.
Critics of the system, including the American Congress of Obstetricians and Gynecologists, say there is a steep learning curve for surgeons who adopt use of the system and that there's a lack of studies that indicate long-term results are superior to results following traditional laparoscopic surgery. Articles in the newly created Journal of Robotic Surgery tend to report on one surgeon's experience.
A Medicare study found that some procedures that have traditionally been performed with large incisions can be converted to "minimally invasive" endoscopic procedures with the use of the Da Vinci Surgical System, shortening length-of-stay in the hospital and reducing recovery times. But because of the hefty cost of the robotic system it is not clear that it is cost-effective for hospitals and physicians despite any benefits to patients since there is no additional reimbursement paid by the government or insurance companies when the system is used.
Robot-assisted pancreatectomies have been found to be associated with "longer operating time, lower estimated blood loss, a higher spleen-preservation rate, and shorter hospital stay[s]" than laparoscopic pancreatectomies; there was "no significant difference in transfusion, conversion to open surgery, overall complications, severe complications, pancreatic fistula, severe pancreatic fistula, ICU stay, total cost, and 30-day mortality between the two groups."
For surgical removal of the uterus and cervix for early cervical cancer, robotic and laparoscopic surgery resulted in similar outcomes with respect to the cancer.
Click on any of the following blue hyperlinks for more about Robot-assisted Surgery:
- Uses:
- Miniature robotics
- History
- See also:
Reconstructive Surgery
YouTube Video: Plastic Surgery - Face Reconstruction - BBC Worldwide
Pictured: How a Severely Burned Former Firefighter Is Doing 1 Year After Face Transplant Surgery by ABC News
Reconstructive surgery is, in its broadest sense, the use of surgery to restore the form and function of the body; maxillo-facial surgeons, plastic surgeons and otolaryngologists do reconstructive surgery on faces after trauma and to reconstruct the head and neck after cancer.
Other branches of surgery (e.g., general surgery, gynecological surgery, pediatric surgery, cosmetic surgery, podiatric surgery) also perform some reconstructive procedures.
The common feature is that the operation attempts to restore the anatomy or the function of the body part to normal.
Reconstructive surgeons use the concept of a reconstructive ladder to manage increasingly complex wounds. This ranges from very simple techniques such as primary closure and dressings to more complex skin grafts, tissue expansion and free flaps.
Cosmetic surgery procedures include breast enhancement, reduction and lift, face lift, forehead lift, upper and lower eyelid surgery (blepharoplasty), laser skin resurfacing, chemical peel, nose reshaping (rhinoplasty), liposuction, nasal reconstruction using the paramedian flap, as well as tummy tuck (abdominoplasty).
Many of these procedures are constantly being improved.
In 2010 only 10 research papers were identified which looked at reconstructive surgery after massive weight loss.
Use of implants and biomaterials:
Biomaterials are, in their simplest form, plastic implants used to correct or replace damaged body parts. Biomaterials were not used for reconstructive purposes until after World War II, when new and improved technology, together with the tremendous need to correct damaged body parts, offered an alternative to transplantation.
The process involves scientific and medical research to ensure that the biomaterials are biocompatible and that they can assume the mechanical and functioning roles of the components they are replacing.
A successful implantation can best be achieved by a team that understands not only the anatomical, physiological, biochemical, and pathological aspects of the problem, but also comprehends bioengineering. Knowledge of cellular and tissue engineering is crucial for reconstructive procedures.
An overview on the standardization and control of biomedical devices has recently been gathered by D. G. Singleton. Papers have covered in depth the U.S. Food and Drug Administration (FDA) Premarket Approval Process (J. L. Ely) and FDA regulations governing Class III devices.
Two papers have described how the National Bureau of Standards, American Dental Association, National Institute of Dental Research, and private dental companies have collaborated in a number of important advances in dental materials, devices, and analytical systems.
Vascular Surgery
YouTube Video: What is Vascular Surgery?
Pictured: Illustration of the major arteries and veins comprising the human vascular system
Vascular surgery is a surgical subspecialty in which diseases of the vascular system, or arteries, veins and lymphatic circulation, are managed by medical therapy, minimally-invasive catheter procedures, and surgical reconstruction.
The specialty evolved from general and cardiac surgery as well as minimally invasive techniques pioneered by interventional radiology.
The vascular surgeon is trained in the diagnosis and management of diseases affecting all parts of the vascular system except those of the heart and brain. Cardiothoracic surgeons and interventional cardiologists manage diseases of the heart vessels.
Neurosurgeons and interventional neuroradiologists surgically manage diseases of the vessels in the brain (e.g., intracranial aneurysms).
Scope:
Vascular surgery encompasses surgery of the aorta, carotid arteries, and lower extremities, including the iliac, femoral, and tibial arteries. Vascular surgery also involves surgery of veins, for conditions such as May–Thurner syndrome and for varicose veins. In some regions, vascular surgery also includes dialysis access surgery and transplant surgery.
Click here for the main disease categories and procedures associated with the vascular system.
Click on the following for more about Vascular Surgery:
- History
- Investigations including Major trials
- Training
- See also
- Society for Vascular Surgery, the major American professional society
- Ischemia-reperfusion injuries of the appendicular musculoskeletal system
- International Society for Vascular Surgery
Laser Technology in Medicine
YouTube Video of Laser Treatment for Lung Cancer Victims
Pictured: Example Laser Application in Dentistry
Laser medicine is the use of lasers in medical diagnosis, treatments, or therapies, such as laser photodynamic therapy.
In principle, any type of laser can be used in medicine, but the following are especially common:
- CO2 lasers, used to cut, vaporize, ablate and photo-coagulate soft tissue.
- diode lasers
- dye lasers
- excimer lasers
- fiber lasers
- gas lasers
- free electron lasers
- semiconductor diode lasers
Applications in Medicine:
Examples of procedures, practices, devices, and specialties where lasers are utilized include:
- angioplasty
- cancer diagnosis
- cancer treatment
- cosmetic dermatology such as scar revision, skin resurfacing, laser hair removal, tattoo removal
- dermatology, to treat melanoma
- frenectomy
- lithotripsy,
- laser mammography
- medical imaging
- microscopy
- ophthalmology (includes Lasik and laser photocoagulation)
- optical coherence tomography
- optogenetics
- prostatectomy
- plastic surgery, in laser liposuction
- surgery, to ablate and cauterize tissue
See Also:
- Dental laser
- Endovenous laser therapy
- Laser-assisted new attachment procedure
- Laser scalpel
- Laser surgery
- Light therapy
- Low level laser therapy
- Photodynamic therapy
- Photomedicine
- Soft-tissue laser surgery
Medical Robots
YouTube Video: da Vinci Robot Stitches a Grape Back Together
Pictured: A laparoscopic robotic surgery machine. Patient-side cart of the da Vinci surgical system.
A medical robot is a robot used in the medical sciences. They include, but are not limited to, surgical robots. Most of these are telemanipulators, which use the surgeon's actions on one side to control the "effector" on the other side.
Types of Medical Robots:
Surgical robots: These robots either allow surgical operations to be carried out with greater precision than an unaided human surgeon, or allow remote surgery where a human surgeon is not physically present with the patient.
Rehabilitation robots: This group facilitates and supports the lives of infirm or elderly people, or those with dysfunction of body parts affecting movement. These robots are also used for rehabilitation and related procedures, such as training and therapy.
Biorobots: A group of robots designed to imitate the cognition of humans and animals.
Telepresence robots: Allow off-site medical professionals to move, look around, communicate, and participate from remote locations.
Pharmacy automation: Robotic systems to dispense oral solids in a retail pharmacy setting or preparing sterile IV admixtures in a hospital pharmacy setting.
Disinfection robot: has the capability to disinfect a whole room in mere minutes, generally using ultraviolet light technology. They have been used to fight Ebola virus disease. Pulsed light (PL) is a technique to decontaminate surfaces by killing microorganisms using pulses of an intense broad spectrum rich in UV-C light. UV-C is the portion of the electromagnetic spectrum corresponding to the band between 200 and 280 nm. PL works with xenon flash lamps that can produce flashes several times per second. Disinfection robots use pulsed UV light.
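To give a sense of why the 200–280 nm UV-C band cited above is germicidal, the short sketch below computes the energy carried by a single photon at several UV-C wavelengths using the standard relation E = hc/λ; the wavelengths and physical constants are standard values, and the 254 nm line is a commonly cited germicidal wavelength rather than a figure from the text.

```python
# Photon energy across the UV-C band (200-280 nm) mentioned above.
# E = h * c / wavelength; constants are standard physical values.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon at the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

for nm in (200, 254, 280):  # 254 nm is a commonly used germicidal line
    print(f"{nm} nm -> {photon_energy_ev(nm):.2f} eV")
```

Photons in this band carry roughly 4.4–6.2 eV each, enough energy to disrupt chemical bonds in microbial DNA, which is why UV-C (unlike longer-wavelength UV-A/UV-B) is effective for surface decontamination.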
See also:
- Medical Robots Conference
- Where Are the Elder Care Robots?
- Notable Chinese Firms Emerging in Medical Robots Sector (IEEE)
A woman with a transplanted uterus just gave birth — a first for the U.S. by the Washington Post (December 3, 2017)
YouTube Video: A woman with a transplanted uterus just gave birth
Pictured: The first baby born as a result of a womb transplant in the United States lies in the neonatal unit at Baylor University Medical Center in Dallas. (Baylor University Medical Center via AP)
For women with uterine factor infertility who want to be mothers, the calculus has always been heartbreakingly simple: No uterus means no pregnancy.
The equation changed drastically in 2014, when Swedish doctors delivered a healthy 3.9-pound baby that was the result of a successful uterus transplant.
Now, doctors at Baylor University say a woman born without a uterus has delivered a baby after a successful transplant, the first time the surgery has worked outside of the Swedish hospital that pioneered the procedure.
The success marked another step forward for transplant surgery aimed at improving a person’s life, not just saving it. Doctors have performed penis transplants for wounded troops, given a young boy two new hands and given a new nose, lips, palate, eyelids and jaw to a woman who was gruesomely disfigured after she was shot in the face.
The fact that the uterus transplant success in Sweden can be replicated is a promising sign for thousands of women who have been unable to conceive. And doctors at Baylor have sought to expand the limits of the procedure, using donated uteri that didn’t come from family members and, in some cases, organs that came from cadavers.
“To make the field grow and expand and have the procedure come out to more women, it has to be reproduced,” Liza Johannesson, a uterus transplant surgeon who left the Swedish team to join Baylor’s group, told the New York Times. “It was a very exciting birth. I’ve seen so many births and delivered so many babies, but this was a very special one.”
Baylor’s clinical trial was designed to include 10 women. Eight, including the new mother, have received the transplants so far. One recipient is pregnant, and two are trying to conceive. Four others had transplants that failed, and the organs had to be surgically removed.
[A 2-year-old’s kidney transplant was put on hold — after his donor father’s probation violation]
The surgeries differ from other transplants in one major way: They’re not intended to be permanent. Instead, they give a woman enough time to conceive a child. In vitro fertilized eggs are transferred to the woman’s womb, and after the baby is born, the uterus is removed via surgery.
That means the patient doesn’t have to spend a lifetime taking powerful drugs that suppress her immune system, which would put her at risk for dangerous long-term complications.
The university hasn’t released the names of the mother or the baby, saying they chose to remain anonymous.
But according to Tech Times, the donor uterus came from Taylor Siler, a Dallas nurse who has two children. She said she wanted to offer another woman a chance to give birth.
While this most recent birth is a step forward, uterine transplantation surgery is still in its very early days, and doctors conceded that there had been setbacks, particularly with the earliest volunteers.
In February 2016, Lindsey McFarland became the first woman to receive a uterus transplant in the United States. The organ came from a dead donor and was implanted during a nine-hour surgery.
Her story gave a sense of just how tenuous the nascent surgery is. She had to have her transplanted uterus removed after coming down with a yeast infection.
According to Newsweek, most of the women in Baylor’s trial had Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome, which makes pregnancy and giving birth impossible.
And for most of their lives many had been told that they wouldn’t be able to have children.
“We do transplants all day long,” Giuliano Testa, who heads the uterus transplant clinical trial at Baylor University Medical Center, told Time magazine. “This is not the same thing. I totally underestimated what this type of transplant does for these women. What I’ve learned emotionally, I do not have the words to describe.”
Corneal Transplants
YouTube Video of Corneal Transplant Surgery
Corneal transplantation, also known as corneal grafting, is a surgical procedure where a damaged or diseased cornea is replaced by donated corneal tissue (the graft). When the entire cornea is replaced it is known as penetrating keratoplasty and when only part of the cornea is replaced it is known as lamellar keratoplasty.
Keratoplasty simply means surgery to the cornea. The graft is taken from a recently dead individual with no known diseases or other factors that may affect the chance of survival of the donated tissue or the health of the recipient.
The cornea is the transparent front part of the eye that covers the iris, pupil and anterior chamber. The surgical procedure is performed by ophthalmologists, physicians who specialize in eyes, and is often done on an outpatient basis. Donors can be of any age, as is shown in the case of Janis Babson, who donated her eyes at age 10. Corneal transplantation is performed when medicines, conservative keratoconus surgery, and cross-linking can no longer heal the cornea.
Medical uses:
Indications include the following:
Risks:
The risks are similar to other intraocular procedures, but additionally include graft rejection (lifelong), detachment or displacement of lamellar transplants and primary graft failure.
There is also a risk of infection. Since the cornea has no blood vessels (it takes its nutrients from the aqueous humor) it heals much more slowly than a cut on the skin. While the wound is healing, it is possible that it might become infected by various microorganisms.
This risk is minimized by antibiotic prophylaxis (using antibiotic eyedrops, even when no infection exists).
There is a risk of cornea rejection, which occurs in about 20% of cases. Graft failure can occur at any time after the cornea has been transplanted, even years or decades later. The causes can vary, though it is usually due to new injury or illness. Treatment can be either medical or surgical, depending on the individual case. An early, technical cause of failure may be an excessively tight stitch cheesewiring through the sclera.
Procedures:
On the day of the surgery, the patient arrives to either a hospital or an outpatient surgery center, where the procedure will be performed. The patient is given a brief physical examination by the surgical team and is taken to the operating room. In the operating room, the patient lies down on an operating table and is either given general anesthesia, or local anesthesia and a sedative.
With anesthesia induced, the surgical team prepares the eye to be operated on and drapes the face around the eye. An eyelid speculum is placed to keep the lids open, and some lubrication is placed on the eye to prevent drying. In children, a metal ring is stitched to the sclera which will provide support of the sclera during the procedure.
Pre-operative examination:
In most instances, the person will meet with their ophthalmologist for an examination in the weeks or months preceding the surgery. During the exam, the ophthalmologist will examine the eye and diagnose the condition. The doctor will then discuss the condition with the patient, including the different treatment options available.
The doctor will also discuss the risks and benefits of the various options. If the patient elects to proceed with the surgery, the doctor will have the patient sign an informed consent form.
The doctor might also perform a physical examination and order lab tests, such as blood work, X-rays, or an EKG.
The surgery date and time will also be set, and the patient will be told where the surgery will take place. Within the United States, the supply of corneas is sufficient to meet the demand for surgery and research purposes. Therefore, unlike other tissues for transplantation, delays and shortages are not usually an issue.
Penetrating keratoplasty:
A trephine (a circular cutting device), which removes a circular disc of cornea, is used by the surgeon to cut the donor cornea. A second trephine is then used to remove a similar-sized portion of the patient's cornea. The donor tissue is then sewn in place with sutures.
Antibiotic eyedrops are placed, the eye is patched, and the patient is taken to a recovery area while the effects of the anesthesia wear off. The patient typically goes home following this and sees the doctor the following day for the first postoperative appointment.
Lamellar keratoplasty:
Lamellar keratoplasty encompasses several techniques which selectively replace diseased layers of the cornea while leaving healthy layers in place. The chief advantage is improved tectonic integrity of the eye. Disadvantages include the technically challenging nature of these procedures, which replace portions of a structure only 500 µm thick, and reduced optical performance of the donor/recipient interface compared to full-thickness keratoplasty.
Deep anterior lamellar keratoplasty:
In this procedure, the anterior layers of the central cornea are removed and replaced with donor tissue. Endothelial cells and the Descemets membrane are left in place. This technique is used in cases of anterior corneal opacifications, scars, and ectatic diseases such as keratoconus.
Endothelial keratoplasty:
Endothelial keratoplasty replaces the patient's endothelium with a transplanted disc of posterior stroma/Descemets/endothelium (DSEK) or Descemets/endothelium (DMEK).
This relatively new procedure has revolutionized treatment of disorders of the innermost layer of the cornea (endothelium). Unlike a full-thickness corneal transplant, the surgery can be performed with one or no sutures. Patients may recover functional vision in days to weeks, as opposed to up to a year with full thickness transplants.
However, an Australian study has shown that despite its benefits, the loss of endothelial cells that maintain transparency is much higher in DSEK compared to a full-thickness corneal transplant. The reason may be greater tissue manipulation during surgery, the study concluded.
During surgery the patient's corneal endothelium is removed and replaced with donor tissue. With DSEK, the donor includes a thin layer of stroma, as well as endothelium, and is commonly 100–150 µm thick. With DMEK, only the endothelium is transplanted. In the immediate postoperative period the donor tissue is held in position with an air bubble placed inside the eye (the anterior chamber). The tissue self-adheres in a short period and the air is adsorbed into the surrounding tissues.
Complications include displacement of the donor tissue requiring repositioning ("refloating"). This is more common with DMEK than DSEK. Folds in the donor tissue may reduce the quality of vision, requiring repair. Rejection of the donor tissue may require repeating the procedure. Gradual reduction in endothelial cell density over time can lead to loss of clarity and require repeating the procedure.
Patients with endothelial transplants frequently achieve best corrected vision in the 20/30 to 20/40 range, although some reach 20/20. Optical irregularity at the graft/host interface may limit vision below 20/20.
Synthetic corneas:
Main article: Keratoprosthesis
Boston keratoprosthesis:
The Boston keratoprosthesis is the most widely used synthetic cornea to date with over 900 procedures performed worldwide in 2008. The Boston KPro was developed at the Massachusetts Eye and Ear Infirmary under the leadership of Claes Dohlman, MD, PhD.
AlphaCor:
In cases where there have been several graft failures or the risk for keratoplasty is high, synthetic corneas can substitute successfully for donor corneas. Such a device contains a peripheral skirt and a transparent central region.
These two parts are connected on a molecular level by an interpenetrating polymer network, made from poly-2-hydroxyethyl methacrylate (pHEMA). AlphaCor is a U.S. FDA-approved type of synthetic cornea measuring 7.0 mm in diameter and 0.5 mm in thickness. The main advantages of synthetic corneas are that they are biocompatible, and the network between the parts and the device prevents complications that could arise at their interface. The probability of retention in one large study was estimated at 62% at 2 years follow-up.
Osteo-Odonto-Keratoprosthesis:
Main article: Osteo-Odonto-Keratoprosthesis
In a very rare and complex multi-step surgical procedure, employed to help the most disabled patients, a lamina of the person's tooth is grafted into the eye, with an artificial lens installed in the transplanted piece.
Prognosis:
The prognosis for visual restoration and maintenance of ocular health with corneal transplants is generally very good. Risks for failure or guarded prognoses are multifactorial. The type of transplant, the disease state requiring the procedure, the health of the other parts of the recipient eye and even the health of the donor tissue may all confer a more or less favorable prognosis.
The majority of corneal transplants result in significant improvement in visual function for many years or a lifetime. In cases of rejection or transplant failure, the surgery can generally be repeated.
Alternatives:
Contact lenses:
Different types of contact lenses may be used to delay or eliminate the need for corneal transplantation in corneal disorders.
Phototherapeutic keratectomy:
Diseases that only affect the surface of the cornea can be treated with an operation called phototherapeutic keratectomy (PTK). With the precision of an excimer laser and a modulating agent coating the eye, irregularities on the surface can be removed. However, in most of the cases where corneal transplantation is recommended, PTK would not be effective.
Intrastromal corneal ring segments:
In corneal disorders where vision correction is not possible by using contact lenses, intrastromal corneal ring segments may be used to flatten the cornea, which is intended to relieve the nearsightedness and astigmatism.
In this procedure, an ophthalmologist makes an incision in the cornea of the eye, and inserts two crescent or semi-circular shaped ring segments between the layers of the corneal stroma, one on each side of the pupil.
Intrastromal corneal rings were approved in 2004 by the Food and Drug Administration for people with keratoconus who cannot adequately correct their vision with glasses or contact lenses. They were approved under the Humanitarian Device Exemption, which means the manufacturer did not have to demonstrate effectiveness.
Corneal collagen cross-linking:
Corneal collagen cross-linking may delay or eliminate the need for corneal transplantation in keratoconus and post-LASIK ectasia, however as of 2015 it is lacking sufficient evidence to determine if it is useful in keratoconus.
Epidemiology:
Corneal transplant is one of the most common transplant procedures. Although approximately 100,000 procedures are performed worldwide each year, some estimates report that 10,000,000 people are affected by various disorders that would benefit from corneal transplantation.
In Australia, approximately 1,500 grafts are performed each year. According to the NHS Blood and Transplant, over 2,300 corneal transplant procedures are performed each year in the United Kingdom. Between April 1, 2005 and March 31, 2006, 2,503 people received corneal transplants in the UK.
Click on any of the following blue hyperlinks for more about Corneal Transplants:
Keratoplasty simply means surgery to the cornea. The graft is taken from a recently dead individual with no known diseases or other factors that may affect the chance of survival of the donated tissue or the health of the recipient.
The cornea is the transparent front part of the eye that covers the iris, pupil and anterior chamber. The surgical procedure is performed by ophthalmologists, physicians who specialize in eyes, and is often done on an outpatient basis. Donors can be of any age, as shown in the case of Janis Babson, who donated her eyes at age 10. Corneal transplantation is performed when medicines, conservative keratoconus surgery and cross-linking can no longer heal the cornea.
Medical uses:
Indications include the following:
- Optical: To improve visual acuity by replacing opaque or distorted host tissue with clear, healthy donor tissue. The most common indication in this category is pseudophakic bullous keratopathy, followed by keratoconus, corneal degeneration, keratoglobus and dystrophy, as well as scarring due to keratitis and trauma.
- Tectonic/reconstructive: To preserve corneal anatomy and integrity in patients with stromal thinning and descemetoceles, or to reconstruct the anatomy of the eye, e.g. after corneal perforation.
- Therapeutic: To remove inflamed corneal tissue unresponsive to treatment by antibiotics or anti-virals.
- Cosmetic: To improve the appearance of patients with corneal scars that have given a whitish or opaque hue to the cornea.
Risks:
The risks are similar to other intraocular procedures, but additionally include graft rejection (lifelong), detachment or displacement of lamellar transplants and primary graft failure.
There is also a risk of infection. Since the cornea has no blood vessels (it takes its nutrients from the aqueous humor) it heals much more slowly than a cut on the skin. While the wound is healing, it is possible that it might become infected by various microorganisms.
This risk is minimized by antibiotic prophylaxis (using antibiotic eyedrops, even when no infection exists).
There is a risk of cornea rejection, which occurs in about 20% of cases. Graft failure can occur at any time after the cornea has been transplanted, even years or decades later. The causes can vary, though it is usually due to new injury or illness. Treatment can be either medical or surgical, depending on the individual case. An early, technical cause of failure may be an excessively tight stitch cheesewiring through the sclera.
Procedures:
On the day of the surgery, the patient arrives at either a hospital or an outpatient surgery center, where the procedure will be performed. The patient is given a brief physical examination by the surgical team and is taken to the operating room. There, the patient lies down on an operating table and is given either general anesthesia, or local anesthesia and a sedative.
With anesthesia induced, the surgical team prepares the eye to be operated on and drapes the face around the eye. An eyelid speculum is placed to keep the lids open, and some lubrication is placed on the eye to prevent drying. In children, a metal ring is stitched to the sclera to support it during the procedure.
Pre-operative examination:
In most instances, the person will meet with their ophthalmologist for an examination in the weeks or months preceding the surgery. During the exam, the ophthalmologist will examine the eye and diagnose the condition. The doctor will then discuss the condition with the patient, including the different treatment options available.
The doctor will also discuss the risks and benefits of the various options. If the patient elects to proceed with the surgery, the doctor will have the patient sign an informed consent form.
The doctor might also perform a physical examination and order lab tests, such as blood work, X-rays, or an EKG.
The surgery date and time will also be set, and the patient will be told where the surgery will take place. Within the United States, the supply of corneas is sufficient to meet the demand for surgery and research purposes. Therefore, unlike other tissues for transplantation, delays and shortages are not usually an issue.
Penetrating keratoplasty:
A trephine (a circular cutting device), which removes a circular disc of cornea, is used by the surgeon to cut the donor cornea. A second trephine is then used to remove a similar-sized portion of the patient's cornea. The donor tissue is then sewn in place with sutures.
Antibiotic eyedrops are placed, the eye is patched, and the patient is taken to a recovery area while the effects of the anesthesia wear off. The patient typically goes home following this and sees the doctor the following day for the first postoperative appointment.
Lamellar keratoplasty:
Lamellar keratoplasty encompasses several techniques which selectively replace diseased layers of the cornea while leaving healthy layers in place. The chief advantage is improved tectonic integrity of the eye. Disadvantages include the technically challenging nature of these procedures, which replace portions of a structure only 500 µm thick, and reduced optical performance of the donor/recipient interface compared to full-thickness keratoplasty.
Deep anterior lamellar keratoplasty:
In this procedure, the anterior layers of the central cornea are removed and replaced with donor tissue. Endothelial cells and Descemet's membrane are left in place. This technique is used in cases of anterior corneal opacification, scars, and ectatic diseases such as keratoconus.
Endothelial keratoplasty:
Endothelial keratoplasty replaces the patient's endothelium with a transplanted disc of posterior stroma/Descemet's membrane/endothelium (DSEK) or Descemet's membrane/endothelium (DMEK).
This relatively new procedure has revolutionized treatment of disorders of the innermost layer of the cornea (endothelium). Unlike a full-thickness corneal transplant, the surgery can be performed with one or no sutures. Patients may recover functional vision in days to weeks, as opposed to up to a year with full thickness transplants.
However, an Australian study has shown that despite its benefits, the loss of endothelial cells that maintain transparency is much higher in DSEK compared to a full-thickness corneal transplant. The reason may be greater tissue manipulation during surgery, the study concluded.
During surgery the patient's corneal endothelium is removed and replaced with donor tissue. With DSEK, the donor tissue includes a thin layer of stroma as well as endothelium, and is commonly 100–150 µm thick. With DMEK, only the endothelium is transplanted. In the immediate postoperative period the donor tissue is held in position with an air bubble placed inside the eye (the anterior chamber). The tissue self-adheres in a short period and the air is absorbed into the surrounding tissues.
Complications include displacement of the donor tissue requiring repositioning ("refloating"). This is more common with DMEK than DSEK. Folds in the donor tissue may reduce the quality of vision, requiring repair. Rejection of the donor tissue may require repeating the procedure. Gradual reduction in endothelial cell density over time can lead to loss of clarity and require repeating the procedure.
Patients with endothelial transplants frequently achieve best corrected vision in the 20/30 to 20/40 range, although some reach 20/20. Optical irregularity at the graft/host interface may limit vision below 20/20.
Synthetic corneas:
Main article: Keratoprosthesis
Boston keratoprosthesis:
The Boston keratoprosthesis is the most widely used synthetic cornea to date with over 900 procedures performed worldwide in 2008. The Boston KPro was developed at the Massachusetts Eye and Ear Infirmary under the leadership of Claes Dohlman, MD, PhD.
AlphaCor:
In cases where there have been several graft failures or the risk for keratoplasty is high, synthetic corneas can substitute successfully for donor corneas. Such a device contains a peripheral skirt and a transparent central region.
These two parts are connected on a molecular level by an interpenetrating polymer network made from poly-2-hydroxyethyl methacrylate (pHEMA). AlphaCor is a U.S. FDA-approved type of synthetic cornea measuring 7.0 mm in diameter and 0.5 mm in thickness. The main advantages of synthetic corneas are that they are biocompatible, and that the network between the parts of the device prevents complications that could arise at their interface. The probability of retention in one large study was estimated at 62% at two years of follow-up.
Osteo-Odonto-Keratoprosthesis:
Main article: Osteo-Odonto-Keratoprosthesis
In a very rare and complex multi-step surgical procedure, employed to help the most disabled patients, a lamina of the person's tooth is grafted into the eye, with an artificial lens installed in the transplanted piece.
Prognosis:
The prognosis for visual restoration and maintenance of ocular health with corneal transplants is generally very good. Risks for failure or guarded prognoses are multifactorial. The type of transplant, the disease state requiring the procedure, the health of the other parts of the recipient eye and even the health of the donor tissue may all confer a more or less favorable prognosis.
The majority of corneal transplants result in significant improvement in visual function for many years or a lifetime. In cases of rejection or transplant failure, the surgery can generally be repeated.
Alternatives:
Contact lenses:
Different types of contact lenses may be used to delay or eliminate the need for corneal transplantation in corneal disorders.
Phototherapeutic keratectomy:
Diseases that only affect the surface of the cornea can be treated with an operation called phototherapeutic keratectomy (PTK). With the precision of an excimer laser and a modulating agent coating the eye, irregularities on the surface can be removed. However, in most of the cases where corneal transplantation is recommended, PTK would not be effective.
Intrastromal corneal ring segments:
In corneal disorders where vision correction is not possible with contact lenses, intrastromal corneal ring segments may be used to flatten the cornea, which is intended to relieve nearsightedness and astigmatism.
In this procedure, an ophthalmologist makes an incision in the cornea of the eye, and inserts two crescent or semi-circular shaped ring segments between the layers of the corneal stroma, one on each side of the pupil.
Intrastromal corneal rings were approved in 2004 by the Food and Drug Administration for people with keratoconus who cannot adequately correct their vision with glasses or contact lenses. They were approved under the Humanitarian Device Exemption, which means the manufacturer did not have to demonstrate effectiveness.
Corneal collagen cross-linking:
Corneal collagen cross-linking may delay or eliminate the need for corneal transplantation in keratoconus and post-LASIK ectasia; however, as of 2015 there was insufficient evidence to determine whether it is useful in keratoconus.
Epidemiology:
Corneal transplant is one of the most common transplant procedures. Although approximately 100,000 procedures are performed worldwide each year, some estimates report that 10,000,000 people are affected by various disorders that would benefit from corneal transplantation.
In Australia, approximately 1,500 grafts are performed each year. According to the NHS Blood and Transplant, over 2,300 corneal transplant procedures are performed each year in the United Kingdom. Between April 1, 2005 and March 31, 2006, 2,503 people received corneal transplants in the UK.
Click on any of the following blue hyperlinks for more about Corneal Transplants:
- History
- Research
- Society and culture
- Facts About the Cornea and Corneal Disease (The National Eye Institute, NEI)
Virtual Healthcare: The Future of Medicine?
YouTube Video: This Is What The Future Of Health Care Looks Like
Pictured below: The eHealth Enhanced Chronic Care Model: A Theory Derivation Approach
eHealth (also written e-health) is a relatively recent healthcare practice supported by electronic processes and communication, dating back to at least 1999. Usage of the term varies. A study in 2005 found 51 unique definitions.
Some argue that it is interchangeable with health informatics with a broad definition covering electronic/digital processes in health while others use it in the narrower sense of healthcare practice using the Internet.
It can also include health applications and links on mobile phones, referred to as mHealth or m-Health. Since about 2011, growing recognition of the need for better cyber-security and regulation has highlighted the need for specialized resources to develop safer eHealth solutions that can withstand these growing threats.
The term can encompass a range of services or systems at the intersection of medicine/healthcare and information technology, such as electronic health records, telemedicine, and mobile health applications.
Several authors have noted the variable usage in the term, from being specific to the use of the Internet in healthcare to being generally around any use of computers in healthcare.
Various authors have considered the evolution of the term and its usage, and how this maps to changes in health informatics and healthcare generally. Oh et al., in a 2005 systematic review of the term's usage, offered a definition of eHealth as a set of technological themes in health today, based on commerce, activities, stakeholders, outcomes, locations, or perspectives.
One thing that all sources seem to agree on is that e-Health initiatives do not originate with the patient, though the patient may be a member of a patient organization that seeks to do this, as in the e-Patient movement.
Telemedicine is the use of telecommunication and information technology to provide clinical health care from a distance. It has been used to overcome distance barriers and to improve access to medical services that would often not be consistently available in distant rural communities. It is also used to save lives in critical care and emergency situations.
Although there were distant precursors to telemedicine, it is essentially a product of 20th century telecommunication and information technologies. These technologies permit communications between patient and medical staff with both convenience and fidelity, as well as the transmission of medical, imaging and health informatics data from one site to another.
Early forms of telemedicine achieved with telephone and radio have been supplemented with videotelephony, advanced diagnostic methods supported by distributed client/server applications, and additionally with telemedical devices to support in-home care.
The definition of telemedicine is somewhat controversial. Some definitions (such as the definition given by the World Health Organization) include all aspects of healthcare including preventive care. The American Telemedicine Association uses the terms telemedicine and telehealth interchangeably, although it acknowledges that telehealth is sometimes used more broadly for remote health not involving active clinical treatments.
eHealth is another related term, used particularly in the U.K. and Europe, as an umbrella term that includes telehealth, electronic medical records, and other components of health information technology.
Benefits and Drawbacks:
Telemedicine can be beneficial to patients in isolated communities and remote regions, who can receive care from doctors or specialists far away without the patient having to travel to visit them.
Recent developments in mobile collaboration technology can allow healthcare professionals in multiple locations to share information and discuss patient issues as if they were in the same place.
Remote patient monitoring through mobile technology can reduce the need for outpatient visits and enable remote prescription verification and drug administration oversight, potentially significantly reducing the overall cost of medical care. Telemedicine can also facilitate medical education by allowing workers to observe experts in their fields and share best practices more easily.
Telemedicine can eliminate the possible transmission of infectious diseases or parasites between patients and medical staff. This is particularly an issue where MRSA is a concern. Additionally, some patients who feel uncomfortable in a doctor's office may do better remotely; for example, white coat syndrome may be avoided. Patients who are home-bound and would otherwise require an ambulance to move them to a clinic are also a consideration.
The downsides of telemedicine include the cost of telecommunication and data management equipment and of technical training for medical personnel who will employ it. Virtual medical treatment also entails potentially decreased human interaction between medical professionals and patients, an increased risk of error when medical services are delivered in the absence of a registered professional, and an increased risk that protected health information may be compromised through electronic storage and transmission.
There is also a concern that telemedicine may actually decrease time efficiency due to the difficulties of assessing and treating patients through virtual interactions; for example, it has been estimated that a teledermatology consultation can take up to thirty minutes, whereas fifteen minutes is typical for a traditional consultation.
Additionally, potentially poor quality of transmitted records, such as images or patient progress reports, and decreased access to relevant clinical information are quality assurance risks that can compromise the quality and continuity of patient care for the reporting doctor.
Other obstacles to the implementation of telemedicine include unclear legal regulation for some telemedical practices and difficulty claiming reimbursement from insurers or government programs in some fields.
Another disadvantage of telemedicine is the inability to start treatment immediately. For example, a patient suffering from a bacterial infection might be given an antibiotic hypodermic injection in the clinic, and observed for any reaction, before that antibiotic is prescribed in pill form.
Digital health is the convergence of digital and genomic technologies with health, healthcare, living, and society to enhance the efficiency of healthcare delivery and make medicines more personalized and precise.
The discipline involves the use of information and communication technologies to help address the health problems and challenges faced by patients. These technologies include both hardware and software solutions and services, including telemedicine, web-based analysis, email, mobile phones and applications, text messages, and clinic or remote monitoring sensors.
Generally, digital health is concerned about the development of interconnected health systems to improve the use of computational technologies, smart devices, computational analysis techniques and communication media to aid healthcare professionals and patients manage illnesses and health risks, as well as promote health and wellbeing.
Digital health is a multi-disciplinary domain which involves many stakeholders, including clinicians, researchers and scientists with a wide range of expertise in healthcare, engineering, social sciences, public health, health economics and management.
Elements:
As an outgrowth of the Digital Revolution characterized by "the mass production and widespread use of digital logic circuits, and its derived technologies, including the computer, digital cellular phone, and the Internet," key elements of digital health include wireless devices, hardware sensors and software sensing technologies, microprocessors and integrated circuits, the Internet, social networking, mobile/cellular networks and body area networks, health information technology, genomics, and personal genetic information.
Domains:
Various domains span digital health, including:
- Healthcare technology assessment and monitoring to prevent, diagnose or treat diseases, to monitor patients, or to support rehabilitation or long-term care.
- Assistive technologies and rehabilitation robotics that help people with disabilities perform daily tasks independently, along with unobtrusive monitoring sensors and wearable devices.
- Clinical decision support to aid clinicians at the point of care, including diagnosis, analysis and interpretation of patient-related data.
- Computational simulations, modeling and machine learning approaches to model health-related outcomes.
- E-health, which delivers health information and services and enables data transmission, storage and retrieval for clinical, educational and administrative purposes.
- Health systems engineering applications in health care systems, including knowledge discovery, decision making, optimization, human factors engineering, quality engineering, and information and communication technology.
- Human-computer-environment interactions, where human-computer interaction principles tend to be based around user-centered, experience-centered or activity-centered designs.
- Virtual reality, video gaming rehabilitation, and serious games that provide a social and interactive experience for healthcare student and patient education.
- Speech and hearing systems, including natural language processing, speech recognition techniques, and medical devices that aid speech and hearing (e.g. cochlear implants).
- Telehealth, telemedicine, telecare, telecoaching and telerehabilitation, which provide various forms of patient care remotely.
Implementation:
National digital programs exist to support healthcare, such as those of Canada Health Infoway built on core systems of patient and provider registries, clinical and diagnostic imaging systems, clinical reports and immunizations. By 2014, 75% of Canadian physicians were using electronic medical records.
In Uganda and Mozambique, partnerships between patients with cell phones, local and regional governments, technologists, non-governmental organizations, academia, and industry have enabled mHealth solutions.
Innovation cycle:
The innovation process for digital health is an iterative cycle of technological solutions, classified into five main activity processes: identifying the healthcare problem, conducting research, developing a digital solution, evaluating the solution, and implementing it in working clinical practice.
___________________________________________________________________________
Online doctor is a term that emerged during the 2000s, used by both the media and academics, to describe a generation of physicians and health practitioners who deliver healthcare, including drug prescription, over the internet.
Emergence of online doctors:
In the 2000s, many people came to treat the internet as a first, or at least a major, source of information and communication. Health advice is now the second-most popular topic, after pornography, that people search for on the internet.
With the advent of broadband and videoconferencing, many individuals have turned to online doctors to receive online consultations and purchase prescription drugs. Use of this technology has many advantages for both the doctor and the patient, including cost savings, convenience, accessibility, and improved privacy and communication.
In the US, a 2006 study found that searching for information on prescription or over-the-counter drugs was the fifth most popular search topic, and a 2004 study found that 4% of Americans had purchased prescription medications online.
A 2009 survey conducted by Geneva-based Health On the Net Foundation found one-in-ten Europeans buys medicines from websites and one-third claim to use online consultation.
In Germany, approximately seven million people buy from mail-order pharmacies, and mail-order sales account for approximately 8–10% of total pharmaceutical sales.
In 2008, the Royal Pharmaceutical Society of Great Britain reported that approximately two million people in Great Britain were regularly purchasing pharmaceuticals online (both with a prescription from registered online UK doctors and without prescriptions from other websites).
A recent survey commissioned by Pfizer, the Medicines and Healthcare products Regulatory Agency, RPSGB, the Patients Association and HEART UK found that 15% of the British adults asked had bought a prescription-only medicine online.
In developed countries, many online doctors prescribe so-called ‘lifestyle drugs’, such as for weight loss, hair loss or erectile dysfunction. The RPSGB has identified the most popular products prescribed online as Prozac (an antidepressant), Viagra (for erectile dysfunction), Valium (a tranquiliser), Ritalin (a psychostimulant), Serostim (a synthetic growth hormone) and Provigil (a psychostimulant).
A study in the USA has also shown that antibiotics are commonly available online without prescription.
Potential Harm:
Traditionalist critics of online doctors argue that an online doctor cannot provide proper examinations or diagnosis either by email or video call. Such consultations, they argue, will always be dangerous, with the potential for serious disease to be missed. There are also concerns that the absence of proximity leads to treatment by unqualified doctors or patients using false information to secure dangerous drugs.
Proponents argue there is little difference between an e-mail consultation and the sort of telephone assessment and advice that doctors regularly make out of hours or in circumstances where doctors cannot physically examine a patient (e.g., jungle medicine).
Laurence Buckman, chairman of the British Medical Association’s GPs’ committee, says that online consultations make life easier for doctors and patients when used properly. "Many GPs will be very happy with it and it could be useful. When it’s a regular patient you know well, it follows on from telephone consulting.
Voice is essential, vision is desirable. The problem comes when I don’t know the patient". Niall Dickson, chief executive of the General Medical Council, says: "We trust doctors to use their judgement to decide whether they should see a patient in person. Online consultations will be appropriate for some patients, whereas other patients will need a physical examination or may benefit from seeing their doctor in person".
Past and Future Developments
The first medical consulting website in the US was WebMD, founded in 1996 by Jim Clark (one of the founders of Netscape) and Pavan Nigam as Healthscape. Currently, its website carries information regarding health and health care, including a symptom checklist, pharmacy information, drug information, blogs of physicians with specific topics, and a place to store personal medical information.
As of February 2011, WebMD’s network of sites reaches an average of 86.4 million visitors per month and is the leading health portal in the United States.
Other popular US healthcare and medical consulting sites include the following:
Many have experienced dramatic growth. (Healthline, launched in 2005, grew by 269% to 2.7 million average monthly unique visitors in Q1 2007 from 0.8 million average monthly unique visitors in Q1 2006).
Niche consulting sites are also popular including SeniorNet, which deals with age-related syndromes and 4collegewomen.org and GirlsHealth.gov, which target young women.
Several American online doctor companies, including Sherpaa, MDlive, Teladoc, First Stop Health, American Well, WebDoc247, MeMD, and Ringadoc, provide consultations with doctors over the phone or the Internet. Prominent San Francisco-based venture capital firm Founders Fund called such services "extraordinarily fast" and predicted that they will "bring relief to thousands of people with immediate medical needs".
In the UK, e-med was the first online health site to offer both a diagnosis and prescriptions to patients over the Internet. It was established in March 2000 by Dr. Julian Eden, In 2010, DrThom claimed to have 100,000 patients visit their site
NHS Direct (currently NHS Choices) is the free health advice and information service provided by the National Health Service (NHS) for residents and visitors in the UK, with advice offered 24 hours a day via telephone and web contact.
Over 1.5 million patients visit the website every month. More recently, a number of online doctors have emerged in the country, firms such as Now Healthcare Group, Dr Fox Pharmacy, Push Doctor and Lloyds Pharmacy offer consultation and prescriptions via the Internet.
In Australia HealthDirect is the free health advice and information service provided by the government with advice offered 24 hours a day via telephone. Medicare began funding online consultations for specialists on 1 July 2011 which has seen a slow but steady increase in volumes.
New advances in digital information technology mean that in future online doctors and healthcare websites may offer advanced scanning and diagnostic services over the internet.
The Nuffield Council on Bioethics identifies such services as direct-to-consumer body imaging (such as CT and MRI scans) and personal genetic profiling for individual susceptibility to disease. Professor Sir Bruce Keogh, the medical director of the UK NHS, is drawing up plans to introduce online consultations via Skype and has said IT will "completely change the way [doctors] deliver medicine".
This concept is gaining more importance and there are some companies who started cashing on it, few companies like Zocdoc have started providing online doctor booking service.
See also:
Some argue that it is interchangeable with health informatics with a broad definition covering electronic/digital processes in health while others use it in the narrower sense of healthcare practice using the Internet.
It can also include health applications and links on mobile phones, referred to as mHealth or m-Health. Since about 2011, the increasing recognition of the need for better cyber-security and regulation may result in the need for these specialized resources to develop safer eHealth solutions that can withstand these growing threats.
The term can encompass a range of services or systems that are at the edge of medicine/healthcare and information technology, including:
- Electronic health record: enabling the communication of patient data between different healthcare professionals (GPs, specialists, etc.);
- Computerized physician order entry: a means of requesting diagnostic tests and treatments electronically and receiving the results;
- ePrescribing: access to prescribing options, printing prescriptions for patients and, sometimes, electronic transmission of prescriptions from doctors to pharmacists;
- Clinical decision support system: providing information electronically about protocols and standards for healthcare professionals to use in diagnosing and treating patients;
- Telemedicine: physical and psychological diagnosis and treatment at a distance, including telemonitoring of patients' functions;
- Consumer health informatics: use of electronic resources on medical topics by healthy individuals or patients;
- Health knowledge management: e.g., an overview of the latest medical journals, best-practice guidelines, or epidemiological tracking (examples include physician resources such as Medscape and MDLinx);
- Virtual healthcare teams: consisting of healthcare professionals who collaborate and share information on patients through digital equipment (for transmural care);
- mHealth or m-Health: the use of mobile devices for collecting aggregate and patient-level health data, providing healthcare information to practitioners, researchers, and patients, real-time monitoring of patient vitals, and direct provision of care (via mobile telemedicine);
- Medical research using grids: powerful computing and data management capabilities for handling large amounts of heterogeneous data;
- Health informatics / healthcare information systems: often also software solutions for appointment scheduling, patient data management, work-schedule management, and other administrative tasks surrounding health.
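The component systems listed above can be sketched as a few minimal data types; for example, an electronic health record that accepts an ePrescribing order. The field names and the `e_prescribe` helper are purely illustrative — real systems follow much richer standards such as HL7 FHIR.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative schema only; real EHR standards (e.g., HL7 FHIR) define
# far richer resources for patients and medication orders.

@dataclass
class Prescription:
    drug: str
    dose: str
    prescriber: str

@dataclass
class HealthRecord:
    patient_id: str
    prescriptions: List[Prescription] = field(default_factory=list)

    def e_prescribe(self, rx: Prescription) -> Prescription:
        """ePrescribing: record an order so it can later be
        transmitted electronically to a pharmacy."""
        self.prescriptions.append(rx)
        return rx

record = HealthRecord(patient_id="P-001")
record.e_prescribe(Prescription("amoxicillin", "500 mg", "Dr. Example"))
```

The point of the sketch is the architectural idea: each eHealth component (record keeping, order entry, prescribing) is a distinct operation over shared patient data.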
Several authors have noted the variable usage in the term, from being specific to the use of the Internet in healthcare to being generally around any use of computers in healthcare.
Various authors have considered the evolution of the term and its usage, and how this maps to changes in health informatics and healthcare generally. Oh et al., in a 2005 systematic review of the term's usage, offered a definition of eHealth as a set of technological themes in health, based on commerce, activities, stakeholders, outcomes, locations, or perspectives.
One thing that all sources seem to agree on is that e-Health initiatives do not originate with the patient, though the patient may be a member of a patient organization that seeks to do this, as in the e-Patient movement.
Click on any of the following blue hyperlinks for more about eHealth:
- eHealth literacy
- Data exchange
- Early adopters
- E-mental health
- Cybermedicine
- Self-monitoring healthcare devices
- Evaluation
- In developing countries
- See also:
- Center for Telehealth and E-Health Law
- eHealthInsurance
- EUDRANET
- European Institute for Health Records
- Health 2.0
- Health blog
- Technology and mental health issues
- Telehealth
Telemedicine is the use of telecommunication and information technology to provide clinical health care from a distance. It has been used to overcome distance barriers and to improve access to medical services that would often not be consistently available in distant rural communities. It is also used to save lives in critical care and emergency situations.
Although there were distant precursors to telemedicine, it is essentially a product of 20th century telecommunication and information technologies. These technologies permit communications between patient and medical staff with both convenience and fidelity, as well as the transmission of medical, imaging and health informatics data from one site to another.
Early forms of telemedicine achieved with telephone and radio have been supplemented with videotelephony, advanced diagnostic methods supported by distributed client/server applications, and additionally with telemedical devices to support in-home care.
The definition of telemedicine is somewhat controversial. Some definitions (such as the definition given by the World Health Organization) include all aspects of healthcare including preventive care. The American Telemedicine Association uses the terms telemedicine and telehealth interchangeably, although it acknowledges that telehealth is sometimes used more broadly for remote health not involving active clinical treatments.
eHealth is another related term, used particularly in the U.K. and Europe, as an umbrella term that includes telehealth, electronic medical records, and other components of health information technology.
Benefits and Drawbacks:
Telemedicine can be beneficial to patients in isolated communities and remote regions, who can receive care from doctors or specialists far away without the patient having to travel to visit them.
Recent developments in mobile collaboration technology can allow healthcare professionals in multiple locations to share information and discuss patient issues as if they were in the same place.
Remote patient monitoring through mobile technology can reduce the need for outpatient visits and enable remote prescription verification and drug administration oversight, potentially significantly reducing the overall cost of medical care. Telemedicine can also facilitate medical education by allowing workers to observe experts in their fields and share best practices more easily.
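The remote-monitoring idea above reduces to a simple pattern: stream readings from a device, flag those outside an accepted range, and route only the flagged ones to a clinician. A minimal sketch follows; the heart-rate thresholds are illustrative placeholders, not clinical guidance.

```python
# Threshold-based remote monitoring sketch. Readings are
# (timestamp, heart_rate) pairs; thresholds are illustrative only.

def flag_readings(readings, low=50, high=120):
    """Return the readings outside the accepted heart-rate range,
    i.e., those that would trigger clinician follow-up."""
    return [(t, hr) for t, hr in readings if hr < low or hr > high]

stream = [("08:00", 72), ("08:05", 130), ("08:10", 45)]
alerts = flag_readings(stream)  # only 08:05 and 08:10 are flagged
```

Real systems add smoothing, per-patient baselines, and escalation rules, but the cost saving comes from exactly this filtering: most readings never require a visit.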
Telemedicine can eliminate the possible transmission of infectious diseases or parasites between patients and medical staff. This is particularly an issue where MRSA is a concern. Additionally, some patients who feel uncomfortable in a doctor's office may do better remotely. For example, white coat syndrome may be avoided. Patients who are home-bound and would otherwise require an ambulance to move them to a clinic are also a consideration.
The downsides of telemedicine include the cost of telecommunication and data management equipment and of technical training for medical personnel who will employ it. Virtual medical treatment also entails potentially decreased human interaction between medical professionals and patients, an increased risk of error when medical services are delivered in the absence of a registered professional, and an increased risk that protected health information may be compromised through electronic storage and transmission.
There is also a concern that telemedicine may actually decrease time efficiency due to the difficulties of assessing and treating patients through virtual interactions; for example, it has been estimated that a teledermatology consultation can take up to thirty minutes, whereas fifteen minutes is typical for a traditional consultation.
Additionally, potentially poor quality of transmitted records, such as images or patient progress reports, and decreased access to relevant clinical information are quality assurance risks that can compromise the quality and continuity of patient care for the reporting doctor.
Other obstacles to the implementation of telemedicine include unclear legal regulation for some telemedical practices and difficulty claiming reimbursement from insurers or government programs in some fields.
Another disadvantage of telemedicine is the inability to start treatment immediately. For example, a patient suffering from a bacterial infection might be given an antibiotic hypodermic injection in the clinic, and observed for any reaction, before that antibiotic is prescribed in pill form.
Click on any of the following blue hyperlinks for more about Telemedicine:
- History
- Types
- Specialist care delivery
- Licensure
- U.S. licensing and regulatory issues
- Companies
- Advanced and experimental services
- Enabling technologies
- Developing countries
- See also:
Digital health is the convergence of digital and genomic technologies with health, healthcare, living, and society to enhance the efficiency of healthcare delivery and make medicines more personalized and precise.
The discipline involves the use of information and communication technologies to help address the health problems and challenges faced by patients. These technologies include both hardware and software solutions and services, including telemedicine, web-based analysis, email, mobile phones and applications, text messages, and clinic or remote monitoring sensors.
Generally, digital health is concerned about the development of interconnected health systems to improve the use of computational technologies, smart devices, computational analysis techniques and communication media to aid healthcare professionals and patients manage illnesses and health risks, as well as promote health and wellbeing.
Digital health is a multi-disciplinary domain which involves many stakeholders, including clinicians, researchers and scientists with a wide range of expertise in healthcare, engineering, social sciences, public health, health economics and management.
Elements:
As an outgrowth of the Digital Revolution characterized by "the mass production and widespread use of digital logic circuits, and its derived technologies, including the computer, digital cellular phone, and the Internet," key elements of digital health include wireless devices, hardware sensors and software sensing technologies, microprocessors and integrated circuits, the Internet, social networking, mobile/cellular networks and body area networks, health information technology, genomics, and personal genetic information.
Domains:
Various domains span digital health. These include Healthcare technology assessment and monitoring to prevent, diagnose or treat diseases, monitoring of patients, or for rehabilitation or long-term care.
Such technologies include Assistive technologies and rehabilitation robotics for people with disabilities so as to aid in their independence to perform daily tasks, unobtrusive monitoring sensors and wearable devices.
Clinical decision support aids clinicians at the point of care with diagnosis and with the analysis and interpretation of patient-related data. Computational simulations, modeling, and machine-learning approaches can model health-related outcomes. E-health delivers health information and services, enabling data transmission, storage, and retrieval for clinical, educational, and administrative purposes.
Health systems engineering applies knowledge discovery, decision making, optimization, human factors engineering, quality engineering, and information and communication technology to healthcare systems. Human-computer-environment interaction principles tend to be based around user-centered, experience-centered, or activity-centered designs.
Virtual reality, video-game rehabilitation, and serious games provide social and interactive experiences for educating healthcare students and patients. Speech and hearing systems support natural language processing and speech recognition, and medical devices can aid speech and hearing (e.g., cochlear implants). Telehealth, telemedicine, telecare, telecoaching, and telerehabilitation provide various forms of patient care at a distance.
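As one concrete illustration of clinical decision support at the point of care, consider a rule that warns when a proposed drug conflicts with a patient's recorded allergies. The drug names and the cross-reactivity table below are invented for illustration; real systems draw on curated pharmacology databases.

```python
# Toy rule-based clinical decision support check. The cross-reactive
# pair below is illustrative, not a pharmacological reference.

ALLERGY_WARNINGS = {
    ("penicillin", "amoxicillin"),  # (allergy, drug) pairs to flag
}

def check_order(allergies, proposed_drug):
    """Return a list of warning strings for a proposed drug order,
    given the patient's allergy list. Empty list means no rule fired."""
    warnings = []
    for allergy in allergies:
        if proposed_drug == allergy or (allergy, proposed_drug) in ALLERGY_WARNINGS:
            warnings.append(f"{proposed_drug} conflicts with allergy to {allergy}")
    return warnings

ws = check_order(["penicillin"], "amoxicillin")  # one warning fires
```

Rule tables like this are the simplest form of decision support; the machine-learning approaches mentioned in the text replace fixed rules with models trained on outcome data.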
Implementation:
National digital programs exist to support healthcare, such as those of Canada Health Infoway built on core systems of patient and provider registries, clinical and diagnostic imaging systems, clinical reports and immunizations. By 2014, 75% of Canadian physicians were using electronic medical records.
In Uganda and Mozambique, partnerships between patients with cell phones, local and regional governments, technologists, non-governmental organizations, academia, and industry have enabled mHealth solutions.
Innovation cycle:
The innovation process for digital health is an iterative cycle in which technological solutions pass through five main activity processes: identifying the healthcare problem, researching it, developing a digital solution, evaluating that solution, and implementing it in working clinical practice.
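Because the process is iterative, the five stages can be modeled as a repeating cycle in which evaluation and implementation feed back into problem identification. The stage names below paraphrase the text; the state-machine framing is an illustrative sketch, not a formal model from the literature.

```python
# The five activity processes of the digital health innovation cycle,
# modeled as a wrap-around sequence (iteration returns to stage one).

STAGES = [
    "identify healthcare problem",
    "research",
    "develop digital solution",
    "evaluate solution",
    "implement in clinical practice",
]

def next_stage(current: str) -> str:
    """Return the stage that follows `current`, wrapping back to the
    first stage after implementation (the cycle is iterative)."""
    i = STAGES.index(current)
    return STAGES[(i + 1) % len(STAGES)]
```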
___________________________________________________________________________
Online doctor is a term that emerged during the 2000s, used by both the media and academics, to describe a generation of physicians and health practitioners who deliver healthcare, including drug prescription, over the internet.
Emergence of online doctors:
In the 2000s, many people came to treat the internet as a first, or at least a major, source of information and communication. Health advice is now the second-most popular topic, after pornography, that people search for on the internet.
With the advent of broadband and videoconferencing, many individuals have turned to online doctors to receive online consultations and purchase prescription drugs. Use of this technology has many advantages for both the doctor and the patient, including cost savings, convenience, accessibility, and improved privacy and communication.
In the US, a 2006 study found that searching for information on prescription or over-the-counter drugs was the fifth most popular search topic, and a 2004 study found that 4% of Americans had purchased prescription medications online.
A 2009 survey conducted by the Geneva-based Health On the Net Foundation found that one in ten Europeans buys medicines from websites and one-third claim to use online consultations.
In Germany, approximately seven million people buy from mail-order pharmacies, and mail-order sales account for approximately 8–10% of total pharmaceutical sales.
In 2008, the Royal Pharmaceutical Society of Great Britain reported that approximately two million people in Great Britain were regularly purchasing pharmaceuticals online (both with a prescription from registered online UK doctors and without prescriptions from other websites).
A recent survey commissioned by Pfizer, the Medicines and Healthcare products Regulatory Agency, RPSGB, the Patients Association and HEART UK found that 15% of the British adults asked had bought a prescription-only medicine online.
In developed countries, many online doctors prescribe so-called ‘lifestyle drugs’, such as for weight loss, hair loss or erectile dysfunction. The RPSGB has identified the most popular products prescribed online as Prozac (an antidepressant), Viagra (for erectile dysfunction), Valium (a tranquiliser), Ritalin (a psychostimulant), Serostim (a synthetic growth hormone) and Provigil (a psychostimulant).
A study in the USA has also shown that antibiotics are commonly available online without prescription.
Potential Harm:
Traditionalist critics of online doctors argue that an online doctor cannot provide proper examinations or diagnosis either by email or video call. Such consultations, they argue, will always be dangerous, with the potential for serious disease to be missed. There are also concerns that the absence of proximity leads to treatment by unqualified doctors or patients using false information to secure dangerous drugs.
Proponents argue there is little difference between an e-mail consultation and the sort of telephone assessment and advice that doctors regularly make out of hours or in circumstances where doctors cannot physically examine a patient (e.g., jungle medicine).
Laurence Buckman, chairman of the British Medical Association's GPs' committee, says that online consultations make life easier for doctors and patients when used properly: "Many GPs will be very happy with it and it could be useful. When it's a regular patient you know well, it follows on from telephone consulting. Voice is essential, vision is desirable. The problem comes when I don't know the patient."
Niall Dickson, chief executive of the General Medical Council, says: "We trust doctors to use their judgement to decide whether they should see a patient in person. Online consultations will be appropriate for some patients, whereas other patients will need a physical examination or may benefit from seeing their doctor in person."
Past and Future Developments
The first medical consulting website in the US was WebMD, founded in 1996 as Healthscape by Jim Clark (one of the founders of Netscape) and Pavan Nigam. Its website now carries information regarding health and health care, including a symptom checklist, pharmacy information, drug information, physician blogs on specific topics, and a place to store personal medical information.
As of February 2011, WebMD’s network of sites reaches an average of 86.4 million visitors per month and is the leading health portal in the United States.
Other popular US healthcare and medical consulting sites include the following:
- NIH.gov
- MSN Health
- Doctorspring
- MdLive
- Justdoc
- Yahoo! Health
- EverydayHealth
- WomensHealth.gov
- MayoClinic
- Doctor Vista
Many have experienced dramatic growth: Healthline, launched in 2005, grew 269% to 2.7 million average monthly unique visitors in Q1 2007, from 0.8 million in Q1 2006.
Niche consulting sites are also popular, including SeniorNet, which deals with age-related syndromes, and 4collegewomen.org and GirlsHealth.gov, which target young women.
Several American online doctor companies, including Sherpaa, MDlive, Teladoc, First Stop Health, American Well, WebDoc247, MeMD, and Ringadoc, provide consultations with doctors over the phone or the Internet. Prominent San Francisco-based venture capital firm Founders Fund called such services "extraordinarily fast" and predicted that they will "bring relief to thousands of people with immediate medical needs".
In the UK, e-med was the first online health site to offer both diagnosis and prescriptions to patients over the Internet; it was established in March 2000 by Dr. Julian Eden. In 2010, DrThom claimed that 100,000 patients had visited its site.
NHS Direct (currently NHS Choices) is the free health advice and information service provided by the National Health Service (NHS) for residents and visitors in the UK, with advice offered 24 hours a day via telephone and web contact.
Over 1.5 million patients visit the website every month. More recently, a number of online doctors have emerged in the country, firms such as Now Healthcare Group, Dr Fox Pharmacy, Push Doctor and Lloyds Pharmacy offer consultation and prescriptions via the Internet.
In Australia, HealthDirect is the free health advice and information service provided by the government, with advice offered 24 hours a day via telephone. Medicare began funding online consultations with specialists on 1 July 2011, which has seen a slow but steady increase in volumes.
New advances in digital information technology mean that in future online doctors and healthcare websites may offer advanced scanning and diagnostic services over the internet.
The Nuffield Council on Bioethics identifies such services as direct-to-consumer body imaging (such as CT and MRI scans) and personal genetic profiling for individual susceptibility to disease. Professor Sir Bruce Keogh, the medical director of the UK NHS, is drawing up plans to introduce online consultations via Skype and has said IT will "completely change the way [doctors] deliver medicine".
The concept is also gaining commercial traction, and some companies have begun to capitalize on it; Zocdoc, for example, provides an online doctor-booking service.
See also:
- e-Patient
- Health informatics
- mHealth
- Telehealth
- NHS Choices: the UK government's medical advice and treatment portal
- FSMB: Federation of State Medical Boards, the body of state medical boards that regulates online doctors in the US
- CQC: Care Quality Commission, the body that regulates online doctors in the UK
Assistive Technology
YouTube Video about Mackenzie's Voice: Living with Cerebral Palsy
YouTube Video about Assistive Technology in Action - Meet Elle
Pictured below: Assistive Devices and their Use
Assistive technology is an umbrella term that includes assistive, adaptive, and rehabilitative devices for people with disabilities while also including the process used in selecting, locating, and using them.
People who have disabilities often have difficulty performing activities of daily living (ADLs) independently, or even with assistance. ADLs are self-care activities that include toileting, mobility (ambulation), eating, bathing, dressing and grooming. Assistive technology can ameliorate the effects of disabilities that limit the ability to perform ADLs.
Assistive technology promotes greater independence by enabling people to perform tasks they were formerly unable to accomplish, or had great difficulty accomplishing, by providing enhancements to, or changing methods of interacting with, the technology needed to accomplish such tasks.
For example, wheelchairs provide independent mobility for those who cannot walk, while assistive eating devices can enable people who cannot feed themselves to do so.
Thanks to assistive technology, people with disabilities have the opportunity for a more positive and easygoing lifestyle, with an increase in "social participation" and "security and control," and a greater chance to "reduce institutional costs without significantly increasing household expenses."
Adaptive Technology:
The term adaptive technology is often used as the synonym for assistive technology; however, they are different terms. Assistive technology refers to "any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve functional capabilities of individuals with disabilities", while adaptive technology covers items that are specifically designed for persons with disabilities and would seldom be used by non-disabled persons.
In other words, "assistive technology is any object or system that increases or maintains the capabilities of people with disabilities," while adaptive technology is "any object or system that is specifically designed for the purpose of increasing or maintaining the capabilities of people with disabilities." Consequently, adaptive technology is a subset of assistive technology. Adaptive technology often refers specifically to electronic and information technology access.
Mobility impairments:
Wheelchairs:
Main article: Wheelchair
Wheelchairs are devices that can be manually or electrically propelled, that include a seating system, and that are designed to substitute for the normal mobility most people enjoy.
Wheelchairs and other mobility devices allow people to perform mobility-related activities of daily living which include feeding, toileting, dressing, grooming, and bathing. The devices come in a number of variations where they can be propelled either by hand or by motors where the occupant uses electrical controls to manage motors and seating control actuators through a joystick, sip-and-puff control, or other input devices.
Often there are handles behind the seat for someone else to do the pushing or input devices for caregivers. Wheelchairs are used by people for whom walking is difficult or impossible due to illness, injury, or disability. People with both sitting and walking disability often need to use a wheelchair or walker.
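The input devices mentioned above ultimately translate a user's signal into drive commands. A hedged sketch of how a sip-and-puff input might map to commands follows; the signal names and command set are invented, and real controllers involve calibrated pressure thresholds, proportional control, and safety interlocks.

```python
# Illustrative sip-and-puff command mapping. Signal classification
# (pressure thresholds, debouncing) is assumed to happen upstream.

COMMANDS = {
    "hard_puff": "forward",
    "hard_sip": "reverse",
    "soft_puff": "turn_right",
    "soft_sip": "turn_left",
}

def interpret(signal: str) -> str:
    """Translate a classified breath signal into a motor command.
    Anything unrecognized safely maps to "stop"."""
    return COMMANDS.get(signal, "stop")
```

Defaulting to "stop" on unrecognized input reflects the fail-safe design expected of mobility devices.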
Transfer devices:
Main article: Patient lift
Patient transfer devices generally allow patients with impaired mobility to be moved by caregivers between beds, wheelchairs, commodes, toilets, chairs, stretchers, shower benches, automobiles, swimming pools, and other patient support systems (e.g., radiology, surgical, or examining tables).
The most common devices are patient lifts (for vertical transfer), transfer benches, stretchers or convertible chairs (for lateral, supine transfer), sit-to-stand lifts (for moving patients from one seated position to another, e.g., from a wheelchair to a commode), air-bearing inflatable mattresses (for supine transfer, e.g., from a gurney to an operating-room table), and sliding boards (usually used for transfer from a bed to a wheelchair).
Highly dependent patients who cannot assist their caregiver in moving them often require a patient lift (a floor- or ceiling-suspended sling lift), which, though invented in 1955 and in common use since the early 1960s, is still considered the state-of-the-art transfer device by OSHA and the American Nurses Association.
Walkers:
Main article: Walker
A walker, walking frame, or rollator is a tool for disabled people who need additional support to maintain balance or stability while walking. It consists of a frame that is about waist high, approximately twelve inches deep, and slightly wider than the user. Walkers are also available in other sizes, such as for children or for heavier users. Modern walkers are height-adjustable.
The front two legs of the walker may or may not have wheels attached depending on the strength and abilities of the person using it. It is also common to see caster wheels or glides on the back legs of a walker with wheels on the front.
Prosthesis:
Main article: Prosthesis
A prosthesis, prosthetic, or prosthetic limb is a device that replaces a missing body part. It is part of the field of biomechatronics, the science of using mechanical devices with human muscle, skeleton, and nervous systems to assist or enhance motor control lost by trauma, disease, or defect. Prostheses are typically used to replace parts lost by injury (traumatic) or missing from birth (congenital) or to supplement defective body parts.
Inside the body, artificial heart valves are in common use with artificial hearts and lungs seeing less common use but under active technology development. Other medical devices and aids that can be considered prosthetics include hearing aids, artificial eyes, palatal obturator, gastric bands, and dentures.
Prostheses are specifically not orthoses, although in certain circumstances a prosthesis may end up providing some or all of the same functional benefits as an orthosis.
Prostheses are technically the complete finished item. For instance, a C-Leg knee alone is not a prosthesis, but only a prosthetic component. The complete prosthesis would consist of the attachment system to the residual limb — usually a "socket", and all the attachment hardware components all the way down to and including the terminal device. Keep this in mind as nomenclature is often interchanged.
The terms "prosthetic" and "orthotic" are adjectives used to describe devices such as a prosthetic knee. The terms "prosthetics" and "orthotics" are used to describe the respective allied health fields.
Visual impairments:
Main article: Blindness § Management
Many people with serious visual impairments live independently, using a wide range of tools and techniques. Examples of assistive technology for visual impairment include screen readers, screen magnifiers, Braille embossers, desktop video magnifiers, and voice recorders.
Screen readers:
Main article: Screen reader
Screen readers are used to help the visually impaired to easily access electronic information. These software programs run on a computer in order to convey the displayed information through voice (text-to-speech) or braille (refreshable braille displays) in combination with magnification for low vision users in some cases. There are a variety of platforms and applications available for a variety of costs with differing feature sets.
One example of a screen reader is Apple VoiceOver. This software is provided free of charge on all Apple devices. VoiceOver includes the option to magnify the screen, control the keyboard, and provide verbal descriptions of what is happening on the screen. There are thirty languages to select from. It can also read aloud file content, as well as web pages, e-mail messages, and word processing files.
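The core loop of a screen reader can be sketched as a traversal of an accessibility tree, speaking a description of each element. Real screen readers query the platform accessibility API; in this illustrative sketch a nested dict stands in for that tree, and the "speech" is just a string.

```python
def describe(node):
    """Return the utterance a screen reader might speak for one element."""
    role = node.get("role", "group")
    name = node.get("name", "")
    return f"{name}, {role}".strip(", ")

def read_tree(node):
    """Depth-first traversal, yielding one utterance per element."""
    utterances = [describe(node)]
    for child in node.get("children", []):
        utterances.extend(read_tree(child))
    return utterances

# Hypothetical UI tree standing in for what an accessibility API reports.
ui = {
    "role": "window", "name": "Mail",
    "children": [
        {"role": "button", "name": "Compose"},
        {"role": "list", "name": "Inbox",
         "children": [{"role": "listitem", "name": "Meeting at 3pm"}]},
    ],
}

for line in read_tree(ui):
    print(line)
```

A real implementation would pass each utterance to a text-to-speech engine or a refreshable braille display instead of printing it.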
Braille and braille embossers:
Main article: Braille
Braille is a system of raised dots formed into units called braille cells. A full braille cell is made up of six dots, with two parallel rows of three dots, but other combinations and quantities of dots represent other letters, numbers, punctuation marks, or words. People can then use their fingers to read the code of raised dots.
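The six-dot cell maps cleanly onto the Unicode braille block (U+2800), where dot n sets bit n-1 of the character code. This sketch encodes lowercase letters from their standard dot numbers; it covers only basic letters, not contractions or punctuation.

```python
# Dot numbers for a-j; k-t add dot 3; u-z (except w) add dots 3 and 6.
DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5},
    "i": {2, 4}, "j": {2, 4, 5},
}
for i, ch in enumerate("klmnopqrst"):
    DOTS[ch] = DOTS["abcdefghij"[i]] | {3}
for i, ch in enumerate("uvxyz"):
    DOTS[ch] = DOTS["abcde"[i]] | {3, 6}
DOTS["w"] = DOTS["j"] | {6}  # w was added to braille later and breaks the pattern

def to_braille(text):
    """Encode lowercase letters as Unicode braille cells (spaces kept blank)."""
    cells = []
    for ch in text:
        if ch == " ":
            cells.append("\u2800")  # blank cell
        else:
            bits = sum(1 << (d - 1) for d in DOTS[ch])  # dot n -> bit n-1
            cells.append(chr(0x2800 + bits))
    return "".join(cells)

print(to_braille("hello"))  # ⠓⠑⠇⠇⠕
```

A braille embosser driver would translate the same cell data into pin or hammer positions rather than Unicode characters.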
A braille embosser is, simply put, a printer for braille. Instead of a standard printer adding ink onto a page, the braille embosser imprints the raised dots of braille onto a page. Some braille embossers combine both braille and ink so the documents can be read with either sight or touch.
Refreshable braille display:
Main article: Refreshable braille display
A refreshable braille display or braille terminal is an electro-mechanical device for displaying braille characters, usually by means of round-tipped pins raised through holes in a flat surface. Computer users who cannot use a computer monitor use it to read a braille output version of the displayed text.
Desktop video magnifier:
Main article: Video magnifier
Desktop video magnifiers are electronic devices that use a camera and a display screen to perform digital magnification of printed materials. They enlarge printed pages for those with low vision. A camera connects to a monitor that displays real-time images, and the user can control settings such as magnification, focus, contrast, underlining, highlighting, and other screen preferences. They come in a variety of sizes and styles; some are small and portable with handheld cameras, while others are much larger and mounted on a fixed stand.
Screen magnification software:
Main article: Screen magnifier
A screen magnifier is software that interfaces with a computer's graphical output to present enlarged screen content. It allows users to enlarge the texts and graphics on their computer screens for easier viewing.
Similar to desktop video magnifiers, this technology assists people with low vision. After the user loads the software into their computer's memory, it serves as a kind of "computer magnifying glass." Wherever the computer cursor moves, it enlarges the area around it. This allows greater computer accessibility for a wide range of visual abilities.
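The "computer magnifying glass" behavior can be sketched as follows: take the region of the screen around the cursor and enlarge it by an integer factor with nearest-neighbor scaling (each cell repeated zoom times in both directions). The character grid below is an illustrative stand-in for a framebuffer.

```python
def magnify(screen, cursor_row, cursor_col, radius=1, zoom=2):
    """Enlarge the (2*radius+1)-square region around the cursor."""
    rows, cols = len(screen), len(screen[0])
    top = max(0, cursor_row - radius)
    left = max(0, cursor_col - radius)
    bottom = min(rows, cursor_row + radius + 1)
    right = min(cols, cursor_col + radius + 1)
    region = [row[left:right] for row in screen[top:bottom]]
    enlarged = []
    for row in region:
        # Nearest-neighbor: repeat each cell horizontally, each row vertically.
        stretched = [cell for cell in row for _ in range(zoom)]
        enlarged.extend([stretched] * zoom)
    return enlarged

screen = [list("abcd"), list("efgh"), list("ijkl")]
for row in magnify(screen, 1, 1):
    print("".join(row))
```

Production magnifiers work on pixels and add smoothing filters, but the follow-the-cursor logic is the same.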
Large-print and tactile keyboards:
A large-print keyboard has large letters printed on the keys. On such keyboards, round buttons at the top may control software that can magnify the screen (zoom in), change the background color of the screen, or make the mouse cursor larger. "Bump dots" on the keys, sometimes installed by the organization using the keyboards, help the user find the right keys in a tactile way.
Navigation Assistance
Assistive technology for navigation has expanded rapidly in the engineering literature: the IEEE Xplore database lists over 7,500 engineering articles on assistive technologies and visual impairment in the past 25 years, and over 1,300 articles on solving the problem of navigation for people who are blind or visually impaired.
As well, over 600 articles on augmented reality and visual impairment have appeared in the engineering literature since 2000. Most of these articles were published within the past 5 years, and the number of articles in this area is increasing every year.
GPS, accelerometers, gyroscopes, and cameras can pinpoint the exact location of the user and provide information on what’s in the immediate vicinity, and assistance in getting to a destination.
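The distance-and-heading computation underlying such GPS guidance is standard geometry: great-circle distance (haversine formula) and initial compass bearing from the user's position to the destination. The sketch below uses illustrative coordinates.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (0 = north, 90 = east) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

# One degree of longitude on the equator is roughly 111 km due east.
print(round(haversine_m(0, 0, 0, 1)), "m at bearing", bearing_deg(0, 0, 0, 1))
```

A navigation aid would feed the bearing to a speech or haptic interface ("destination 110 meters ahead, slightly left") and recompute as the GPS fix updates.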
Wearable Technology:
Main article: Wearable technology
Wearable technology comprises smart electronic devices that can be worn on the body as implants or accessories. New technologies are exploring how the visually impaired can receive visual information through wearable devices.
Some wearable devices for visual impairment include:
Personal emergency response systems:
Main article: Telecare
Personal emergency response systems (PERS), or Telecare (UK term), are a particular sort of assistive technology that use electronic sensors connected to an alarm system to help caregivers manage risk and help vulnerable people stay independent at home longer.
An example would be the systems being put in place for senior people such as fall detectors, thermometers (for hypothermia risk), flooding and unlit gas sensors (for people with mild dementia). Notably, these alerts can be customized to the particular person's risks. When the alert is triggered, a message is sent to a caregiver or contact center who can respond appropriately.
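A fall detector of the kind mentioned above can be reduced to a simple rule over accelerometer readings: a spike well above 1 g (the impact) followed by a period of near-stillness raises an alert. The thresholds and the alert format below are illustrative, not taken from any real product.

```python
def detect_fall(samples_g, impact_g=2.5, still_g=0.2):
    """samples_g: acceleration magnitudes in g, one per time step.
    Return the index of a suspected fall (impact then stillness), or None."""
    for i, a in enumerate(samples_g):
        if a >= impact_g:
            after = samples_g[i + 1:i + 4]
            # At rest, magnitude sits near 1 g (gravity only).
            if after and all(abs(x - 1.0) <= still_g for x in after):
                return i
    return None

def alert(contact, event):
    """Stand-in for the message sent to a caregiver or contact center."""
    return f"ALERT to {contact}: {event}"

trace = [1.0, 1.1, 0.9, 3.2, 1.0, 1.0, 1.05]  # impact at index 3, then still
idx = detect_fall(trace)
if idx is not None:
    print(alert("care center", f"possible fall at sample {idx}"))
```

Commercial detectors combine several such signals (orientation, barometric height change) to cut false alarms, but the trigger-then-notify structure is the same.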
Accessibility software:
Main article: Computer accessibility
In human–computer interaction, computer accessibility (also known as accessible computing) refers to the accessibility of a computer system to all people, regardless of disability or severity of impairment; examples include web accessibility guidelines.
Another approach is for the user to present a token to the computer terminal, such as a smart card, that carries configuration information to adjust the computer's speed, text size, etc. to their particular needs. This is useful where users want to access public computer-based terminals in libraries, ATMs, information kiosks, etc.
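The token idea amounts to merging a small preference record from the card over the terminal's defaults, ignoring fields the terminal does not understand. The field names below are illustrative, not the actual data elements of any card standard.

```python
# Terminal defaults, applied when no card is present.
DEFAULTS = {"text_size": 12, "speech_output": False, "extra_time_s": 0}

def apply_token(token):
    """Merge a card's preference record over the terminal defaults.
    Unknown fields are ignored so old terminals tolerate new cards."""
    settings = dict(DEFAULTS)
    for key, value in token.items():
        if key in settings:
            settings[key] = value
    return settings

# Hypothetical record read from a user's smart card.
card = {"text_size": 24, "speech_output": True, "favourite_colour": "blue"}
print(apply_token(card))  # favourite_colour is silently dropped
```

Ignoring unrecognized fields is what lets a single card work across terminals from different suppliers, which is the interoperability problem the standard addresses.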
The concept is encompassed by CEN EN 1332-4, Identification Card Systems – Man-Machine Interface. The development of this standard has been supported in Europe by SNAPI, and it has been incorporated into the Lasseo specifications, though with limited uptake due to the lack of interest from public computer terminal suppliers.
Hearing impairments:
Main article: Assistive Technology for Deaf and Hard of Hearing
People who are deaf or hard of hearing have a more difficult time communicating and perceiving information than hearing individuals. Thus, these individuals often rely on visual and tactile media for receiving and communicating information.
The use of assistive technology and devices provides this community with various solutions to their problems by providing higher sound (for those who are hard of hearing), tactile feedback, visual cues and improved technology access.
Individuals who are deaf or hard of hearing utilize a variety of assistive technologies that provide them with improved access to information in numerous environments. Most devices either provide amplified sound or alternate ways to access information through vision and/or vibration.
These technologies can be grouped into three general categories: hearing technology, alerting devices, and communication support.
Hearing aids:
Main article: Hearing aid
A hearing aid or deaf aid is an electroacoustic device which is designed to amplify sound for the wearer, usually with the aim of making speech more intelligible, and to correct impaired hearing as measured by audiometry.
This type of assistive technology helps people with hearing loss participate more fully in their communities by allowing them to hear more clearly. They amplify any and all sound waves through use of a microphone, amplifier, and speaker. There is a wide variety of hearing aids available, including digital, in-the-ear, in-the-canal, behind-the-ear, and on-the-body aids.
Assistive listening devices:
Main article: Assistive listening device
Assistive listening devices include FM, infrared, and loop assistive listening devices. This type of technology allows people with hearing difficulties to focus on a speaker or subject by getting rid of extra background noises and distractions, making places like auditoriums, classrooms, and meetings much easier to participate in.
The assistive listening device usually uses a microphone to capture an audio source near to its origin and broadcast it wirelessly over an FM (Frequency Modulation) transmission, IR (Infra Red) transmission, IL (Induction Loop) transmission, or other transmission methods.
The person who is listening may use an FM/IR/IL Receiver to tune into the signal and listen at his/her preferred volume.
Amplified telephone equipment:
Main article: Telecommunications device for the deaf#Other devices for the deaf or hard of hearing
This type of assistive technology allows users to amplify the volume and clarity of their phone calls so that they can easily partake in this medium of communication. There are also options to adjust the frequency and tone of a call to suit their individual hearing needs.
Additionally, there is a wide variety of amplified telephones to choose from, with different degrees of amplification. For example, a phone with 26 to 40 decibels of gain is generally sufficient for mild hearing loss, while a phone with 71 to 90 decibels is better for more severe hearing loss.
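The decibel figures above translate into amplitude gain by the standard relation gain = 10^(dB/20); 40 dB, for instance, multiplies signal amplitude by 100. The sketch below pairs that conversion with the dB ranges from the text; the category labels are guidance mirroring the text, not a clinical rule.

```python
def db_to_amplitude_ratio(db):
    """Convert a decibel gain figure to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def suggested_amplification(loss):
    """Map a rough hearing-loss label to the dB range mentioned above."""
    ranges = {"mild": (26, 40), "severe": (71, 90)}
    return ranges[loss]

lo, hi = suggested_amplification("mild")
print(f"mild loss: {lo}-{hi} dB, up to {db_to_amplitude_ratio(hi):.0f}x amplitude")
```

The same dB-to-ratio relation governs hearing aid fitting generally, which is why amplification specs are quoted in decibels rather than raw multipliers.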
Augmentative and alternative communication:
Main article: Augmentative and alternative communication
Augmentative and alternative communication (AAC) is an umbrella term that encompasses methods of communication for those with impairments or restrictions on the production or comprehension of spoken or written language.
AAC systems are extremely diverse and depend on the capabilities of the user. They may be as basic as pictures on a board that are used to request food, drink, or other care; or they can be advanced speech generating devices, based on speech synthesis, that are capable of storing hundreds of phrases and words.
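The simplest AAC pattern described above, a picture board, can be modeled as a mapping from symbols to stored utterances, with a speak() hook where a speech-generating device would invoke its synthesizer. The symbols and phrases here are illustrative.

```python
# Hypothetical picture board: each touchable symbol maps to a stored phrase.
BOARD = {
    "cup": "I would like a drink, please.",
    "plate": "I am hungry.",
    "bed": "I am tired and want to rest.",
}

def select(symbol, speak=print):
    """Look up the phrase for a touched symbol and 'speak' it.
    speak is a stand-in for a text-to-speech call on a real device."""
    phrase = BOARD.get(symbol, "I don't have a phrase for that yet.")
    speak(phrase)
    return phrase

select("cup")  # I would like a drink, please.
```

Advanced speech-generating devices layer word prediction, grammar, and hundreds of stored phrases on top of this same lookup-then-speak core.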
Cognitive impairments:
Main article: Cognitive orthotics
Assistive Technology for Cognition (ATC) is the use of technology (usually high tech) to augment and assist cognitive processes such as attention, memory, self-regulation, navigation, emotion recognition and management, planning, and sequencing activity.
Systematic reviews of the field have found that the number of ATC devices is growing rapidly but that most have focused on memory and planning, that there is emerging evidence for efficacy, and that considerable scope exists to develop new ATC.
Examples of ATC include: NeuroPage, which prompts users about meetings; Wakamaru, which provides companionship, reminds users to take medicine, and calls for help if something is wrong; and telephone reassurance systems.
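A NeuroPage-style prompting service reduces to checking a schedule against the current time and emitting any reminders that are due and not yet sent. This sketch keeps the schedule in memory and prints rather than paging; the schedule entries are illustrative.

```python
from datetime import datetime

# Hypothetical daily schedule; a real service would store this per user.
SCHEDULE = [
    {"time": "09:00", "message": "Take morning medication"},
    {"time": "14:30", "message": "Physiotherapy appointment"},
]

def due_prompts(now, schedule=SCHEDULE, sent=None):
    """Return messages whose time has passed and which were not yet sent."""
    sent = set() if sent is None else sent
    current = now.strftime("%H:%M")  # zero-padded, so string compare works
    out = []
    for item in schedule:
        if item["time"] <= current and item["message"] not in sent:
            out.append(item["message"])
            sent.add(item["message"])
    return out

print(due_prompts(datetime(2024, 1, 1, 10, 0)))  # morning medication only
```

A production system would run this check on a timer and deliver prompts by pager, SMS, or a home speaker, with the `sent` set persisted per day.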
Memory aids:
Memory aids are any type of assistive technology that helps a user learn and remember certain information. Many memory aids are used for cognitive impairments such as reading, writing, or organizational difficulties.
For example, a Smartpen records handwritten notes by creating both a digital copy and an audio recording of the text. Users simply tap certain parts of their notes, the pen saves it, and reads it back to them. From there, the user can also download their notes onto a computer for increased accessibility. Digital voice recorders are also used to record "in the moment" information for fast and easy recall at a later time.
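The smartpen behavior comes down to linking each written note to a timestamp in the audio recording, so tapping a note replays the audio from that moment, and scrubbing the audio highlights the note being written then. A binary search over the sorted timestamps does the lookup; the note data is illustrative.

```python
import bisect

# (seconds_into_recording, note_text), kept sorted by time as notes are written.
notes = [(0, "Agenda"), (42, "Budget: +5%"), (95, "Action: email Sam")]
times = [t for t, _ in notes]

def audio_position_for_tap(note_index):
    """Tapping a written note returns where to start audio playback."""
    return notes[note_index][0]

def note_for_audio_time(seconds):
    """Scrubbing the audio returns the note being written at that moment."""
    i = bisect.bisect_right(times, seconds) - 1  # latest note at or before t
    return notes[max(i, 0)][1]

print(note_for_audio_time(60))  # Budget: +5%
```

A real pen records pen-stroke coordinates rather than whole note strings, but each stroke carries the same kind of timestamp link.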
Educational software:
Main article: Educational software
Educational software is software that assists people with reading, learning, comprehension, and organizational difficulties. Any accommodation software such as text readers, notetakers, text enlargers, organization tools, word predictions, and talking word processors falls under the category of educational software.
Eating Impairments:
Main article: Assistive eating devices
Adaptive eating devices include items commonly used by the general population, such as spoons, forks, and plates. However, they become assistive technology when they are modified to accommodate the needs of people who have difficulty using standard cutlery due to a disabling condition.
Common modifications include increasing the size of the utensil handle to make it easier to grasp. Plates and bowls may have a guard on the edge that stops food being pushed off of the dish when it is being scooped. More sophisticated equipment for eating includes manual and powered feeding devices. These devices support those who have little or no hand and arm function and enable them to eat independently.
In Sports:
Assistive technology in sports is an area of technology design that is growing. Assistive technology is the array of new devices created to enable sports enthusiasts who have disabilities to play. Assistive technology may be used in adaptive sports, where an existing sport is modified to enable players with a disability to participate; or, assistive technology may be used to invent completely new sports with athletes with disabilities exclusively in mind.
An increasing number of people with disabilities are participating in sports, leading to the development of new assistive technology. Assistive technology devices can be simple, or "low-tech", or they may use highly advanced technology. "Low-tech" devices can include velcro gloves and adaptive bands and tubes. "High-tech" devices can include all-terrain wheelchairs and adaptive bicycles.
Accordingly, assistive technology can be found in sports ranging from local community recreation to the elite Paralympic Games. More complex assistive technology devices have been developed over time, and as a result, sports for people with disabilities "have changed from being a clinical therapeutic tool to an increasingly competition-oriented activity".
In Education:
In the United States there are two major pieces of legislation that govern the use of assistive technology within the school system.
The first is Section 504 of the Rehabilitation Act of 1973 and the second being the Individuals with Disabilities Education Act (IDEA) which was first enacted in 1975 under the name The Education for All Handicapped Children Act.
In 2004, during the reauthorization period for IDEA, the National Instructional Material Access Center (NIMAC) was created which provided a repository of accessible text including publisher's textbooks to students with a qualifying disability. Files provided are in XML format and used as a starting platform for braille readers, screen readers, and other digital text software.
IDEA defines assistive technology as follows: "any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of a child with a disability. (B) Exception.--The term does not include a medical device that is surgically implanted, or the replacement of such device."
Assistive technology in this area is broken down into low, mid, and high tech categories. Low tech encompasses equipment that is often low cost and does not require batteries or charging. Examples include adapted paper and pencil grips for writing, or masks and color overlays for reading.
Mid tech supports used in the school setting include the use of handheld spelling dictionaries and portable word processors used to keyboard writing. High tech supports involve the use of tablet devices and computers with accompanying software. Software supports for writing include the use of auditory feedback while keyboarding, word prediction for spelling, and speech to text.
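Word prediction of the kind mentioned above can be sketched as a prefix match over a frequency-ranked vocabulary: as the student types, the software offers the most common completions. The tiny vocabulary and counts below are illustrative.

```python
# Hypothetical word frequencies; real predictors learn these from corpora
# and from the user's own writing.
VOCAB = {"because": 900, "before": 800, "become": 700, "believe": 600,
         "best": 500, "behind": 400}

def predict(prefix, k=3):
    """Return the top-k vocabulary words starting with prefix, by frequency."""
    matches = [w for w in VOCAB if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -VOCAB[w])[:k]

print(predict("be"))   # ['because', 'before', 'become']
print(predict("bef"))  # ['before']
```

Selecting a suggestion both completes the word and, in many products, feeds back into the frequency counts so the user's common words rise to the top.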
Supports for reading include the use of text to speech (TTS) software and font modification via access to digital text. Limited supports are available for math instruction and mostly consist of grid based software to allow younger students to keyboard equations and auditory feedback of more complex equations using MathML and Daisy.
Computer Accessibility:
Main article: Computer accessibility
One of the largest problems affecting people with disabilities is discomfort with prostheses. In one experiment performed in Massachusetts, 20 people wore various sensors attached to their arms; the subjects tried different arm exercises, and the sensors recorded their movements. All of the data helped engineers develop new engineering concepts for prosthetics.
Assistive technology may attempt to improve the ergonomics of the devices themselves such as Dvorak and other alternative keyboard layouts, which offer more ergonomic layouts of the keys.
Assistive technology devices have been created to enable people with disabilities to use modern touch screen mobile computers such as the iPad, iPhone and iPod touch. The Pererro is a plug and play adapter for iOS devices which uses the built in Apple VoiceOver feature in combination with a basic switch.
This brings touch screen technology to those who were previously unable to use it. With the release of iOS 7, Apple introduced the ability to navigate apps using switch control. Switch access can be activated through an external Bluetooth-connected switch, a single touch of the screen, or right and left head turns using the device's camera.
Additional accessibility features include the use of Assistive Touch which allows a user to access multi-touch gestures through pre-programmed onscreen buttons.
For users with physical disabilities, a large variety of switches are available and customizable to the user's needs, varying in size, shape, or the amount of pressure required for activation. A switch may be placed near any area of the body that has consistent and reliable mobility and is less subject to fatigue. Common sites include the hands, head, and feet.
Eye gaze and head mouse systems can also be used as alternatives to mouse navigation. A user may utilize single or multiple switch sites; the process often involves scanning through items on a screen and activating the switch once the desired item is highlighted.
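The scanning access method just described can be simulated directly: highlighting steps through the on-screen items at a fixed pace, and a single switch press selects whichever item is highlighted at that moment. This sketch replaces the real-time timer with an explicit count of scan steps before the press.

```python
def scan_select(items, steps_before_press):
    """Return the item highlighted when the switch is pressed.
    Highlighting advances one item per scan step and wraps around."""
    highlighted = steps_before_press % len(items)
    return items[highlighted]

menu = ["Yes", "No", "Help", "Back"]
print(scan_select(menu, 2))  # Help
print(scan_select(menu, 5))  # wraps around: No
```

Scan rate is the key tuning parameter in practice: too fast and the user cannot press in time, too slow and every selection becomes tedious, which is why real switch-control software makes it adjustable.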
Home Automation:
The form of home automation called assistive domotics focuses on making it possible for elderly and disabled people to live independently. Home automation is becoming a viable option for the elderly and disabled who would prefer to stay in their own homes rather than move to a healthcare facility.
This field uses much of the same technology and equipment as home automation for security, entertainment, and energy conservation but tailors it towards elderly and disabled users. For example, automated prompts and reminders utilize motion sensors and pre-recorded audio messages; an automated prompt in the kitchen may remind the resident to turn off the oven, and one by the front door may remind the resident to lock the door.
Impacts:
Overall, assistive technology aims to allow people with disabilities to "participate more fully in all aspects of life (home, school, and community)" and increases their opportunities for "education, social interactions, and potential for meaningful employment". It creates greater independence and control for disabled individuals.
For example, in one study of 1,342 infants, toddlers and preschoolers, all with some kind of developmental, physical, sensory, or cognitive disability, the use of assistive technology created improvements in child development.
These included improvements in "cognitive, social, communication, literacy, motor, adaptive, and increases in engagement in learning activities". Additionally, it has been found to lighten caregiver load. Both family and professional caregivers benefit from assistive technology.
Through its use, the time that a family member or friend would need to care for a patient significantly decreases. However, studies show that care time for a professional caregiver increases when assistive technology is used. Nonetheless, their workload is significantly easier, as the assistive technology frees them from having to perform certain tasks.
In other words, "assistive technology is any object or system that increases or maintains the capabilities of people with disabilities," while adaptive technology is "any object or system that is specifically designed for the purpose of increasing or maintaining the capabilities of people with disabilities." Consequently, adaptive technology is a subset of assistive technology. Adaptive technology often refers specifically to electronic and information technology access.
Mobility impairments:
Wheelchairs:
Main article: Wheelchair
Wheelchairs are devices that can be manually propelled or electrically propelled, and that include a seating system and are designed to be a substitute for the normal mobility that most people enjoy.
Wheelchairs and other mobility devices allow people to perform mobility-related activities of daily living which include feeding, toileting, dressing, grooming, and bathing. The devices come in a number of variations where they can be propelled either by hand or by motors where the occupant uses electrical controls to manage motors and seating control actuators through a joystick, sip-and-puff control, or other input devices.
Often there are handles behind the seat for someone else to do the pushing or input devices for caregivers. Wheelchairs are used by people for whom walking is difficult or impossible due to illness, injury, or disability. People with both sitting and walking disability often need to use a wheelchair or walker.
Transfer devices:
Main article: Patient lift
Patient transfer devices generally allow patients with impaired mobility to be moved by caregivers between beds, wheelchairs, commodes, toilets, chairs, stretchers, shower benches, automobiles, swimming pools, and other patient support systems (i.e., radiology, surgical, or examining tables).
The most common devices are Patient lifts (for vertical transfer), Transfer benches, stretcher or convertible chairs (for lateral, supine transfer), sit-to-stand lifts (for moving patients from one seated position to another i.e., from wheelchairs to commodes), air bearing inflatable mattresses (for supine transfer i.e., transfer from a gurney to an operating room table), and sliding boards (usually used for transfer from a bed to a wheelchair).
Highly dependent patients who cannot assist their caregiver in moving them often require a Patient lift (a floor or ceiling-suspended sling lift) which though invented in 1955 and in common use since the early 1960s is still considered the state-of-the-art transfer device by OSHA and the American Nursing Association.
Walkers:
Main article: Walker
A walker or walking frame or Rollator is a tool for disabled people who need additional support to maintain balance or stability while walking. It consists of a frame that is about waist high, approximately twelve inches deep and slightly wider than the user. Walkers are also available in other sizes, such as for children, or for heavy people. Modern walkers are height-adjustable.
The front two legs of the walker may or may not have wheels attached depending on the strength and abilities of the person using it. It is also common to see caster wheels or glides on the back legs of a walker with wheels on the front.
Prosthesis:
Main article: Prosthesis
A prosthesis, prosthetic, or prosthetic limb is a device that replaces a missing body part. It is part of the field of biomechatronics, the science of using mechanical devices with human muscle, skeleton, and nervous systems to assist or enhance motor control lost by trauma, disease, or defect. Prostheses are typically used to replace parts lost by injury (traumatic) or missing from birth (congenital) or to supplement defective body parts.
Inside the body, artificial heart valves are in common use with artificial hearts and lungs seeing less common use but under active technology development. Other medical devices and aids that can be considered prosthetics include hearing aids, artificial eyes, palatal obturator, gastric bands, and dentures.
Prostheses are specifically not orthoses, although given certain circumstances a prosthesis might end up performing some or all of the same functionary benefits as an orthosis.
Prostheses are technically the complete finished item. For instance, a C-Leg knee alone is not a prosthesis, but only a prosthetic component. The complete prosthesis would consist of the attachment system to the residual limb — usually a "socket", and all the attachment hardware components all the way down to and including the terminal device. Keep this in mind as nomenclature is often interchanged.
The terms "prosthetic" and "orthotic" are adjectives used to describe devices such as a prosthetic knee. The terms "prosthetics" and "orthotics" are used to describe the respective allied health fields.
Visual impairments:
Main article: Blindness § Management
Many people with serious visual impairments live independently, using a wide range of tools and techniques. Examples of assistive technology for visually impairment include screen readers, screen magnifiers, Braille embossers, desktop video magnifiers, and voice recorders.
Screen readers:
Main article: Screen reader
Screen readers are used to help the visually impaired to easily access electronic information. These software programs run on a computer in order to convey the displayed information through voice (text-to-speech) or braille (refreshable braille displays) in combination with magnification for low vision users in some cases. There are a variety of platforms and applications available for a variety of costs with differing feature sets.
One example of screen readers is Apple VoiceOver. This software is provided free of charge on all Apple devices. Apple VoiceOver includes the option to magnify the screen, control the keyboard, and provide verbal descriptions to describe what is happening on the screen. There are thirty languages to select from. It also has the capacity to read aloud file content, as well as web pages, E-mail messages, and word processing files.
Braille and braille embossers:
Main article: Braille
Braille is a system of raised dots formed into units called braille cells. A full braille cell is made up of six dots, with two parallel rows of three dots, but other combinations and quantities of dots represent other letters, numbers, punctuation marks, or words. People can then use their fingers to read the code of raised dots.
A braille embosser is, simply put, a printer for braille. Instead of a standard printer adding ink onto a page, the braille embosser imprints the raised dots of braille onto a page. Some braille embossers combine both braille and ink so the documents can be read with either sight or touch.
Refreshable braille display:
Main article: Refreshable braille display
A refreshable braille display or braille terminal is an electro-mechanical device for displaying braille characters, usually by means of round-tipped pins raised through holes in a flat surface. Computer users who cannot use a computer monitor use it to read a braille output version of the displayed text.
Desktop video magnifier:
Main article: Video magnifier
Desktop video magnifiers are electronic devices that use a camera and a display screen to perform digital magnification of printed materials. They enlarge printed pages for those with low vision. A camera connects to a monitor that displays real-time images, and the user can control settings such as magnification, focus, contrast, underlining, highlighting, and other screen preferences. They come in a variety of sizes and styles; some are small and portable with handheld cameras, while others are much larger and mounted on a fixed stand.
Screen magnification software:
Main article: Screen magnifier
A screen magnifier is software that interfaces with a computer's graphical output to present enlarged screen content. It allows users to enlarge the texts and graphics on their computer screens for easier viewing.
Similar to desktop video magnifiers, this technology assists people with low vision. After the user loads the software into their computer's memory, it serves as a kind of "computer magnifying glass." Wherever the computer cursor moves, it enlarges the area around it. This allows greater computer accessibility for a wide range of visual abilities.
Large-print and tactile keyboards:
A large-print keyboard has large letters printed on the keys. On the keyboard shown, the round buttons at the top control software which can magnify the screen (zoom in), change the background color of the screen, or make the mouse cursor on the screen larger. The "bump dots" on the keys, installed in this case by the organization using the keyboards, help the user find the right keys in a tactile way.
Navigation Assistance
Assistive technology for navigation has grown rapidly as a research area: the IEEE Xplore database lists over 7,500 engineering articles on assistive technologies and visual impairment published since 2000, including over 1,300 articles on solving the problem of navigation for people who are blind or visually impaired.
As well, over 600 articles on augmented reality and visual impairment have appeared in the engineering literature since 2000. Most of these articles were published within the past 5 years, and the number of articles in this area is increasing every year.
GPS, accelerometers, gyroscopes, and cameras can pinpoint the exact location of the user and provide information on what’s in the immediate vicinity, and assistance in getting to a destination.
Wearable Technology:
Main article: Wearable technology
Wearable technology consists of smart electronic devices that can be worn on the body as an implant or an accessory. New technologies are exploring how the visually impaired can receive visual information through wearable devices.
Researchers are exploring a variety of such wearable devices for visual impairment.
Personal emergency response systems:
Main article: Telecare
Personal emergency response systems (PERS), or Telecare (UK term), are a particular sort of assistive technology that use electronic sensors connected to an alarm system to help caregivers manage risk and help vulnerable people stay independent at home longer.
An example would be the systems being put in place for senior people such as fall detectors, thermometers (for hypothermia risk), flooding and unlit gas sensors (for people with mild dementia). Notably, these alerts can be customized to the particular person's risks. When the alert is triggered, a message is sent to a caregiver or contact center who can respond appropriately.
Accessibility software:
Main article: Computer accessibility
In human–computer interaction, computer accessibility (also known as accessible computing) refers to the accessibility of a computer system to all people, regardless of disability or severity of impairment; examples include web accessibility guidelines.
Another approach is for the user to present a token, such as a smart card, to the computer terminal; the token carries configuration information to adjust the computer's speed, text size, and other settings to the user's particular needs. This is useful where users want to access public computer-based terminals such as those in libraries, ATMs, and information kiosks.
The concept is encompassed by CEN EN 1332-4, Identification Card Systems - Man-Machine Interface. The development of this standard has been supported in Europe by SNAPI and has been incorporated into the Lasseo specifications, but with limited success due to lack of interest from public computer terminal suppliers.
Hearing impairments:
Main article: Assistive Technology for Deaf and Hard of Hearing
People who are deaf or hard of hearing have a more difficult time communicating and perceiving information than hearing individuals. Thus, these individuals often rely on visual and tactile mediums for receiving and communicating information.
The use of assistive technology and devices provides this community with various solutions to their problems by providing higher sound (for those who are hard of hearing), tactile feedback, visual cues and improved technology access.
Individuals who are deaf or hard of hearing utilize a variety of assistive technologies that provide them with improved access to information in numerous environments. Most devices either provide amplified sound or alternate ways to access information through vision and/or vibration.
These technologies can be grouped into three general categories: hearing technology, alerting devices, and communication support.
Hearing aids:
Main article: Hearing aid
A hearing aid or deaf aid is an electroacoustic device which is designed to amplify sound for the wearer, usually with the aim of making speech more intelligible, and to correct impaired hearing as measured by audiometry.
This type of assistive technology helps people with hearing loss participate more fully in their communities by allowing them to hear more clearly. They amplify any and all sound waves through use of a microphone, amplifier, and speaker. There is a wide variety of hearing aids available, including digital, in-the-ear, in-the-canal, behind-the-ear, and on-the-body aids.
Assistive listening devices:
Main article: Assistive listening device
Assistive listening devices include FM, infrared, and loop assistive listening devices. This type of technology allows people with hearing difficulties to focus on a speaker or subject by getting rid of extra background noises and distractions, making places like auditoriums, classrooms, and meetings much easier to participate in.
The assistive listening device usually uses a microphone to capture an audio source near to its origin and broadcast it wirelessly over an FM (Frequency Modulation) transmission, IR (Infra Red) transmission, IL (Induction Loop) transmission, or other transmission methods.
The listener uses an FM/IR/IL receiver to tune into the signal and listen at their preferred volume.
Amplified telephone equipment:
Main article: Telecommunications device for the deaf#Other devices for the deaf or hard of hearing
This type of assistive technology allows users to amplify the volume and clarity of their phone calls so that they can easily partake in this medium of communication. There are also options to adjust the frequency and tone of a call to suit their individual hearing needs.
Additionally, there is a wide variety of amplified telephones to choose from, with different degrees of amplification. For example, a phone with 26 to 40 decibels of gain is generally sufficient for mild hearing loss, while a phone with 71 to 90 decibels of gain is better for more severe hearing loss.
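Those decibel figures are easier to interpret with a quick calculation. Assuming the ratings refer to sound-pressure gain (the usual convention), a gain of N dB multiplies sound pressure by 10 to the power N/20; the specific phone ratings are taken from the text above, not from any manufacturer's specification.

```python
# Sketch of what a decibel gain figure means in practice: a phone
# rated at N dB of gain multiplies sound pressure by 10 ** (N / 20).

def pressure_ratio(gain_db):
    """Convert a gain in decibels to a sound-pressure multiplier."""
    return 10 ** (gain_db / 20)

# A 40 dB phone (upper end of the mild-loss range) multiplies
# sound pressure 100x; a 90 dB phone multiplies it ~31,623x.
mild = pressure_ratio(40)
severe = pressure_ratio(90)
```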
Augmentative and alternative communication:
Main article: Augmentative and alternative communication
Augmentative and alternative communication (AAC) is an umbrella term that encompasses methods of communication for those with impairments or restrictions on the production or comprehension of spoken or written language.
AAC systems are extremely diverse and depend on the capabilities of the user. They may be as basic as pictures on a board that are used to request food, drink, or other care; or they can be advanced speech generating devices, based on speech synthesis, that are capable of storing hundreds of phrases and words.
Cognitive impairments:
Main article: Cognitive orthotics
Assistive Technology for Cognition (ATC) is the use of technology (usually high tech) to augment and assist cognitive processes such as attention, memory, self-regulation, navigation, emotion recognition and management, planning, and sequencing activity.
Systematic reviews of the field have found that the number of ATC devices is growing rapidly but that research has focused mainly on memory and planning, that there is emerging evidence of efficacy, and that considerable scope exists to develop new ATC.
Examples of ATC include NeuroPage, which prompts users about meetings; Wakamaru, which provides companionship, reminds users to take medicine, and calls for help if something is wrong; and telephone reassurance systems.
Memory aids:
Memory aids are any type of assistive technology that helps a user learn and remember certain information. Many memory aids are used for cognitive impairments such as reading, writing, or organizational difficulties.
For example, a Smartpen records handwritten notes by creating both a digital copy and an audio recording of the text. Users simply tap a part of their notes and the pen plays the saved recording back to them. From there, users can also download their notes onto a computer for increased accessibility. Digital voice recorders are also used to record "in the moment" information for fast and easy recall at a later time.
Educational software:
Main article: Educational software
Educational software is software that assists people with reading, learning, comprehension, and organizational difficulties. Any accommodation software such as text readers, notetakers, text enlargers, organization tools, word prediction, and talking word processors falls under the category of educational software.
Eating Impairments:
Main article: Assistive eating devices
Adaptive eating devices include items commonly used by the general population, such as spoons, forks, and plates. However, they become assistive technology when they are modified to accommodate the needs of people who have difficulty using standard cutlery due to a disabling condition.
Common modifications include increasing the size of the utensil handle to make it easier to grasp. Plates and bowls may have a guard on the edge that stops food being pushed off of the dish when it is being scooped. More sophisticated equipment for eating includes manual and powered feeding devices. These devices support those who have little or no hand and arm function and enable them to eat independently.
In Sports:
Assistive technology in sports is an area of technology design that is growing. Assistive technology is the array of new devices created to enable sports enthusiasts who have disabilities to play. Assistive technology may be used in adaptive sports, where an existing sport is modified to enable players with a disability to participate; or, assistive technology may be used to invent completely new sports with athletes with disabilities exclusively in mind.
An increasing number of people with disabilities are participating in sports, leading to the development of new assistive technology. Assistive technology devices can be simple, or "low-tech", or they may use highly advanced technology. "Low-tech" devices can include velcro gloves and adaptive bands and tubes. "High-tech" devices can include all-terrain wheelchairs and adaptive bicycles.
Accordingly, assistive technology can be found in sports ranging from local community recreation to the elite Paralympic Games. More complex assistive technology devices have been developed over time, and as a result, sports for people with disabilities "have changed from being a clinical therapeutic tool to an increasingly competition-oriented activity".
In Education:
In the United States there are two major pieces of legislation that govern the use of assistive technology within the school system.
The first is Section 504 of the Rehabilitation Act of 1973, and the second is the Individuals with Disabilities Education Act (IDEA), which was first enacted in 1975 under the name The Education for All Handicapped Children Act.
In 2004, during the reauthorization period for IDEA, the National Instructional Material Access Center (NIMAC) was created to provide a repository of accessible text, including publishers' textbooks, to students with a qualifying disability. Files are provided in XML format and used as a starting platform for braille readers, screen readers, and other digital text software.
IDEA defines assistive technology as follows: "any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of a child with a disability. (B) Exception.--The term does not include a medical device that is surgically implanted, or the replacement of such device."
Assistive technology in this area is broken down into low-, mid-, and high-tech categories. Low tech encompasses equipment that is often low cost and does not require batteries or charging. Examples include adapted paper, pencil grips for writing, and masks and color overlays for reading.
Mid-tech supports used in the school setting include handheld spelling dictionaries and portable word processors for keyboarding written work. High-tech supports involve the use of tablet devices and computers with accompanying software. Software supports for writing include auditory feedback while keyboarding, word prediction for spelling, and speech-to-text.
Supports for reading include the use of text to speech (TTS) software and font modification via access to digital text. Limited supports are available for math instruction and mostly consist of grid based software to allow younger students to keyboard equations and auditory feedback of more complex equations using MathML and Daisy.
Computer Accessibility:
Main article: Computer accessibility
One of the largest problems that affect people with disabilities is discomfort with prostheses. An experiment performed in Massachusetts utilized 20 people with various sensors attached to their arms. The subjects tried different arm exercises, and the sensors recorded their movements. All of the data helped engineers develop new engineering concepts for prosthetics.
Assistive technology may attempt to improve the ergonomics of the devices themselves such as Dvorak and other alternative keyboard layouts, which offer more ergonomic layouts of the keys.
Assistive technology devices have been created to enable people with disabilities to use modern touch screen mobile computers such as the iPad, iPhone, and iPod touch. The Pererro is a plug-and-play adapter for iOS devices which uses the built-in Apple VoiceOver feature in combination with a basic switch.
This brings touch screen technology to those who were previously unable to use it. With the release of iOS 7, Apple introduced the ability to navigate apps using switch control. Switch access can be activated through an external Bluetooth-connected switch, a single touch of the screen, or right and left head turns using the device's camera.
Additional accessibility features include the use of Assistive Touch which allows a user to access multi-touch gestures through pre-programmed onscreen buttons.
For users with physical disabilities, a large variety of switches are available, customizable to the user's needs in size, shape, and the amount of pressure required for activation. A switch may be placed near any area of the body that has consistent and reliable mobility and is less subject to fatigue; common sites include the hands, head, and feet.
Eye-gaze and head-mouse systems can also be used as alternatives to mouse navigation. A user may utilize single or multiple switch sites; the process often involves scanning through items on a screen and activating the switch once the desired object is highlighted.
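The scanning process just described can be sketched as a simple loop. This is an illustrative sketch of single-switch scanning in general, not the behavior of any particular product; the menu items and the timing of the switch press are made-up examples.

```python
# Sketch of single-switch scanning: the interface highlights the
# on-screen items one at a time, wrapping around, and the user
# activates the switch when the item they want is highlighted.

def scan(items, switch_pressed_at):
    """Cycle through items; return the one highlighted when the
    switch fires (switch_pressed_at counts highlight steps)."""
    step = 0
    while True:
        highlighted = items[step % len(items)]
        if step == switch_pressed_at:
            return highlighted
        step += 1

menu = ["File", "Edit", "View", "Help"]
choice = scan(menu, switch_pressed_at=6)  # wraps past the end once
```

In real systems the step interval is a configurable dwell time, and scanning is often hierarchical (rows first, then items within the chosen row) to reduce the number of steps needed.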
Home Automation:
The form of home automation called assistive domotics focuses on making it possible for elderly and disabled people to live independently. Home automation is becoming a viable option for the elderly and disabled who would prefer to stay in their own homes rather than move to a healthcare facility.
This field uses much of the same technology and equipment as home automation for security, entertainment, and energy conservation but tailors it towards elderly and disabled users. For example, automated prompts and reminders utilize motion sensors and pre-recorded audio messages; an automated prompt in the kitchen may remind the resident to turn off the oven, and one by the front door may remind the resident to lock the door.
Impacts:
Overall, assistive technology aims to allow people with disabilities to "participate more fully in all aspects of life (home, school, and community)" and increases their opportunities for "education, social interactions, and potential for meaningful employment". It creates greater independence and control for disabled individuals.
For example, in one study of 1,342 infants, toddlers and preschoolers, all with some kind of developmental, physical, sensory, or cognitive disability, the use of assistive technology created improvements in child development.
These included improvements in "cognitive, social, communication, literacy, motor, adaptive, and increases in engagement in learning activities". Additionally, it has been found to lighten caregiver load. Both family and professional caregivers benefit from assistive technology.
Through its use, the time that a family member or friend would need to spend caring for a patient decreases significantly. However, studies show that care time for a professional caregiver increases when assistive technology is used. Nonetheless, their workload is significantly easier, as the assistive technology frees them from having to perform certain tasks.
See Also:
- Accessibility
- Augmentative and alternative communication
- Braille technology
- Design for All (in ICT)
- Durable medical equipment
- Matching Person & Technology Model
- OATS: Open Source Assistive Technology Software
- Occupational Therapy
- Transgenerational design
- Universal access to education
Health Technology in the United States
- YouTube Video of the Best Medical Breakthroughs in 2021 by the Cleveland Clinic
- YouTube Video of 5 Medical Breakthroughs Shaping the Future of Health by WebMD
- Click here for a List of FDA-approved Drugs covering Hematology/Oncology (Cancer) Approvals & Safety Notifications.
- Click here for a list of FDA-approved Novel Drugs for 2018
Health technology is defined by the World Health Organization as the "application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures and systems developed to solve a health problem and improve quality of lives". This includes pharmaceuticals, devices, procedures, and organizational systems used in health care, as well as computer-supported information systems. In the United States, these technologies involve standardized physical objects as well as traditional and designed social means and methods to treat or care for patients.
Medical Technology:
Medical technology, or "medtech", encompasses a wide range of healthcare products and is used to treat diseases and medical conditions affecting humans.
Such technologies (applications of medical science) are intended to improve the quality of healthcare delivered through earlier diagnosis, less invasive treatment options and reduction in hospital stays and rehabilitation times.
Recent advances in medical technology have also focused on cost reduction. Medical technology may broadly include medical devices, information technology, biotech, and healthcare services.
The impacts of medical technology involve social and ethical issues. For example, physicians can seek objective information from technology rather than read subjective patient reports.
A major driver of the sector's growth is the consumerization of medtech. Supported by the widespread availability of smartphones and tablets, providers are able to reach a large audience at low cost, a trend that stands to be consolidated as wearable technologies spread throughout the market.
In the five years leading up to the end of 2015, venture funding grew 200%, with US$11.7 billion flowing into health tech businesses from over 30,000 investors in the space.
The over-dependence on the use of technology in every step of the treatment process can result in severe economic burdens to families and individuals.
There has been an unprecedented rise in the utilization of automated clinical laboratories and CT scanners without any proof that they are necessary and beneficial to the individuals and families.
Education
Companies such as Surgical Theater provide new technology capable of capturing 3D virtual images of patients' brains for use as practice before operations. 3D printing allows medical companies to produce prototypes, created with artificial tissue, to practice on before an operation.
Medical virtual reality presents doctors with multiple surgical scenarios that could occur and allows them to practice and prepare for these situations. It also gives medical students hands-on experience of different procedures without the consequences of potential mistakes.
ORamaVR is one of the leading companies employing such medical virtual reality technologies to transform medical education (knowledge) and training (skills), with the aims of improving patient outcomes, reducing surgical errors and training time, and democratizing medical education and training.
Privacy of health data:
Phones that can track one's whereabouts, steps, and more can serve as medical devices, and medical devices have much the same capabilities as these phones. In the research article "Privacy Attitudes among Early Adopters of Emerging Health Technologies," Cynthia Cheung, Matthew Bietz, Kevin Patrick, and Cinnamon Bloss found that people were willing to share personal data for scientific advancement, although they still expressed uncertainty about who would have access to their data. People are naturally cautious about giving out sensitive personal information.
In 2015 the Medicare Access and CHIP Reauthorization Act (MACRA) was passed; taking effect in 2018, it pushes toward electronic health records. In "Health Information Technology: Integration, Patient Empowerment, and Security," K. Marvin presented multiple polls on people's views of different types of technology entering the medical field; most respondents answered "somewhat likely," and very few completely opposed the use of technology in medicine.
Marvin discusses the maintenance required to protect medical data and technology against cyber attacks as well as providing a proper data backup system for the information.
With the Patient Protection and Affordable Care Act (ACA), also known as Obamacare, and the spread of health information technology, health care is entering the digital era, and with this development its data needs to be protected.
Both health information and financial information, now made digital within the health industry, may become a larger target for cybercrime. Even with multiple types of safeguards, hackers somehow still find their way in, so the security in place needs to be constantly updated to prevent breaches.
Allied Professions:
The term medical technology may also refer to the duties performed by clinical laboratory professionals in various settings within the public and private sectors.
The work of these professionals encompasses clinical applications that include:
- chemistry,
- genetics,
- hematology,
- immunohematology (blood bank),
- immunology,
- microbiology,
- serology,
- urinalysis,
- and miscellaneous body fluid analysis.
Depending on location, educational level, and certifying body, these professionals may be referred to as biomedical scientists, medical laboratory scientists (MLS), medical technologists (MT), medical laboratory technologists, or medical laboratory technicians.
Technology Testing:
All medical equipment introduced commercially must meet both United States and international regulations. The devices are tested on their material, effects on the human body, all components including devices that have other devices included with them, and the mechanical aspects.
The Medical Device User Fee and Modernization Act of 2002 was created to speed up the FDA's approval process for medical technology by introducing sponsor user fees in exchange for faster review times with predetermined performance targets:
36 devices and apps were approved by the FDA in 2016.
Types of Technology:
Medical technology has evolved into smaller portable devices, e.g., smartphones, touchscreens, tablets, laptops, digital ink, voice and face recognition and more.
With this technology, the following innovations came to exist:
- electronic health records (EHR),
- health information exchange (HIE),
- Nationwide Health Information Network (NwHIN),
- personal health records (PHRs),
- patient portals,
- nanomedicine,
- genome-based personalized medicine,
- Geographical Positioning System (GPS),
- radio frequency identification (RFID),
- telemedicine,
- clinical decision support (CDS),
- mobile home health care
- and cloud computing.
3D printing can be used to produce specialized splints, prostheses, parts for medical devices, and inert implants. The end goal of 3D printing is to be able to print out customized replaceable body parts.
Assessment:
The concept of health technology assessment (HTA) was first coined in 1967 by the U.S. Congress in response to the increasing need to address the unintended and potential consequences of health technology, along with its prominent role in society. It was further institutionalized with the establishment of the congressional Office of Technology Assessment (OTA) in 1972-1973.
HTA is defined as a comprehensive form of policy research that examines short- and long-term consequences of the application of technology, including benefits, costs, and risks.
Due to the broad scope of technology assessment, it requires the participation of individuals besides scientists and health care practitioners such as managers and even the consumers.
There are several American organizations that provide health technology assessments and these include the Centers for Medicare and Medicaid Services (CMS) and the Veterans Administration through its VA Technology Assessment Program (VATAP). The models adopted by these institutions vary, although they focus on whether a medical technology being offered is therapeutically relevant. A study conducted in 2007 noted that the assessments still did not use formal economic analyses.
Aside from its development, however, assessment in the health technology industry has been viewed as sporadic and fragmented. Issues such as the determination of which products need to be developed, cost, and access, among others, have also emerged. These, some argue, need to be included in the assessment, since health technology is never purely a matter of science but also of beliefs, values, and ideologies.
One of the mechanisms being suggested, either as an element of or an alternative to current technology assessments, is bioethics, which is also referred to as the "fourth-generation" evaluation framework.
There are at least two dimensions to an ethical HTA. The first involves incorporating ethics into the methodological standards employed to assess technologies, while the second concerns the use of an ethical framework in the research and judgment of the researchers who produce information used in the industry.
Monitoring One's Health:
Smartphones, tablets, and wearable computers have allowed people to monitor their own health. These devices run numerous applications that are designed to provide simple health services and the monitoring of one's health.
An example of this is Fitbit, a fitness tracker that is worn on the user's wrist. This wearable technology allows people to track their steps, heart rate, floors climbed, miles walked, active minutes, and even sleep patterns. The data collected and analyzed allow users not just to keep track of their own health but also help manage it, particularly through its capability to identify health risk factors.
There is also the case of the Internet, which serves as a repository of information and expert content that people can use to "self-diagnose" instead of going to their doctor. For instance, one need only enter symptoms as search parameters in Google, and the search engine can suggest an illness from the content uploaded to the world wide web, particularly that provided by expert medical sources.
These advances may eventually have some effect on doctor visits and change the role of health professionals from "gatekeeper to secondary care to facilitator of information interpretation and decision-making." Apart from the basic services provided by Google Search, companies such as WebMD, or Infermedica with its product Symptomate, already offer dedicated symptom-checking apps.
Careers:
There are numerous careers to choose from in health technology in the USA. Listed below are some job titles and average salaries.
- Athletic Trainer, Salary: $41,340. Athletic trainers treat athletes and other individuals who have sustained injuries. They also teach people how to prevent injuries. They perform their job under the supervision of physicians.
- Dental Hygienist, Salary: $67,340. Dental hygienists provide preventative dental care and teach patients how to maintain good oral health. They usually work under dentists' supervision.
- Laboratory Technicians and Technologists, Salary: $51,770. Lab technicians and technologists perform laboratory tests and procedures. Technicians work under the supervision of a laboratory technologist or a laboratory manager.
- Nuclear Medicine Technologist, Salary: $67,910. Nuclear medicine technologists prepare and administer radiopharmaceuticals, radioactive drugs, to patients in order to treat or diagnose diseases.
- Pharmacy Technician, Salary: $28,070. Pharmacy technicians assist pharmacists with the preparation of prescription medications for customers.
Health Updates from AARP Magazine, October/November 2018 Issue (YouTube Videos). Pictured below:
(L) Belinda Smith with her battery-operated “heart”
(R) CAR T-cell therapy helped eliminate tumors on Marge Jacques' neck.
Heart Pump gives Patients More Time:
by Sari Harrar
BELINDA SMITH needed a new heart. Congestive heart failure had reduced the pumping power of her own ticker by 90 percent. “I needed a heart transplant, but I couldn’t wait that long,” recalls Smith, now 50, a mother of four from Dayton, Ohio. “Doctors gave me less than a year to live.” New hearts aren’t easy to come by.
There are about 4,000 people awaiting transplants, but just 2,500 donor hearts are available each year in the U.S. For people like Smith, though, there’s new hope: a battery-powered implant that plugs into the wall at night. Smith’s life was saved by one of these left ventricular assist devices (LVADs), which doctors installed in her chest in 2017. “It does the work my heart can’t do, pumping blood out into my body,” Smith explains. “I can do almost everything I did before — I just can’t get the cord and batteries wet. I have a waterproof bag for taking a shower, but I can’t go swimming.”
Her LVAD is the size of a D battery and weighs less than a pound.
During the day it runs on a battery pack attached to a power cord extending through her chest wall. “I carry two tote bags whenever I leave my apartment — with my controller, extra batteries, a car charger and an extra pump. If the batteries ever run out, I’ve got less than five minutes to plug in new ones,” she notes. “When I’m going to sleep, I plug myself into a wall outlet with a 25-foot power cord.” LVADs pump blood continuously; users don’t have a heartbeat. “You have to wear a medical-alert bracelet so EMTs don’t think you’re dead or try to give you CPR. That would crush the pump,” Smith says. “But I can hear the sound of the pump when I put my head on my arm at night. It purrs.”
Smith hopes to get a spot on a heart-transplant list soon. But for thousands of adults with heart failure, an LVAD is their future. The Food and Drug Administration last fall approved the first LVAD as a “destination therapy” for indefinite use for people with end-stage heart failure who aren’t candidates for a heart transplant.
“For more and more people, LVADs are a long-term option,” says Smith’s cardiologist, Hareeprasad R. Vongooru. “For someone with severe heart failure whose one-year survival odds are as low as 25 percent, this is a new lease on life.”
In-Artery Sensor Tracks Heart Failure:
For 25 percent of the 5.7 million Americans with congestive heart failure, the heart’s ability to pump blood is dangerously weak. Just walking across the room leaves them breathless. The risk of dying within a year is as high as 15 percent. Catching signs of worsening heart failure early is crucial, so a daily weigh-in to check for water-weight gain is required.
Now the first miniature sensor for this purpose picks up heart changes sooner, by sensing blood pressure shifts inside the pulmonary artery. The device, called CardioMEMS, has cut the risk of death by a respectable 30 percent in people with severe heart failure, according to a brand-new study of 2,174 people presented at the American College of Cardiology’s 67th annual Scientific Session & Expo this year.
The wireless sensor, about the size of a half-inch piece of spaghetti, transmits a constant stream of data to a website monitored by a health care practitioner who can quickly recommend medication adjustments.
On the Horizon: Heart Failure Treatment:
"The big push is prevention. That means giving people not just aspirin and cholesterol-lowering statins but also beta-blockers, ACE inhibitors and the diuretic Aldactone (spironolactone), all of which help protect heart muscle after a heart attack. Though these drugs are recognized for helping to prevent heart failure, they are not always given to people after a heart attack. The other important push is controlling high blood pressure. Even in older adults, keeping blood pressure at a healthy level can reduce the risk for heart failure."
— HAREEPRASAD R. VONGOORU, cardiologist and assistant professor of cardiovascular health and disease at the University of Cincinnati
___________________________________________________________________________
New Weapons in Fight Against Cancer
CAR T-cell therapy, gene testing can help beat this deadly disease
by Sari Harrar, AARP The Magazine, October/November 2018
MARGE JACQUES battled non-Hodgkin’s lymphoma for 15 years, but the white blood cell cancer kept coming back despite multidrug chemotherapy cocktails, liquid radiation and even stem cell therapy. “It returned every two years. In 2014 I had lumps all down my neck, because there were tumors in my lymph glands,” says Jacques, now 63. “I cried as I drove to see my oncologist in Philadelphia. Then he told me there was something new to try.”
Jacques became one of the first American adults to get CAR T-cell therapy — the nation's first personalized cellular cancer treatment — for her specific cancer, diffuse large B-cell lymphoma. The treatment removed some of her T cells, which are the immune system's frontline killers. The cells were then genetically modified to produce a receptor that recognizes cancer cells, enabling the T cells to attack them.
Miraculously, Jacques’ cancer went into remission. “Slowly, the tumors shrank. Blood tests and scans showed it was gone,” she adds. Even better: It has not returned, and she is back to enjoying life. “Throughout my years with cancer, I never felt sorry for myself. But it’s a rocky road,” says this single mother and project manager from Collegeville, Pa. “Humor, family and friends got me through all this.”
Today the groundbreaking treatment is available commercially. In October 2017 and May 2018, the Food and Drug Administration (FDA) approved two CAR T-cell drugs for patients such as Jacques, whose diffuse large B-cell lymphoma doesn’t respond to other treatments.
Experts expect that these drugs, which are covered by Medicare, will be deployed against a variety of cancers in the future. “This is a spectacular new therapy,” notes Jacques’ oncologist, CAR T-cell researcher Stephen J. Schuster, director of the lymphoma program at the University of Pennsylvania. “It’s saving the lives of people whose prognosis was measured in months, not years. In studies, 50 percent respond and about 1 in 3 are long-term, disease-free survivors.”
Mega Gene Test Checks for Targeted Cancer Treatment:
Designated a breakthrough by the FDA when it was approved in late 2017, FoundationOne CDx is one-stop shopping for gene-based treatments. Just one test can check for mutations in 324 genes involved in cancer growth, allowing oncologists to identify the most promising treatments quickly. The FDA recently approved a similar test from Memorial Sloan Kettering Cancer Center in New York City; it’s called MSK-Impact, and it looks for mutations in 468 genes.
On the Horizon: Cancer Treatment:
"From August 2017 through July 2018, the FDA approved 14 new anticancer therapeutics. These include nine molecularly targeted therapeutics, part of the new wave of precision medicines, as well as two targeted forms of radiotherapy and two new CAR T-cell therapies.
There are preclinical studies of vaccines that target oncogenes (early genetic changes that initiate cancers). We expect to see these tested in the next few years in individuals who are at high risk for cancer development."
— ELIZABETH M. JAFFEE, oncologist and president of the American Association for Cancer Research
Regenerative Medicine, including Tissue Engineering by the Mooney Lab, Laboratory for Cell and Tissue Engineering (Harvard University)
YouTube Video: Healing from Within: The Promise of Regenerative Medicine (Mayo Clinic)
Pictured below: Tissue Engineering and Regenerative Medicine (see further amplification by Mooney Labs following image)
Tissue Engineering and Regenerative Medicine by Mooney Lab
Laboratory for Cell and Tissue Engineering (Harvard University): "There is a tremendous need for new strategies to promote the regeneration of musculoskeletal, skin, vascular and dental/oral/craniofacial tissues, due to the large number of patients suffering from disease or trauma to these tissues."
Regenerative Medicine by Wikipedia:
Regenerative medicine is a branch of translational research in tissue engineering and molecular biology which deals with the "process of replacing, engineering or regenerating human cells, tissues or organs to restore or establish normal function". This field holds the promise of engineering damaged tissues and organs by stimulating the body's own repair mechanisms to functionally heal previously irreparable tissues or organs.
Regenerative medicine also includes the possibility of growing tissues and organs in the laboratory and implanting them when the body cannot heal itself. If a regenerated organ's cells would be derived from the patient's own tissue or cells, this would potentially solve the problem of the shortage of organs available for donation, and the problem of organ transplant rejection.
Some of the biomedical approaches within the field of regenerative medicine may involve the use of stem cells. Examples include the injection of stem cells or progenitor cells obtained through directed differentiation (cell therapies); the induction of regeneration by biologically active molecules administered alone or as a secretion by infused cells (immunomodulation therapy); and transplantation of in vitro grown organs and tissues (tissue engineering).
Click on any of the following blue hyperlinks for more about Regenerative Medicine:
Mayo Clinic
- YouTube Video: Telemedicine Advances-Mayo Clinic
- YouTube Video: Mayo Clinic Minute: The Mayo Clinic Diet
- YouTube Video: Mayo Clinic Minute: What to Eat for Brain Health
The Mayo Clinic is a nonprofit academic medical center based in Rochester, Minnesota, focused on integrated clinical practice, education, and research. It employs more than 4,500 physicians and scientists, along with another 58,400 administrative and allied health staff.
The practice specializes in treating difficult cases through tertiary care and destination medicine. It is home to the highly ranked Mayo Clinic Alix School of Medicine in addition to many of the largest, best regarded residency education programs in the United States. It spends over $660 million a year on research and has more than 3,000 full-time research personnel.
William Worrall Mayo settled his family in Rochester in 1864 and opened a sole proprietorship medical practice that evolved under his sons, Will and Charlie Mayo, into Mayo Clinic.
Today, in addition to its flagship hospital in Rochester, Mayo Clinic has major campuses in Arizona and Florida. The Mayo Clinic Health System also operates affiliated facilities throughout Minnesota, Wisconsin, and Iowa.
Mayo Clinic is ranked number 1 in the United States on the 2018–2019 U.S. News & World Report Best Hospitals Honor Roll, and has held a position at or near the top for more than 27 years. It has also appeared on Fortune magazine's list of "100 Best Companies to Work For" for fourteen consecutive years, through 2017.
Core Operations:
Patient care:
Each year, more than 1.3 million different patients from all 50 states and from more than 150 countries are seen at one of the Mayo Clinic facilities. Mayo Clinic offers highly specialized medical care, and a large portion of the patient population are referrals from smaller clinics and hospitals across the upper Midwest and the United States.
Mayo Clinic physicians are paid a fixed salary, which is not linked to patient volume (relative value units) or income from fee-for-service payments. This practice is thought to decrease the monetary motivation to see patients in excessive numbers and increase the incentive to spend more time with individuals. Salaries are determined by the marketplace salaries for physicians in comparable large group practices.
Research:
Mayo Clinic researchers contribute to the understanding of disease processes, best clinical practices, and translation of findings from the laboratory to the clinical practice. Nearly 600 doctoral level physicians and research scientists are employed, with an additional 3,400 other health personnel and students with appointments in research.
In 2015, more than 2,700 research protocols were reviewed by the Mayo Clinic Institutional Review Board, and 11,000 human research studies were ongoing. These research initiatives led to more than 7,300 research publications and review articles in peer-reviewed journals.
Education:
Main article: Mayo Clinic College of Medicine and Science
The Mayo Clinic College of Medicine and Science (MCCMS), established in 1915, offers educational programs embedded in Mayo Clinic's clinical practice and biomedical research activities.
MCCMS consists of five accredited schools, including the M.D. degree-granting Mayo Clinic Alix School of Medicine as well as the master's and Ph.D. degree-granting Mayo Clinic Graduate School of Biomedical Sciences.
The Mayo Clinic School of Health Sciences offers training for about 50 health sciences career fields. The Mayo Clinic School of Graduate Medical Education runs over 270 residencies and fellowships in all medical and surgical specialties. The Mayo Clinic School of Continuous Professional Development delivers continuing education courses aimed at practicing medical professionals.
Innovation:
Mayo Clinic has adopted more than 15,000 mobile devices from Apple for patient care, including the iPad, iPad Mini and iPhone. Mayo Clinic then created an app for these devices called Synthesis Mobile, which integrated hundreds of their health systems. For Mayo Clinic Care Network members, more apps were created to help patients see their medical records or ask clinicians for assistance.
In 2014, Mayo Clinic was developing an app for Apple's HealthKit to help users maintain healthy lifestyles and warn of certain health signs that need attention.
Mayo Clinic, in collaboration with real estate firm Delos Living, launched the Well Living Lab in September 2015. This research facility is designed to simulate real-world, non-hospital environments to allow Mayo Clinic researchers to study the interaction between indoor spaces and human health.
The Mayo Clinic Center for Innovation, established in 2008, was one of the pioneers of innovation in healthcare. It has since worked on over 270 projects and is often looked to as a role model for using design in healthcare.
In March 2018, Mayo Clinic and Mytonomy, a healthcare education system company, partnered to provide video content for cancer patients. The video content is used to address important questions and answers and designed to aid in the decision-making process between patient and doctor.
Click on any of the following blue hyperlinks for more about the Mayo Clinic:
Breakthrough Medical Therapy (FDA)
- YouTube Video: Breakthrough Therapy (FDA December 2015)
- YouTube Video: FDA Approves Breakthrough Cancer Treatment
Breakthrough therapy is a United States Food and Drug Administration designation that expedites drug development. It was created by Congress under Section 902 of the Food and Drug Administration Safety and Innovation Act of 9 July 2012.
The FDA's "breakthrough therapy" designation is not intended to imply that a drug is actually a "breakthrough" or that there is high-quality evidence of treatment efficacy for a particular condition; rather, it allows the FDA to grant priority review to drug candidates if preliminary clinical trials indicate that the therapy may offer substantial treatment advantages over existing options for patients with serious or life-threatening diseases.
The FDA has other mechanisms for expediting the review and approval process for promising drugs, including fast track designation, accelerated approval, and priority review.
Requirements:
A breakthrough therapy designation can be assigned to a drug if "it is a drug which is intended alone or in combination with one or more other drugs to treat a serious or life threatening disease or condition" and if "the preliminary clinical evidence indicates that the drug may demonstrate substantial improvement over existing therapies on one or more clinically significant endpoints, such as substantial treatment effects observed early in clinical development."
Requests are reviewed by the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER). CDER receives approximately 100 requests per year for breakthrough designation. Historically, about one third were approved. CBER receives 15–30 requests per year.
Sponsors must apply for breakthrough status separately for each indication they intend to label the drug for.
Breakthrough designation applications are submitted as an amendment to the Investigational New Drug (IND) application before the initiation of clinical trials.
Incentives:
Drugs that have been granted breakthrough status are given priority review. The FDA works with the sponsor of the drug application to expedite the approval process. This expedited process can include rolling reviews, smaller clinical trials, and alternative trial designs.
Issues:
Critics have said that the name is misleading and provides companies that obtain a breakthrough designation for a drug candidate with a marketing advantage that may be undeserved.
The FDA acknowledges that the name "breakthrough therapy" may be misleading. It was never meant to imply that these drugs are actually "breakthroughs," and it does not ensure that they will provide clinical benefit. Critics still complain that designations rest on preliminary evidence, including changes in surrogate markers such as laboratory measurements, that often doesn't reflect "meaningful clinical benefit."
The FDA guidance states: "Not all products designated as breakthrough therapies ultimately will be shown to have the substantial improvement over available therapies suggested by the preliminary clinical evidence at the time of designation. If the designation is no longer supported by subsequent data, FDA may rescind the designation."
See also:
The FDA's "breakthrough therapy" designation is not intended to imply that a drug is actually a "breakthrough" or that there is high-quality evidence of treatment efficacy for a particular condition; rather, it allows the FDA to grant priority review to drug candidates if preliminary clinical trials indicate that the therapy may offer substantial treatment advantages over existing options for patients with serious or life-threatening diseases.
The FDA has other mechanisms for expediting the review and approval process for promising drugs, including fast track designation, accelerated approval, and priority review.
Requirements:
A breakthrough therapy designation can be assigned to a drug if "it is a drug which is intended alone or in combination with one or more other drugs to treat a serious or life threatening disease or condition" and if the preliminary clinical evidence indicates that the drug may demonstrate substantial improvement over existing therapies on one or more clinically significant endpoints, such as substantial treatment effects observed early in clinical development."
Requests are reviewed by the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER). CDER receives approximately 100 requests per year for breakthrough designation. Historically, about one third were approved. CBER receives 15–30 requests per year.
Sponsors must apply for breakthrough status separately for each indication they intend to label the drug for.
Breakthrough designation applications are submitted as an amendment to the IND applications before the initiation of clinical trials.
Incentives:
Drugs that have been granted breakthrough status are given priority review. The FDA works with the sponsor of the drug application to expedite the approval process. This expedited process can include rolling reviews, smaller clinical trials, and alternative trial designs.
Issues:
Critics have said that the name is misleading and provides companies that obtain a breakthrough designation for a drug candidate with a marketing advantage that may be undeserved.
The FDA acknowledges that the name "breakthrough therapy" may be misleading. It was never meant to imply that these drugs are actually "breakthroughs," and the designation does not ensure that they will provide clinical benefit. Critics nevertheless complain that designations are based on preliminary evidence, including changes in surrogate markers such as laboratory measurements, that often do not reflect "meaningful clinical benefit."
The FDA guidance states: "Not all products designated as breakthrough therapies ultimately will be shown to have the substantial improvement over available therapies suggested by the preliminary clinical evidence at the time of designation. If the designation is no longer supported by subsequent data, FDA may rescind the designation."
See also:
- Official FDA Website
- List of drugs granted breakthrough therapy designation
- FDA Fast Track Development Program
- Priority review (FDA)
- Orphan drug
Personalities, including a List
- YouTube Video: A Psychiatrist's Perspective about Donald Trump's Personality
- YouTube Video: 'The Dangerous Case Of Donald Trump': 27 Psychiatrists Assess | The Last Word | MSNBC
- YouTube Video: Proof Trump Has Narcissistic Personality Disorder
Personality is defined as the characteristic set of behaviors, cognitions, and emotional patterns that evolve from biological and environmental factors.
While there is no generally agreed upon definition of personality, most theories focus on motivation and psychological interactions with one's environment.
Trait-based personality theories, such as those defined by Raymond Cattell, define personality as the traits that predict a person's behavior. On the other hand, more behaviorally based approaches define personality through learning and habits. Nevertheless, most theories view personality as relatively stable.
The study of the psychology of personality, called personality psychology, attempts to explain the tendencies that underlie differences in behavior. Many approaches have been taken to studying personality, including biological, cognitive, learning-based, and trait-based theories, as well as psychodynamic and humanistic approaches.
Personality psychology is divided among competing schools of thought, with influential theories posited by Sigmund Freud, Alfred Adler, Gordon Allport, Hans Eysenck, Abraham Maslow, and Carl Rogers.
Measuring:
Personality can be assessed through a variety of tests. Because personality is a complex construct, the dimensions of personality and the scales of personality tests vary and are often poorly defined.
Two main tools to measure personality are objective tests and projective measures. Examples of such tests are the: Big Five Inventory (BFI), Minnesota Multiphasic Personality Inventory (MMPI-2), Rorschach Inkblot test, Neurotic Personality Questionnaire KON-2006, or Eysenck's Personality Questionnaire (EPQ-R).
All of these tests are beneficial because they have both reliability and validity, two factors that make a test accurate. "Each item should be influenced to a degree by the underlying trait construct, giving rise to a pattern of positive intercorrelations so long as all items are oriented (worded) in the same direction."
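The quoted principle, that items worded in the same direction should show positive intercorrelations, is what internal-consistency statistics such as Cronbach's alpha capture. A minimal sketch, using simulated data rather than any real inventory:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses: 6 items that all load on one latent trait,
# so they intercorrelate positively and alpha comes out high (~0.86).
rng = np.random.default_rng(0)
trait = rng.normal(size=500)                    # latent trait, one per respondent
items = trait[:, None] + rng.normal(size=(500, 6))
print(round(cronbach_alpha(items), 2))
```

With uncorrelated items the same calculation drops toward zero, which is why reliability analyses flag reverse-worded items that were not rescored before summing.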
Another measuring tool psychologists use is the 16PF Questionnaire, which measures personality based on Cattell's 16-factor theory of personality. Psychologists also use it as a clinical instrument to diagnose psychiatric disorders and to help with prognosis and therapy planning.
The Big Five Inventory is the most widely used measuring tool because its criteria span the different factors of personality, allowing psychologists to gather the most accurate information they can.
Five-factor model:
Personality is often broken into statistically-identified factors called the Big Five, which are openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism (or emotional stability). These components are generally stable over time, and about half of the variance appears to be attributable to a person's genetics rather than the effects of one's environment.
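The estimate that about half of the variance is attributable to genetics typically comes from twin studies. A back-of-envelope version is Falconer's formula, h^2 = 2(rMZ - rDZ); the correlations below are illustrative numbers, not measured values:

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Falconer's estimate h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ
    are the trait correlations for identical and fraternal twin pairs."""
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations for a Big Five trait:
print(falconer_heritability(r_mz=0.50, r_dz=0.25))  # -> 0.5, i.e. ~half the variance
```

The logic: identical twins share all their genes and fraternal twins about half, so the gap between the two correlations indexes how much of the trait's variance is genetic.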
Some research has investigated whether the relationship between happiness and extraversion seen in adults can also be seen in children. The implications of these findings can help identify children that are more likely to experience episodes of depression and develop types of treatment that such children are likely to respond to. In both children and adults, research shows that genetics, as opposed to environmental factors, exert a greater influence on happiness levels.
Personality is not stable over the course of a lifetime, and it changes much more quickly during childhood; for this reason, personality constructs in children are referred to as temperament.
Temperament is regarded as the precursor to personality. Whereas McCrae and Costa's Big Five model assesses personality traits in adults, the EAS (emotionality, activity, and sociability) model is used to assess temperament in children. This model measures levels of emotionality, activity, sociability, and shyness in children. Personality theorists consider the EAS model of temperament similar to the Big Five model in adults; however, this similarity might be due to a conflation of the concepts of personality and temperament described above.
Findings show that high degrees of sociability and low degrees of shyness are equivalent to adult extraversion, and correlate with higher levels of life satisfaction in children.
Another interesting finding is the link between acting extraverted and positive affect. Extraverted behaviors include acting talkative, assertive, adventurous, and outgoing.
In the study establishing this link, positive affect was defined as the experience of happy and enjoyable emotions. The study investigated the effects of acting in a way that is counter to a person's dispositional nature; in other words, it focused on the benefits and drawbacks of introverts (people who are shy, socially inhibited, and non-aggressive) acting extraverted, and of extraverts acting introverted.
After acting extraverted, introverts' experience of positive affect increased whereas extraverts seemed to experience lower levels of positive affect and suffered from the phenomenon of ego depletion. Ego depletion, or cognitive fatigue, is the use of one's energy to overtly act in a way that is contrary to one's inner disposition. When people act in a contrary fashion, they divert most, if not all, (cognitive) energy toward regulating this foreign style of behavior and attitudes.
Because all available energy is being used to maintain this contrary behavior, the result is an inability to use any energy to make important or difficult decisions, plan for the future, control or regulate emotions, or perform effectively on other cognitive tasks.
One question that has been posed is why extraverts tend to be happier than introverts. The two types of explanations that attempt to account for this difference are instrumental theories and temperamental theories. The instrumental theory suggests that extraverts end up making choices that place them in more positive situations, and that they also react more strongly than introverts to positive situations.
The temperamental theory suggests that extraverts have a disposition that generally leads them to experience a higher degree of positive affect. In their study of extraversion, Lucas and Baird found no statistically significant support for the instrumental theory but did, however, find that extraverts generally experience a higher level of positive affect.
Research has been done to uncover some of the mediators that are responsible for the correlation between extraversion and happiness. Self-esteem and self-efficacy are two such mediators.
Self-efficacy is one's belief about abilities to perform up to personal standards, the ability to produce desired results, and the feeling of having some ability to make important life decisions. Self-efficacy has been found to be related to the personality traits of extraversion and subjective well-being.
Self-efficacy, however, only partially mediates the relationship between extraversion (and neuroticism) and subjective happiness. This implies that there are most likely other factors that mediate the relationship between subjective happiness and personality traits. Self-esteem may be another similar factor.
Individuals with a greater degree of confidence about themselves and their abilities seem to have both higher degrees of subjective well-being and higher levels of extraversion.
Other research has examined the phenomenon of mood maintenance as another possible mediator. Mood maintenance is the ability to maintain one's average level of happiness in the face of an ambiguous situation – meaning a situation that has the potential to engender either positive or negative emotions in different individuals.
It has been found to be a stronger force in extraverts. This means that the happiness levels of extraverted individuals are less susceptible to the influence of external events. This finding implies that extraverts' positive moods last longer than those of introverts.
Developmental biological model:
Modern conceptions of personality, such as the Temperament and Character Inventory have suggested four basic temperaments that are thought to reflect basic and automatic responses to danger and reward that rely on associative learning.
The four temperaments (harm avoidance, reward dependence, novelty seeking, and persistence) are somewhat analogous to the ancient conception of melancholic, sanguine, choleric, and phlegmatic personality types, although the temperaments reflect dimensions rather than distinct categories.
While factor based approaches to personality have yielded models that account for significant variance, the developmental biological model has been argued to better reflect underlying biological processes. Distinct genetic, neurochemical and neuroanatomical correlates responsible for each temperamental trait have been observed, unlike with five factor models.
The harm avoidance trait has been associated with increased reactivity in insular and amygdala salience networks, as well as reduced 5-HT2 receptor binding peripherally and reduced GABA concentrations. Novelty seeking has been associated with reduced activity in insular salience networks and increased striatal connectivity. Novelty seeking also correlates with dopamine synthesis capacity in the striatum, and with reduced autoreceptor availability in the midbrain.
Reward dependence has been linked with the oxytocin system, with increased concentration of plasma oxytocin being observed, as well as increased volume in oxytocin related regions of the hypothalamus. Persistence has been associated with increased striatal-mPFC connectivity, increased activation of ventral striatal-orbitofrontal-anterior cingulate circuits, as well as increased salivary amylase levels indicative of increased noradrenergic tone.
Environmental influences:
It has been shown that personality traits are more malleable by environmental influences than researchers originally believed. Personality differences predict the occurrence of life experiences.
One study has shown how the home environment, specifically the type of parenting a person receives, can affect and shape their personality. Mary Ainsworth's Strange Situation experiment showed how babies reacted to having their mother leave them alone in a room with a stranger.
The different styles of attachment, as labelled by Ainsworth, were secure, ambivalent, avoidant, and disorganized. Children who were securely attached tend to be more trusting and sociable, and are confident in their day-to-day lives. Children who were disorganized were reported to have higher levels of anxiety, anger, and risk-taking behavior.
Judith Rich Harris's group socialization theory postulates that an individual's peer groups, rather than parental figures, are the primary influence of personality and behavior in adulthood.
Intra- and intergroup processes, not dyadic relationships such as parent-child relationships, are responsible for the transmission of culture and for environmental modification of children's personality characteristics. Thus, this theory points at the peer group representing the environmental influence on a child's personality rather than the parental style or home environment.
Tetsuya Kawamoto's study "Personality Change from Life Experiences: Moderation Effect of Attachment Security" focused on the effects of life experiences on personality change. Its assessments suggested that "the accumulation of small daily experiences may work for the personality development of university students and that environmental influences may vary by individual susceptibility to experiences, like attachment security".
Cross-cultural studies:
There has been some recent debate over studying personality across cultures. Some researchers think that personality comes entirely from culture, so that no meaningful cross-cultural study is possible. Others believe that some elements are shared by all cultures, and an effort is being made to demonstrate the cross-cultural applicability of "the Big Five".
Cross-cultural assessment depends on the universality of personality traits, which is whether there are common traits among humans regardless of culture or other factors. If there is a common foundation of personality, then it can be studied on the basis of human traits rather than within certain cultures. This can be measured by comparing whether assessment tools are measuring similar constructs across countries or cultures.
Two approaches to researching personality are looking at emic and etic traits. Emic traits are constructs unique to each culture, determined by local customs, thoughts, beliefs, and characteristics. Etic traits are considered universal constructs, evident across cultures, that represent the biological basis of human personality.
If personality traits are unique to individual culture, then different traits should be apparent in different cultures. However, the idea that personality traits are universal across cultures is supported by establishing the Five Factor Model of personality across multiple translations of the NEO-PI-R, which is one of the most widely used personality measures.
When the NEO-PI-R was administered to 7,134 people across six languages, the results showed a similar pattern of the same five underlying constructs found in the American factor structure.
Similar results were found using the Big Five Inventory (BFI), as it was administered in 56 nations across 28 languages. The five factors continued to be supported both conceptually and statistically across major regions of the world, suggesting that these underlying factors are common across cultures.
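Claims that "the same five factors" appear across translations are usually backed by comparing factor loadings with a similarity index such as Tucker's congruence coefficient, where values above roughly .95 are read as the same factor. A sketch with hypothetical loadings (not figures from any published study):

```python
import numpy as np

def tucker_congruence(a: np.ndarray, b: np.ndarray) -> float:
    """Tucker's phi between two factor-loading vectors (1.0 = identical shape)."""
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Hypothetical extraversion-item loadings from two translations of one inventory
loadings_a = np.array([0.71, 0.65, 0.80, 0.58, 0.62])
loadings_b = np.array([0.68, 0.60, 0.77, 0.55, 0.66])
print(round(tucker_congruence(loadings_a, loadings_b), 3))
```

In invariance testing this is computed per factor, typically after rotating one solution toward the other; a low phi for a factor in some translation is what signals a culture-specific structure.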
There are some differences across culture but they may be a consequence of using a lexical approach to study personality structures, as language has limitations in translation and different cultures have unique words to describe emotion or situations.
For example, the term "feeling blue" is used to describe sadness in more Westernized cultures, but does not translate to other languages. Differences across cultures could be due to real cultural differences, but they could also be consequences of poor translations, biased sampling, or differences in response styles across cultures.
Examining personality questionnaires developed within a culture can also be useful evidence for the universality of traits across cultures, as the same underlying factors can still be found. Results from several European and Asian studies have found overlapping dimensions with the Five Factor Model as well as additional culture-unique dimensions. Finding similar factors across cultures provides support for the universality of personality trait structure, but more research is necessary to gain stronger support.
Historical development of concept:
The modern sense of individual personality is a result of the shifts in culture originating in the Renaissance, an essential element in modernity. In contrast, the Medieval European's sense of self was linked to a network of social roles: "the household, the kinship network, the guild, the corporation – these were the building blocks of personhood".
Stephen Greenblatt observes, in recounting the recovery (1417) and career of Lucretius' poem De rerum natura: "at the core of the poem lay key principles of a modern understanding of the world." "Dependent on the family, the individual alone was nothing," Jacques Gélis observes. "The characteristic mark of the modern man has two parts: one internal, the other external; one dealing with his environment, the other with his attitudes, values, and feelings."
Rather than being linked to a network of social roles, the modern man is largely influenced by environmental factors such as "urbanization, education, mass communication, industrialization, and politicization."
Temperament and philosophy:
William James (1842–1910) argued that temperament explains a great deal of the controversies in the history of philosophy by arguing that it is a very influential premise in the arguments of philosophers. Despite seeking only impersonal reasons for their conclusions, James argued, the temperament of philosophers influenced their philosophy.
Temperament thus conceived is tantamount to a bias. Such bias, James explained, was a consequence of the trust philosophers place in their own temperament. James thought the significance of his observation lay on the premise that in philosophy an objective measure of success is whether a philosophy is peculiar to its philosopher or not, and whether a philosopher is dissatisfied with any other way of seeing things or not.
Mental make-up:
James argued that temperament may be the basis of several divisions in academia, but focused on philosophy in his 1907 lectures on Pragmatism. In fact, James' lecture of 1907 fashioned a sort of trait theory of the empiricist and rationalist camps of philosophy.
As in most modern trait theories, the traits of each camp are described by James as distinct and opposite, and may be possessed in different proportions on a continuum, and thus characterize the personality of philosophers of each camp. The "mental make-up" (i.e. personality) of rationalist philosophers is described as "tender-minded" and going by "principles," and that of empiricist philosophers as "tough-minded" and going by "facts."
James distinguishes each not only in terms of the philosophical claims they made in 1907, but by arguing that such claims are made primarily on the basis of temperament.
Furthermore, such categorization was only incidental to James' purpose of explaining his pragmatist philosophy, and is not exhaustive.
Empiricists and rationalists:
According to James, the temperament of rationalist philosophers differed fundamentally from the temperament of empiricist philosophers of his day. The tendency of rationalist philosophers toward refinement and superficiality never satisfied an empiricist temper of mind.
Rationalism leads to the creation of closed systems, and such optimism is considered shallow by the fact-loving mind, for whom perfection is far off. Rationalism is regarded as pretension, and a temperament most inclined to abstraction. The temperament of rationalists, according to James, led to sticking with logic.
Empiricists, on the other hand, stick with the external senses rather than logic. British empiricist John Locke's (1632–1704) explanation of personal identity provides an example of what James referred to. Locke explains the identity of a person, i.e. personality, on the basis of a precise definition of identity, by which the meaning of identity differs according to what it is being applied to.
The identity of a person is quite distinct from the identity of a man, woman, or substance, according to Locke. Locke concludes that consciousness is personality because it "always accompanies thinking, it is that which makes every one to be what he calls self," and remains constant in different places at different times. Thus his explanation of personal identity is in terms of experience, as James indeed maintained is the case for most empiricists.
Rationalists conceived of the identity of persons differently than empiricists such as Locke who distinguished identity of substance, person, and life. According to Locke, Rene Descartes (1596–1650) agreed only insofar as he did not argue that one immaterial spirit is the basis of the person "for fear of making brutes thinking things too."
According to James, Locke tolerated arguments that a soul was behind the consciousness of any person. However, Locke's successor David Hume (1711–1776), and empirical psychologists after him denied the soul except for being a term to describe the cohesion of inner lives.
However, some research suggests Hume excluded personal identity from his opus An Inquiry Concerning Human Understanding because he thought his argument was sufficient but not compelling. Descartes himself distinguished active and passive faculties of mind, each contributing to thinking and consciousness in different ways.
The passive faculty, Descartes argued, simply receives, whereas the active faculty produces and forms ideas, but does not presuppose thought, and thus cannot be within the thinking thing. The active faculty must not be within the self, because ideas are produced without any awareness of them, and are sometimes produced against one's will.
Rationalist philosopher Benedictus Spinoza (1632–1677) argued that ideas are the first element constituting the human mind, but existed only for actually existing things. In other words, ideas of non-existent things are without meaning for Spinoza, because an idea of a non-existent thing cannot exist. Further, Spinoza's rationalism argued that the mind does not know itself, except insofar as it perceives the "ideas of the modifications of body," in describing its external perceptions, or perceptions from without.
On the contrary, from within, Spinoza argued, perceptions connect various ideas clearly and distinctly. The mind is not the free cause of its actions for Spinoza. Spinoza equates the will with the understanding, and explains the common distinction of these things as being two different things as error which results from the individual's misunderstanding of the nature of thinking.
Biology:
The biological basis of personality is the theory that anatomical structures located in the brain contribute to personality traits. This stems from neuropsychology, which studies how the structure of the brain relates to various psychological processes and behaviors.
For instance, in human beings, the frontal lobes are responsible for foresight and anticipation, and the occipital lobes are responsible for processing visual information. In addition, certain physiological functions such as hormone secretion also affect personality. For example, the hormone testosterone is important for sociability, affectivity, aggressiveness, and sexuality.
Additionally, studies show that the expression of a personality trait depends on the volume of the brain cortex it is associated with.
There is also a confusion among some psychologists who conflate personality with temperament. Temperament traits that are based on weak neurochemical imbalances within neurotransmitter systems are much more stable, consistent in behavior and show up in early childhood; they can't be changed easily but can be compensated for in behavior. In contrast to that, personality traits and features are the product of the socio-cultural development of humans and can be learned and/or changed.
Personology:
Personology confers a multidimensional, complex, and comprehensive approach to personality. According to Henry A. Murray, personology is "the branch of psychology which concerns itself with the study of human lives and the factors that influence their course, which investigates individual differences and types of personality... the science of men, taken as gross units... encompassing 'psychoanalysis' (Freud), 'analytical psychology' (Jung), 'individual psychology' (Adler) and other terms that stand for methods of inquiry or doctrines rather than realms of knowledge."
From a holistic perspective, personology studies personality as a whole, as a system, but at the same time through all its components, levels, and spheres.
Psychiatry:
Psychiatry is the medical specialty devoted to the diagnosis, prevention and treatment of mental disorders. High neuroticism is an independent prospective predictor for the development of the common mental disorders.
Interest in the history of psychiatry continues to grow, with an increasing emphasis on topics of current interest such as the history of psychopharmacology, electroconvulsive therapy, and the interplay between psychiatry and society.
See also:
While there is no generally agreed upon definition of personality, most theories focus on motivation and psychological interactions with one's environment.
Trait-based personality theories, such as those defined by Raymond Cattell define personality as the traits that predict a person's behavior. On the other hand, more behaviorally based approaches define personality through learning and habits. Nevertheless, most theories view personality as relatively stable.
The study of the psychology of personality, called personality psychology, attempts to explain the tendencies that underlie differences in behavior. Many approaches have been taken on to study personality, including biological, cognitive, learning and trait based theories, as well as psychodynamic, and humanistic approaches.
Personality psychology is divided among the first theorists, with a few influential theories being posited by Sigmund Freud, Alfred Adler, Gordon Allport, Hans Eysenck, Abraham Maslow, and Carl Rogers.
Measuring:
Personality can be determined through a variety of tests. Due to the fact that personality is a complex idea, the dimensions of personality and scales of personality tests vary and often are poorly defined.
Two main tools to measure personality are objective tests and projective measures. Examples of such tests are the: Big Five Inventory (BFI), Minnesota Multiphasic Personality Inventory (MMPI-2), Rorschach Inkblot test, Neurotic Personality Questionnaire KON-2006, or Eysenck's Personality Questionnaire (EPQ-R).
All of these tests are beneficial because they have both reliability and validity, two factors that make a test accurate. "Each item should be influenced to a degree by the underlying trait construct, giving rise to a pattern of positive intercorrelations so long as all items are oriented (worded) in the same direction."
A recent, but not well-known, measuring tool that psychologists use is the 16 PF. It measures personality based on Cattell's 16 factor theory of personality. Psychologists also use it as a clinical measuring tool to diagnose psychiatric disorders and help with prognosis and therapy planning.
The Big Five Inventory is the most used measuring tool because it has criterion that expands across different factors in personality, allowing psychologists to have the most accurate information they can garner.
Five-factor model:
Personality is often broken into statistically-identified factors called the Big Five, which are openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism (or emotional stability). These components are generally stable over time, and about half of the variance appears to be attributable to a person's genetics rather than the effects of one's environment.
Some research has investigated whether the relationship between happiness and extraversion seen in adults can also be seen in children. The implications of these findings can help identify children that are more likely to experience episodes of depression and develop types of treatment that such children are likely to respond to. In both children and adults, research shows that genetics, as opposed to environmental factors, exert a greater influence on happiness levels.
Personality is not stable over the course of a lifetime, but it changes much more quickly during childhood, so personality constructs in children are referred to as temperament.
Temperament is regarded as the precursor to personality. Whereas McCrae and Costa's Big Five model assesses personality traits in adults, the EAS (emotionality, activity, and sociability) model is used to assess temperament in children. This model measures levels of emotionality, activity, sociability, and shyness in children. The personality theorists consider temperament EAS model similar to the Big Five model in adults; however, this might be due to a conflation of concepts of personality and temperament as described above.
Findings show that high degrees of sociability and low degrees of shyness are equivalent to adult extraversion, and correlate with higher levels of life satisfaction in children.
Another interesting finding has been the link found between acting extraverted and positive affect. Extraverted behaviors include acting talkative, assertive, adventurous, and outgoing.
For the purposes of this study, positive affect is defined as experiences of happy and enjoyable emotions. This study investigated the effects of acting in a way that is counter to a person's dispositional nature. In other words, the study focused on the benefits and drawbacks of introverts (people who are shy, socially inhibited and non-aggressive) acting extraverted, and of extraverts acting introverted.
After acting extraverted, introverts' experience of positive affect increased whereas extraverts seemed to experience lower levels of positive affect and suffered from the phenomenon of ego depletion. Ego depletion, or cognitive fatigue, is the use of one's energy to overtly act in a way that is contrary to one's inner disposition. When people act in a contrary fashion, they divert most, if not all, (cognitive) energy toward regulating this foreign style of behavior and attitudes.
Because all available energy is being used to maintain this contrary behavior, the result is an inability to use any energy to make important or difficult decisions, plan for the future, control or regulate emotions, or perform effectively on other cognitive tasks.
One question that has been posed is why extraverts tend to be happier than introverts. The two types of explanations attempt to account for this difference are instrumental theories and temperamental theories. The instrumental theory suggests that extraverts end up making choices that place them in more positive situations and they also react more strongly than introverts to positive situations.
The temperamental theory suggests that extraverts have a disposition that generally leads them to experience a higher degree of positive affect. In their study of extraversion, Lucas and Baird found no statistically significant support for the instrumental theory but did, however, find that extraverts generally experience a higher level of positive affect.
Research has been done to uncover some of the mediators that are responsible for the correlation between extraversion and happiness. Self-esteem and self-efficacy are two such mediators.
Self-efficacy is one's belief about abilities to perform up to personal standards, the ability to produce desired results, and the feeling of having some ability to make important life decisions. Self-efficacy has been found to be related to the personality traits of extraversion and subjective well-being.
Self-efficacy, however, only partially mediates the relationship between extraversion (and neuroticism) and subjective happiness. This implies that there are most likely other factors that mediate the relationship between subjective happiness and personality traits. Self-esteem may be another similar factor.
Individuals with a greater degree of confidence about themselves and their abilities seem to have both higher degrees of subjective well-being and higher levels of extraversion.
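The "partial mediation" finding described above can be sketched numerically. The snippet below is a minimal illustration using standardized (correlation-scale) path coefficients and the usual two-predictor regression formulas; all input correlations are invented for illustration and are not values from the studies cited in the text.

```python
# X = extraversion, M = self-efficacy (mediator), Y = subjective happiness.
# Inputs are pairwise correlations; outputs are standardized path coefficients.

def mediation_paths(r_xm, r_my, r_xy):
    """Decompose the total X->Y effect into direct and indirect parts."""
    b = (r_my - r_xy * r_xm) / (1 - r_xm ** 2)        # M -> Y, controlling X
    direct = (r_xy - r_my * r_xm) / (1 - r_xm ** 2)   # X -> Y, controlling M
    indirect = r_xm * b                               # X -> M -> Y
    return direct, indirect

# Invented example correlations:
direct, indirect = mediation_paths(r_xm=0.5, r_my=0.6, r_xy=0.4)
# direct + indirect recovers the total effect (0.4). Because the direct
# effect remains nonzero, the mediation is only partial, which is the
# pattern reported for self-efficacy in the text.
```

The point of the decomposition is that a mediator "partially" mediates precisely when the indirect path is nonzero but does not absorb the whole total effect.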
Other research has examined the phenomenon of mood maintenance as another possible mediator. Mood maintenance is the ability to maintain one's average level of happiness in the face of an ambiguous situation – meaning a situation that has the potential to engender either positive or negative emotions in different individuals.
It has been found to be a stronger force in extraverts. This means that the happiness levels of extraverted individuals are less susceptible to the influence of external events. This finding implies that extraverts' positive moods last longer than those of introverts.
Developmental biological model:
Modern conceptions of personality, such as the Temperament and Character Inventory have suggested four basic temperaments that are thought to reflect basic and automatic responses to danger and reward that rely on associative learning.
The four temperaments (harm avoidance, reward dependence, novelty seeking, and persistence) are somewhat analogous to the ancient conception of melancholic, sanguine, choleric, and phlegmatic personality types, although the temperaments reflect dimensions rather than discrete categories.
While factor based approaches to personality have yielded models that account for significant variance, the developmental biological model has been argued to better reflect underlying biological processes. Distinct genetic, neurochemical and neuroanatomical correlates responsible for each temperamental trait have been observed, unlike with five factor models.
The harm avoidance trait has been associated with increased reactivity in insular and amygdala salience networks, as well as reduced 5-HT2 receptor binding peripherally and reduced GABA concentrations. Novelty seeking has been associated with reduced activity in insular salience networks and increased striatal connectivity. Novelty seeking correlates with dopamine synthesis capacity in the striatum, and with reduced autoreceptor availability in the midbrain.
Reward dependence has been linked with the oxytocin system, with increased concentration of plasma oxytocin being observed, as well as increased volume in oxytocin related regions of the hypothalamus. Persistence has been associated with increased striatal-mPFC connectivity, increased activation of ventral striatal-orbitofrontal-anterior cingulate circuits, as well as increased salivary amylase levels indicative of increased noradrenergic tone.
Environmental influences:
It has been shown that personality traits are more malleable by environmental influences than researchers originally believed. Personality differences predict the occurrence of life experiences.
One study has shown how the home environment, specifically the type of parents a person has, can affect and shape their personality. Mary Ainsworth's Strange Situation experiment showcased how babies reacted to having their mother leave them alone in a room with a stranger.
The different styles of attachment, as labelled by Ainsworth, were secure, ambivalent, avoidant, and disorganized. Children who were securely attached tend to be more trusting and sociable, and are confident in their day-to-day lives. Children who were disorganized were reported to have higher levels of anxiety, anger, and risk-taking behavior.
Judith Rich Harris's group socialization theory postulates that an individual's peer groups, rather than parental figures, are the primary influence of personality and behavior in adulthood.
Intra- and intergroup processes, not dyadic relationships such as parent-child relationships, are responsible for the transmission of culture and for environmental modification of children's personality characteristics. Thus, this theory points at the peer group representing the environmental influence on a child's personality rather than the parental style or home environment.
Tetsuya Kawamoto's Personality Change from Life Experiences: Moderation Effect of Attachment Security describes a series of laboratory tests. The study mainly focused on the effects of life experiences on change in personality. The assessments suggested that "the accumulation of small daily experiences may work for the personality development of university students and that environmental influences may vary by individual susceptibility to experiences, like attachment security".
Cross-cultural studies:
There has been some recent debate over the subject of studying personality in different cultures. Some researchers think that personality comes entirely from culture, so it cannot be meaningfully studied cross-culturally. Others believe that some elements are shared by all cultures, and an effort is being made to demonstrate the cross-cultural applicability of "the Big Five".
Cross-cultural assessment depends on the universality of personality traits, which is whether there are common traits among humans regardless of culture or other factors. If there is a common foundation of personality, then it can be studied on the basis of human traits rather than within certain cultures. This can be measured by comparing whether assessment tools are measuring similar constructs across countries or cultures.
Two approaches to researching personality are looking at emic and etic traits. Emic traits are constructs unique to each culture, determined by local customs, thoughts, beliefs, and characteristics. Etic traits are considered universal constructs, evident across cultures, that represent a biological basis of human personality.
If personality traits are unique to individual culture, then different traits should be apparent in different cultures. However, the idea that personality traits are universal across cultures is supported by establishing the Five Factor Model of personality across multiple translations of the NEO-PI-R, which is one of the most widely used personality measures.
When administering the NEO-PI-R to 7,134 people across six languages, the results show a similar pattern of the same five underlying constructs that are found in the American factor structure.
Similar results were found using the Big Five Inventory (BFI), as it was administered in 56 nations across 28 languages. The five factors continued to be supported both conceptually and statistically across major regions of the world, suggesting that these underlying factors are common across cultures.
There are some differences across culture but they may be a consequence of using a lexical approach to study personality structures, as language has limitations in translation and different cultures have unique words to describe emotion or situations.
For example, the term "feeling blue" is used to describe sadness in more Westernized cultures, but does not translate to other languages. Differences across cultures could be due to real cultural differences, but they could also be consequences of poor translations, biased sampling, or differences in response styles across cultures.
Examining personality questionnaires developed within a culture can also be useful evidence for the universality of traits across cultures, as the same underlying factors can still be found. Results from several European and Asian studies have found overlapping dimensions with the Five Factor Model as well as additional culture-unique dimensions. Finding similar factors across cultures provides support for the universality of personality trait structure, but more research is necessary to gain stronger support.
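The claim that "similar factors" replicate across cultures is usually checked numerically by comparing factor loadings between samples. One common index is Tucker's congruence coefficient, sketched below; the loading values are invented for illustration and do not come from the NEO-PI-R or BFI studies cited above.

```python
# Tucker's congruence coefficient: compares two vectors of factor loadings.
# A value of 1.0 means identical loading patterns (up to scale); values
# above roughly 0.95 are conventionally read as factor replication.

def tucker_phi(loadings_a, loadings_b):
    """Congruence between two factor-loading vectors."""
    num = sum(a * b for a, b in zip(loadings_a, loadings_b))
    den = (sum(a * a for a in loadings_a) *
           sum(b * b for b in loadings_b)) ** 0.5
    return num / den

# Hypothetical loadings for four extraversion items in two translations:
us = [0.80, 0.70, 0.60, 0.10]
translated = [0.75, 0.72, 0.55, 0.15]
phi = tucker_phi(us, translated)  # close to 1.0: the factor replicates
```

Note the coefficient is insensitive to the overall size of loadings and only compares their pattern, which is exactly what a cross-cultural replication claim is about.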
Historical development of concept:
The modern sense of individual personality is a result of the shifts in culture originating in the Renaissance, an essential element in modernity. In contrast, the Medieval European's sense of self was linked to a network of social roles: "the household, the kinship network, the guild, the corporation – these were the building blocks of personhood".
Stephen Greenblatt observes, in recounting the recovery (1417) and career of Lucretius' poem De rerum natura: "at the core of the poem lay key principles of a modern understanding of the world." "Dependent on the family, the individual alone was nothing," Jacques Gélis observes. "The characteristic mark of the modern man has two parts: one internal, the other external; one dealing with his environment, the other with his attitudes, values, and feelings."
Rather than being linked to a network of social roles, the modern man is largely influenced by environmental factors such as "urbanization, education, mass communication, industrialization, and politicization."
Temperament and philosophy:
William James (1842–1910) argued that temperament explains a great deal of the controversies in the history of philosophy by arguing that it is a very influential premise in the arguments of philosophers. Despite seeking only impersonal reasons for their conclusions, James argued, the temperament of philosophers influenced their philosophy.
Temperament thus conceived is tantamount to a bias. Such bias, James explained, was a consequence of the trust philosophers place in their own temperament. James thought the significance of his observation lay in the premise that in philosophy an objective measure of success is whether a philosophy is peculiar to its philosopher or not, and whether a philosopher is dissatisfied with any other way of seeing things or not.
Mental make-up:
James argued that temperament may be the basis of several divisions in academia, but focused on philosophy in his 1907 lectures on Pragmatism. In fact, James' lecture of 1907 fashioned a sort of trait theory of the empiricist and rationalist camps of philosophy.
As in most modern trait theories, the traits of each camp are described by James as distinct and opposite, and may be possessed in different proportions on a continuum, and thus characterize the personality of philosophers of each camp. The "mental make-up" (i.e., personality) of rationalist philosophers is described as "tender-minded" and going by "principles", and that of empiricist philosophers as "tough-minded" and going by "facts".
James distinguishes each not only in terms of the philosophical claims they made in 1907, but by arguing that such claims are made primarily on the basis of temperament.
Furthermore, such categorization was only incidental to James' purpose of explaining his pragmatist philosophy, and is not exhaustive.
Empiricists and rationalists:
According to James, the temperament of rationalist philosophers differed fundamentally from the temperament of empiricist philosophers of his day. The tendency of rationalist philosophers toward refinement and superficiality never satisfied an empiricist temper of mind.
Rationalism leads to the creation of closed systems, and such optimism is considered shallow by the fact-loving mind, for whom perfection is far off. Rationalism is regarded as pretension, and a temperament most inclined to abstraction. The temperament of rationalists, according to James, led to sticking with logic.
Empiricists, on the other hand, stick with the external senses rather than logic. British empiricist John Locke's (1632–1704) explanation of personal identity provides an example of what James referred to. Locke explains the identity of a person, i.e. personality, on the basis of a precise definition of identity, by which the meaning of identity differs according to what it is being applied to.
The identity of a person, is quite distinct from the identity of a man, woman, or substance according to Locke. Locke concludes that consciousness is personality because it "always accompanies thinking, it is that which makes every one to be what he calls self," and remains constant in different places at different times. Thus his explanation of personal identity is in terms of experience as James indeed maintained is the case for most empiricists.
Rationalists conceived of the identity of persons differently than empiricists such as Locke who distinguished identity of substance, person, and life. According to Locke, Rene Descartes (1596–1650) agreed only insofar as he did not argue that one immaterial spirit is the basis of the person "for fear of making brutes thinking things too."
According to James, Locke tolerated arguments that a soul was behind the consciousness of any person. However, Locke's successor David Hume (1711–1776), and empirical psychologists after him denied the soul except for being a term to describe the cohesion of inner lives.
However, some research suggests Hume excluded personal identity from his opus An Inquiry Concerning Human Understanding because he thought his argument was sufficient but not compelling. Descartes himself distinguished active and passive faculties of mind, each contributing to thinking and consciousness in different ways.
The passive faculty, Descartes argued, simply receives, whereas the active faculty produces and forms ideas, but does not presuppose thought, and thus cannot be within the thinking thing. The active faculty mustn't be within self because ideas are produced without any awareness of them, and are sometimes produced against one's will.
Rationalist philosopher Benedictus Spinoza (1632–1677) argued that ideas are the first element constituting the human mind, but existed only for actually existing things. In other words, ideas of non-existent things are without meaning for Spinoza, because an idea of a non-existent thing cannot exist. Further, Spinoza's rationalism argued that the mind does not know itself, except insofar as it perceives the "ideas of the modifications of body," in describing its external perceptions, or perceptions from without.
On the contrary, from within, Spinoza argued, perceptions connect various ideas clearly and distinctly. The mind is not the free cause of its actions for Spinoza. Spinoza equates the will with the understanding, and explains the common distinction of these things as being two different things as error which results from the individual's misunderstanding of the nature of thinking.
Biology:
The biological basis of personality is the theory that anatomical structures located in the brain contribute to personality traits. This stems from neuropsychology, which studies how the structure of the brain relates to various psychological processes and behaviors.
For instance, in human beings, the frontal lobes are responsible for foresight and anticipation, and the occipital lobes are responsible for processing visual information. In addition, certain physiological functions such as hormone secretion also affect personality. For example, the hormone testosterone is important for sociability, affectivity, aggressiveness, and sexuality.
Additionally, studies show that the expression of a personality trait depends on the volume of the brain cortex it is associated with.
There is also a confusion among some psychologists who conflate personality with temperament. Temperament traits that are based on weak neurochemical imbalances within neurotransmitter systems are much more stable, consistent in behavior and show up in early childhood; they can't be changed easily but can be compensated for in behavior. In contrast to that, personality traits and features are the product of the socio-cultural development of humans and can be learned and/or changed.
Personology:
Personology confers a multidimensional, complex, and comprehensive approach to personality. According to Henry A. Murray, personology is "The branch of psychology which concerns itself with the study of human lives and the factors that influence their course which investigates individual differences and types of personality… the science of men, taken as gross units… encompassing “psychoanalysis” (Freud), “analytical psychology” (Jung), “individual psychology” (Adler) and other terms that stand for methods of inquiry or doctrines rather than realms of knowledge."
From a holistic perspective, personology studies personality as a whole, as a system, but at the same time through all its components, levels, and spheres.
Psychiatry:
Psychiatry is the medical specialty devoted to the diagnosis, prevention and treatment of mental disorders. High neuroticism is an independent prospective predictor for the development of the common mental disorders.
Interest in the history of psychiatry continues to grow, with an increasing emphasis on topics of current interest such as the history of psychopharmacology, electroconvulsive therapy, and the interplay between psychiatry and society.
See also:
- Cult of personality, political institution in which a leader uses mass media to create a larger-than-life public image
- Differential psychology
- Human variability
- Offender profiling
- Personality and Individual Differences, a scientific journal published bi-monthly by Elsevier
- Personality crisis (disambiguation)
- Personality rights, consisting of the right to individual publicity and privacy
- Personality style
- Personality disorder
- Personality computing
Mental Health including Body Image and Self-Esteem
YouTube Video: To reach beyond your limits by training your mind
(By Marisa Peer | TEDxKCS)
Pictured: Negative perceptions of Mental Health, Body Image and Self-esteem
Mental health is a level of psychological well-being, or an absence of mental illness. It is the "psychological state of someone who is functioning at a satisfactory level of emotional and behavioral adjustment". From the perspective of positive psychology or holism, mental health may include an individual's ability to enjoy life, and create a balance between life activities and efforts to achieve psychological resilience.
According to the World Health Organization (WHO), mental health includes "subjective well-being, perceived self-efficacy, autonomy, competence, inter-generational dependence, and self-actualization of one's intellectual and emotional potential, among others."
The WHO further states that the well-being of an individual is encompassed in the realization of their abilities, coping with normal stresses of life, productive work and contribution to their community. Cultural differences, subjective assessments, and competing professional theories all affect how "mental health" is defined. A widely accepted definition of health by mental health specialists is psychoanalyst Sigmund Freud's definition: the capacity "to work and to love".
Mental Health and Mental Illness:
According to the U.S. surgeon general (1999), mental health is the successful performance of mental function, resulting in productive activities, fulfilling relationships with other people, and providing the ability to adapt to change and cope with adversity. The term mental illness refers collectively to all diagnosable mental disorders—health conditions characterized by alterations in thinking, mood, or behavior associated with distress or impaired functioning.
A person struggling with their mental health may experience stress, depression, anxiety, relationship problems, grief, addiction, ADHD or learning disabilities, mood disorders, or other mental illnesses of varying degrees.
Therapists, psychiatrists, psychologists, social workers, nurse practitioners or physicians can help manage mental illness with treatments such as therapy, counseling, or medication.
Click on any of the following blue hyperlinks for more about Mental Health:
- History
- Significance
- Perspectives
- Mental well-being
- Children and young adults
- Prevention
- Cultural and religious considerations
- Emotional improvement
- Emotional issues
- Treatment
- Activity therapies
- Biofeedback
- Expressive therapies
- Group therapy
- Psychotherapy
- Meditation
- Spiritual counseling
- Social work in mental health
- Prevalence and programs
- Roles and functions
- History
- United States
- Canada
- India
- Australia
- See also:
- Ethnopsychopharmacology
- Health
- Mental environment
- Reason
- Sanity
- Technology and mental health issues
- World Mental Health Day
- Related disciplines and specialties
- Mental health in different occupations and regions
- National Institute of Mental Health (United States)
- The National Mental Health Development Unit (NMHDU), England
- Health-EU Portal Mental Health in the EU
Body Image is a person's perception of the aesthetics or sexual attractiveness of their own body. The phrase body image was first coined by the Austrian neurologist and psychoanalyst Paul Schilder in his book The Image and Appearance of the Human Body (1935).
Human society has at all times placed great value on beauty of the human body, but a person's perception of their own body may not correspond to society's standards.
The concept of body image is used in a number of disciplines, including psychology, medicine, psychiatry, psychoanalysis, philosophy and cultural and feminist studies. The term is also often used in the media. Across these disciplines and media there is no consensus definition.
A person's body image is thought to be, in part, a product of their personal experiences, personality, and various social and cultural forces. A person's sense of their own physical appearance, usually in relation to others or in relation to some cultural "ideal," can shape their body image. A person's perception of their appearance can be different from how others actually perceive them.
Research suggests that exposure to mass media depicting the thin-ideal body may be linked to body image disturbance in women. This meta-analysis examined experimental and correlational studies testing the links between media exposure to body dissatisfaction, internalization of the thin ideal, and eating behaviors and beliefs with a sample of 77 studies that yielded 141 effect sizes.
Effects for some outcome variables were moderated by publication year and study design. The findings support the notion that exposure to media images depicting the thin-ideal body is related to body image concerns for women.
A 2007 report by the American Psychological Association found that a culture-wide sexualization of girls and women was contributing to increased female anxiety associated with body image.
Similar findings associated with body image were found by an Australian government Senate Standing Committee report on the sexualization of children in the media. However, other scholars have expressed concern that these claims are not based on solid data.
Body image can have a wide range of psychological effects and physical effects. Throughout history, it has been extremely difficult for people to live up to the standards of society and what they believe the ideal body is.
Many factors contribute to a person's body image, including family dynamics, mental illness, biological predispositions and environmental causes for obesity or malnutrition, and cultural expectations (e.g., media and politics).
Both people who are underweight and people who are overweight can have a poor body image. However, because people are constantly told and shown the cosmetic appeal of weight loss and are warned about the risks of obesity, those who are normal or overweight on the BMI scale are at higher risk of a poor body image.
This is something that can lead to a change in a person's body image. Often, people who have a low body image will try to alter their bodies in some way, such as by dieting or undergoing cosmetic surgery.
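The "BMI scale" mentioned above is simple arithmetic, sketched here for concreteness. The formula (weight in kilograms divided by height in metres squared) and the WHO cut-offs are standard; the function names are ours.

```python
# Body mass index and the standard WHO adult categories.

def bmi(weight_kg, height_m):
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(value):
    """Map a BMI value to the conventional WHO adult category."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

# Example: 70 kg at 1.75 m gives a BMI of about 22.9, i.e. "normal".
```

These cut-offs are population-level conventions for adults, not individual diagnoses, which is part of why the text's point about body image applies even to people in the "normal" range.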
Click on any of the following blue hyperlinks for more about Body Image:
- Media Impact
- On women and in general
- On men
- On fashion industry
- Social media and disorders
- Measurement
- Figure preferences
- Video projection techniques
- Questionnaires
- Gender differences
- Weight
- Race
- See also:
Self-esteem:
In sociology and psychology, self-esteem reflects a person's overall subjective emotional evaluation of his or her own worth. It is a judgment of oneself as well as an attitude toward the self. Self-esteem encompasses beliefs about oneself (for example, "I am competent", "I am worthy") as well as emotional states, such as triumph, despair, pride, and shame.
Smith and Mackie (2007) defined it by saying, "The self-concept is what we think about the self; self-esteem is the positive or negative evaluations of the self, as in how we feel about it."
Self-esteem is attractive as a social psychological construct because researchers have conceptualized it as an influential predictor of certain outcomes, such as academic achievement, happiness, satisfaction in marriage and relationships, and criminal behavior.
Self-esteem can apply specifically to a particular dimension (for example, "I believe I am a good writer and feel happy about that") or a global extent (for example, "I believe I am a bad person, and feel bad about myself in general").
Psychologists usually regard self-esteem as an enduring personality characteristic ("trait" self-esteem), though normal, short-term variations ("state" self-esteem) also exist. Synonyms or near-synonyms of self-esteem include: self-worth, self-regard, self-respect, and self-integrity.
Click on any of the following blue hyperlinks for more about Self-Esteem:
- History
- Effect on public policy
- Theories
- Measurement
- Development across lifespan
- Types
- Importance
- Neuroscience
- Criticism and controversy
- See also:
- Assertiveness
- Blue Eyed
- Clinical depression
- Dunning–Kruger effect
- Emotional competence
- Fear of negative evaluation
- Gumption trap
- Hubris
- Identity
- Inner critic
- Invisible support
- List of confidence tricks
- Optimism bias
- Outline of self
- Overconfidence effect
- Passiveness
- Performance anxiety
- Self-awareness
- Self-compassion
- Self-esteem functions
- Self-esteem instability
- Self-evaluation maintenance theory
- Self image
- Shyness
- Social anxiety
- Social phobia
- Suicide prevention
Crohn's Disease
- YouTube Video: What is Crohn's Disease?
- YouTube Video: Crohns or Colitis? - Mayo Clinic
- YouTube Video: Living with Crohn's Disease - BBC News
Crohn's disease is a type of inflammatory bowel disease (IBD) that may affect any part of the gastrointestinal tract from mouth to anus. Signs and symptoms often include abdominal pain, diarrhea (which may be bloody if inflammation is severe), fever, and weight loss.
Other complications may occur outside the gastrointestinal tract and include anemia, skin rashes, arthritis, inflammation of the eye, and tiredness. The skin rashes may be due to infections as well as pyoderma gangrenosum or erythema nodosum. Bowel obstruction may occur as a complication of chronic inflammation, and those with the disease are at greater risk of bowel cancer.
While the cause of Crohn's disease is unknown, it is believed to be due to a combination of environmental, immune, and bacterial factors in genetically susceptible individuals. It results in a chronic inflammatory disorder, in which the body's immune system attacks the gastrointestinal tract, possibly targeting microbial antigens.
While Crohn's is an immune-related disease, it does not appear to be an autoimmune disease (in that the immune system is not being triggered by the body itself). The exact underlying immune problem is not clear; however, it may be an immunodeficiency state.
About half of the overall risk is related to genetics with more than 70 genes having been found to be involved. Tobacco smokers are twice as likely to develop Crohn's disease as nonsmokers. It also often begins after gastroenteritis.
Diagnosis is based on a number of findings including biopsy and appearance of the bowel wall, medical imaging and description of the disease. Other conditions that can present similarly include irritable bowel syndrome and Behçet's disease.
There are no medications or surgical procedures that can cure Crohn's disease. Treatment options are intended to help with symptoms, maintain remission, and prevent relapse. In those newly diagnosed, a corticosteroid may be used for a brief period of time to rapidly improve symptoms alongside another medication such as either methotrexate or a thiopurine used to prevent recurrence.
Stopping smoking is recommended in people with Crohn's disease.
One in five people with the disease is admitted to hospital each year, and half of those with the disease will require surgery for the disease at some point over a ten-year period. While surgery should be used as little as possible, it is necessary to address some abscesses, certain bowel obstructions, and cancers. Checking for bowel cancer via colonoscopy is recommended every few years, starting eight years after the disease has begun.
Crohn's disease affects about 3.2 per 1,000 people in Europe and North America. It is less common in Asia and Africa. It has historically been more common in the developed world. Rates have, however, been increasing, particularly in the developing world, since the 1970s.
Inflammatory bowel disease resulted in 47,400 deaths in 2015 and those with Crohn's disease have a slightly reduced life expectancy. It tends to start in the teens and twenties, although it can occur at any age.
Males and females are equally affected. The disease was named after gastroenterologist Burrill Bernard Crohn, who in 1932, together with two other colleagues at Mount Sinai Hospital in New York, described a series of patients with inflammation of the terminal ileum of the small intestine, the area most commonly affected by the illness.
Click on any of the following blue hyperlinks for more about Crohn's Disease:
- Signs and symptoms
- Cause
- Pathophysiology
- Diagnosis
- Management
- Prognosis
- Epidemiology
- History
- Research
- See also:
DNA (DeoxyriboNucleic Acid) including a List of Human Genes
YouTube Video: What is DNA and How Does it Work?
Pictured: The structure of the DNA double helix. The atoms in the structure are color-coded by element and the detailed structure of two base pairs are shown in the bottom right.
Deoxyribonucleic acid (DNA) is a molecule that carries most of the genetic instructions used in the growth, development, functioning and reproduction of all known living organisms and many viruses.
DNA and RNA are nucleic acids; alongside proteins and complex carbohydrates, they are one of the three major types of macro molecule that are essential for all known forms of life.
Most DNA molecules consist of two biopolymer strands coiled around each other to form a double helix. The two DNA strands are known as polynucleotides since they are composed of simpler units called nucleotides.
Each nucleotide is composed of a nitrogen-containing nucleobase—either cytosine (C), guanine (G), adenine (A), or thymine (T)—as well as a sugar called deoxyribose and a phosphate group.
The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone.
According to base pairing rules (A with T, and C with G), hydrogen bonds bind the nitrogenous bases of the two separate polynucleotide strands to make double-stranded DNA.
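The base-pairing rules above (A with T, C with G) can be sketched in a few lines of Python. This is an illustrative toy, not a bioinformatics library; the function names are our own.

```python
# A minimal sketch of the Watson-Crick base-pairing rules: given one
# strand, derive the complementary strand (A<->T, C<->G). Reversing the
# result reflects the anti-parallel orientation of the two strands.

PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    return "".join(PAIRING[base] for base in strand)

def reverse_complement(strand: str) -> str:
    # Read the complementary strand in its own 5'->3' direction.
    return complement(strand)[::-1]

print(complement("ATGC"))          # TACG
print(reverse_complement("ATGC"))  # GCAT
```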
The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10³⁷, and weighs 50 billion tons. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon).
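As a rough sanity check of those figures, assuming an average mass of about 650 daltons per DNA base pair (a common approximation, not stated in the text):

```python
# Back-of-envelope check of the figures above, using the common
# approximation of ~650 Da (g/mol) per DNA base pair.

AVOGADRO = 6.022e23          # molecules per mole
BP_MASS_G_PER_MOL = 650.0    # approximate mass of one base pair, g/mol
TOTAL_BP = 5.0e37            # estimated base pairs on Earth (from the text)

total_grams = TOTAL_BP * BP_MASS_G_PER_MOL / AVOGADRO
total_tonnes = total_grams / 1e6
print(f"{total_tonnes:.1e} tonnes")  # ~5.4e+10, i.e. roughly 50 billion tonnes
```

The arithmetic lands within rounding distance of the 50-billion-ton figure quoted above.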
DNA stores biological information. The DNA backbone is resistant to cleavage, and both strands of the double-stranded structure store the same biological information. Biological information is replicated as the two strands are separated. A significant portion of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences.
The two strands of DNA run in opposite directions to each other and are therefore anti-parallel. Attached to each sugar is one of four types of nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes biological information.
Under the genetic code, RNA strands are translated to specify the sequence of amino acids within proteins. These RNA strands are initially created using DNA strands as a template in a process called transcription.
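The transcription-and-translation flow described above can be sketched as a toy example. The codon table here is a deliberately tiny subset of the standard genetic code, and the helper names are illustrative only.

```python
# A toy sketch of the two steps described above: transcription (DNA
# -> RNA) and translation (RNA codons -> amino acids). Only a few
# codons of the standard genetic code are included for illustration.

def transcribe(dna: str) -> str:
    # RNA uses uracil (U) in place of thymine (T); here we simply take
    # the coding-strand sequence and swap T for U.
    return dna.replace("T", "U")

CODON_TABLE = {  # tiny subset of the standard genetic code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def translate(rna: str) -> list[str]:
    amino_acids = []
    for i in range(0, len(rna) - 2, 3):  # read three bases at a time
        residue = CODON_TABLE.get(rna[i:i + 3], "?")
        if residue == "STOP":
            break
        amino_acids.append(residue)
    return amino_acids

rna = transcribe("ATGTTTGGCTAA")
print(rna)             # AUGUUUGGCUAA
print(translate(rna))  # ['Met', 'Phe', 'Gly']
```

Real transcription works from the template strand and real translation handles all 64 codons; the sketch only shows the shape of the two-step flow from DNA sequence to amino-acid sequence.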
Within cells, DNA is organized into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes.
Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts.
In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
DNA was first isolated by Friedrich Miescher in 1869. Its molecular structure was identified by James Watson and Francis Crick in 1953, whose model-building efforts were guided by X-ray diffraction data acquired by Rosalind Franklin.
DNA is used by researchers as a molecular tool to explore physical laws and theories, such as the ergodic theorem and the theory of elasticity. The unique material properties of DNA have made it an attractive molecule for materials scientists and engineers interested in micro- and nano-fabrication. Among notable advances in this field are DNA origami and DNA-based hybrid materials.
Click here for further amplification.
____________________________________________________________
A List of Human Genes
The human genome is the complete set of nucleic acid sequences for humans, encoded as DNA within the 23 chromosome pairs in cell nuclei and in a small DNA molecule found within individual mitochondria.
Human genomes include both protein-coding DNA genes and noncoding DNA. Haploid human genomes, which are contained in germ cells (the egg and sperm gamete cells created in the meiosis phase of sexual reproduction before fertilization creates a zygote) consist of three billion DNA base pairs, while diploid genomes (found in somatic cells) have twice the DNA content.
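A quick back-of-envelope calculation shows what those base-pair counts mean in information terms, assuming the conventional 2 bits per base (four possible bases); this arithmetic is ours, not from the text.

```python
# Rough arithmetic on the genome sizes quoted above: at 2 bits per base
# (four possible bases), a haploid genome of ~3 billion base pairs is on
# the order of 750 MB of raw information; a diploid cell holds double.

HAPLOID_BP = 3_000_000_000
BITS_PER_BASE = 2

haploid_bytes = HAPLOID_BP * BITS_PER_BASE / 8
print(f"haploid: ~{haploid_bytes / 1e6:.0f} MB")      # ~750 MB
print(f"diploid: ~{2 * haploid_bytes / 1e9:.1f} GB")  # ~1.5 GB
```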
While there are significant differences among the genomes of human individuals (on the order of 0.1%), these are considerably smaller than the differences between humans and their closest living relatives, the chimpanzees (approximately 4%) and bonobos.
Below is a list of human genes by chromosome:
- Chromosome 1 (human)
- Chromosome 2 (human)
- Chromosome 3 (human)
- Chromosome 4 (human)
- Chromosome 5 (human)
- Chromosome 6 (human)
- Chromosome 7 (human)
- Chromosome 8 (human)
- Chromosome 9 (human)
- Chromosome 10 (human)
- Chromosome 11 (human)
- Chromosome 12 (human)
- Chromosome 13 (human)
- Chromosome 14 (human)
- Chromosome 15 (human)
- Chromosome 16 (human)
- Chromosome 17 (human)
- Chromosome 18 (human)
- Chromosome 19 (human)
- Chromosome 20 (human)
- Chromosome 21 (human)
- Chromosome 22 (human)
- Chromosome X (human)
- Chromosome Y (human)
Genetics is the study of genes, genetic variation, and heredity in living organisms. It is generally considered a field of biology, but it intersects frequently with many of the life sciences and is strongly linked with the study of information systems.
The father of genetics is Gregor Mendel, a late 19th-century scientist and Augustinian friar. Mendel studied 'trait inheritance', patterns in the way traits were handed down from parents to offspring. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene.
Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded beyond inheritance to studying the function and behavior of genes.
Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance) and within the context of a population. Genetics has given rise to a number of sub-fields including epigenetics and population genetics.
Organisms studied within the broad field span the domain of life, including bacteria, plants, animals, and humans.
Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intra- or extra-cellular environment of a cell or organism may switch gene transcription on or off. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate.
While the average height of the two corn stalks may be genetically determined to be equal, the one in the arid climate only grows to half the height of the one in the temperate climate due to lack of water and nutrients in its environment.
For amplification, click on any of the following:
- The gene
- History
- Features of inheritance
- Molecular basis for inheritance
- Gene expression
- Genetic change
- Society and culture
- See also:
Mental Disorders, including a List of Mental Disorders
YouTube Video: The Truth about Mental Health Disorders
Pictured: Common Mental Disorders for (L) Adults and (R) Children
Click here for a List of Mental Disorders.
A mental disorder, also called a mental illness or psychiatric disorder, is a behavioral or mental pattern that may cause suffering or a poor ability to function in life.
Such features may be persistent, relapsing and remitting, or occur as a single episode. Many disorders have been described, with signs and symptoms that vary widely between specific disorders. Such disorders may be diagnosed by a mental health professional.
The causes of mental disorders are often unclear. Theories may incorporate findings from a range of fields. Mental disorders are usually defined by a combination of how a person behaves, feels, perceives, or thinks.
This may be associated with particular regions or functions of the brain, often in a social context. A mental disorder is one aspect of mental health. Cultural and religious beliefs, as well as social norms, should be taken into account when making a diagnosis.
Services are based in psychiatric hospitals or in the community, and assessments are carried out by psychiatrists, psychologists, and clinical social workers, using various methods but often relying on observation and questioning.
Treatments are provided by various mental health professionals. Psychotherapy and psychiatric medication are two major treatment options.
Other treatments include social interventions, peer support, and self-help. In a minority of cases there might be involuntary detention or treatment. Prevention programs have been shown to reduce depression.
Common mental disorders include depression, which affects about 400 million, dementia which affects about 35 million, and schizophrenia, which affects about 21 million people globally.
Stigma and discrimination can add to the suffering and disability associated with mental disorders, leading to various social movements attempting to increase understanding and challenge social exclusion.
The definition and classification of mental disorders are key issues for researchers as well as service providers and those who may be diagnosed. For a mental state to classify as a disorder, it generally needs to cause dysfunction. Most international clinical documents use the term mental "disorder", while "illness" is also common. It has been noted that using the term "mental" (i.e., of the mind) is not necessarily meant to imply separateness from brain or body.
According to DSM-IV, a mental disorder is a psychological syndrome or pattern which is associated with distress (e.g. via a painful symptom), disability (impairment in one or more important areas of functioning), increased risk of death, or causes a significant loss of autonomy; however it excludes normal responses such as grief from loss of a loved one, and also excludes deviant behavior for political, religious, or societal reasons not arising from a dysfunction in the individual.
DSM-IV precedes the definition with caveats, stating that, as in the case with many medical terms, mental disorder "lacks a consistent operational definition that covers all situations", noting that different levels of abstraction can be used for medical definitions, including pathology, symptomology, deviance from a normal range, or etiology, and that the same is true for mental disorders, so that sometimes one type of definition is appropriate, and sometimes another, depending on the situation.
In 2013, the American Psychiatric Association (APA) redefined mental disorders in the DSM-5 as "a syndrome characterized by clinically significant disturbance in an individual's cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning."
Click on any of the following blue hyperlinks for more about Mental Disorders:
- Classifications
- Disorders
- Signs and symptoms
- Causes
  - Drugs
  - Genetics
  - Models
- Diagnosis
- Prevention
  - Depression
  - Anxiety
  - Psychosis
  - Mental health strategies
  - Prevention programmes
  - Targeted vs universal
- Management
  - Psychotherapy
  - Medication
  - Other
- Epidemiology
- History
  - Ancient civilizations
  - Europe
  - Europe and the U.S.
- Society and culture
  - Religion
  - Movements
  - Cultural bias
  - Laws and policies
  - Perception and discrimination
- Mental health
- See also:
- Erving Goffman
- Mental illness portrayed in media
- Mental illness in American prisons
- National Institute of Mental Health
- Psychological evaluation
- Parity of esteem
- NIMH.NIH.gov – National Institute of Mental Health
- International Committee of Women Leaders on Mental Health
- Psychology Dictionary
- Metapsychology Online Reviews: Mental Health
- The New York Times: Mental Health & Disorders
- The Guardian: Mental Health
- Perring, Christian (22 February 2010). "Mental Illness". Stanford Encyclopedia of Philosophy.
- "Insane, Statistics of". Encyclopedia Americana. 1920.
- Adverse Childhood Experiences: Risk Factors for Substance Misuse and Mental Health Dr. Robert Anda of the U.S. Centers for Disease Control describes the relation between childhood adversity and mental health (video)
Cancer, including a List of Different Cancers as well as Treatment Options
YouTube Video: Cancer treatment drug heading for US approval (by CNN 7-13-17)
Pictured: (L) Symptoms of cancer metastasis depend on the location of the tumor. (Courtesy of Mikael Häggström); (R) The incidence of lung cancer is highly correlated with smoking. (Courtesy of Sakurambo - Vectorized version of Image:Cancer smoking lung cancer correlation from NIH.png, originally published on the nih.gov)
Click here for a List of Different forms of Cancer.
Cancer is a group of diseases involving abnormal cell growth with the potential to invade or spread to other parts of the body. Not all tumors are cancerous; benign tumors do not spread to other parts of the body.
Possible signs and symptoms include a lump, abnormal bleeding, prolonged cough, unexplained weight loss, and a change in bowel movements. While these symptoms may indicate cancer, they may have other causes. Over 100 types of cancers affect humans.
Tobacco use is the cause of about 22% of cancer deaths. Another 10% is due to obesity, poor diet, lack of physical activity, and excessive drinking of alcohol.
Other factors include certain infections, exposure to ionizing radiation and environmental pollutants.
In the developing world nearly 20% of cancers are due to infections such as hepatitis B, hepatitis C and human papillomavirus infection. These factors act, at least partly, by changing the genes of a cell.
Typically many genetic changes are required before cancer develops. Approximately 5–10% of cancers are due to inherited genetic defects from a person's parents. Cancer can be detected by certain signs and symptoms or screening tests. It is then typically further investigated by medical imaging and confirmed by biopsy.
Many cancers can be prevented by not smoking, maintaining a healthy weight, not drinking too much alcohol, eating plenty of vegetables, fruits and whole grains, vaccination against certain infectious diseases, not eating too much processed and red meat, and avoiding too much sunlight exposure.
Early detection through screening is useful for cervical and colorectal cancer. The benefits of screening in breast cancer are controversial. Cancer is often treated with some combination of radiation therapy, surgery, chemotherapy, and targeted therapy.
Pain and symptom management are an important part of care. Palliative care is particularly important in people with advanced disease.
The chance of survival depends on the type of cancer and extent of disease at the start of treatment. In children under 15 at diagnosis the five-year survival rate in the developed world is on average 80%. For cancer in the United States the average five-year survival rate is 66%.
In 2015 about 90.5 million people had cancer. About 14.1 million new cases occur a year (not including skin cancer other than melanoma). It caused about 8.8 million deaths, or 15.7% of all human deaths.
The most common types of cancer in males are lung cancer, prostate cancer, colorectal cancer and stomach cancer.
In females, the most common types are breast cancer, colorectal cancer, lung cancer and cervical cancer.
If skin cancer other than melanoma were included in total new cancers each year it would account for around 40% of cases.
In children, acute lymphoblastic leukemia and brain tumors are most common except in Africa where non-Hodgkin lymphoma occurs more often. In 2012, about 165,000 children under 15 years of age were diagnosed with cancer.
The risk of cancer increases significantly with age, and many cancers occur more commonly in developed countries. Rates are increasing as more people live to an old age and as lifestyle changes occur in the developing world. The financial costs of cancer were estimated at US$1.16 trillion per year as of 2010.
Cancers are a large family of diseases that involve abnormal cell growth with the potential to invade or spread to other parts of the body. They form a subset of neoplasms. A neoplasm or tumor is a group of cells that have undergone unregulated growth and will often form a mass or lump, but may be distributed diffusely.
All tumor cells show the six hallmarks of cancer. These characteristics are required to produce a malignant tumor. They include:
- Cell growth and division absent the proper signals
- Continuous growth and division even given contrary signals
- Avoidance of programmed cell death
- Limitless number of cell divisions
- Promoting blood vessel construction
- Invasion of tissue and formation of metastases
Signs and Symptoms:
Main article: Cancer signs and symptoms
When cancer begins, it typically produces no symptoms. Signs and symptoms appear as the mass grows or ulcerates. The findings that result depend on the cancer's type and location. Few symptoms are specific. Many frequently occur in individuals who have other conditions.
Cancer is a "great imitator". Thus, it is common for people diagnosed with cancer to have been treated for other diseases, which were hypothesized to be causing their symptoms.
People may become anxious or depressed post-diagnosis. The risk of suicide in people with cancer is approximately double that of the general population.
Click on any of the following blue hyperlinks for more about Cancer:
- Local symptoms
- Systemic symptoms
- Metastasis
- Causes
- Pathophysiology
- Diagnosis
- Classification
- Prevention
- Screening
- Management
- Prognosis
- Epidemiology
- History
- Society and culture
- Research
- Pregnancy
Treatment of Cancer:
Cancer can be treated by any of the following methods:
- surgery,
- chemotherapy,
- radiation therapy,
- hormonal therapy,
- targeted therapy (including immunotherapy such as monoclonal antibody therapy)
- and synthetic lethality.
The choice of therapy depends upon the location and grade of the tumor and the stage of the disease, as well as the general state of the patient (performance status). A number of experimental cancer treatments are also under development. Under current estimates, two in five people will have cancer at some point in their lifetime.
Complete removal of the cancer without damage to the rest of the body (that is, achieving cure with near-zero adverse effects) is the ideal goal of treatment and is often the goal in practice.
Sometimes this can be accomplished by surgery, but the propensity of cancers to invade adjacent tissue or to spread to distant sites by microscopic metastasis often limits its effectiveness; and chemotherapy and radiotherapy can have a negative effect on normal cells.
Therefore, cure with nonnegligible adverse effects may be accepted as a practical goal in some cases; and besides curative intent, practical goals of therapy can also include (1) suppressing the cancer to a subclinical state and maintaining that state for years of good quality of life (that is, treating the cancer as a chronic disease), and (2) palliative care without curative intent (for advanced-stage metastatic cancers).
Because "cancer" refers to a class of diseases, it is unlikely that there will ever be a single "cure for cancer" any more than there will be a single treatment for all infectious diseases.
Angiogenesis inhibitors were once thought to have potential as a "silver bullet" treatment applicable to many types of cancer, but this has not been the case in practice.
Click on any of the following blue hyperlinks for more about Treatment of Cancer:
- Types of treatments
- Symptom control and palliative care
- Research
- Complementary and alternative
- Special circumstances
- See also:
Eating Disorders including Types of Eating Disorders and Any Impact on Development
YouTube Video: A First-Person Account of Binge Eating Disorder
by WebMD
Pictured: Comparison of Anorexia and Bulimia Treatments
Click here for a List of Eating Disorders.
An eating disorder is a mental disorder defined by abnormal eating habits that negatively affect a person's physical or mental health. They include:
- binge eating disorder where people eat a large amount in a short period of time,
- anorexia nervosa where people eat very little and thus have a low body weight,
- bulimia nervosa where people eat a lot and then try to rid themselves of the food,
- pica where people eat non-food items,
- rumination disorder where people regurgitate food,
- avoidant/restrictive food intake disorder where people have a lack of interest in food,
- and a group of other specified feeding or eating disorders.
Anxiety disorders, depression, and substance abuse are common among people with eating disorders. These disorders do not include obesity.
The cause of eating disorders is not clear. Both biological and environmental factors appear to play a role. Cultural idealization of thinness is believed to contribute. Eating disorders affect about 12 percent of dancers. Those who have experienced sexual abuse are also more likely to develop eating disorders. Some disorders such as pica and rumination disorder occur more often in people with intellectual disabilities. Only one eating disorder can be diagnosed at a given time.
Treatment can be effective for many eating disorders. This typically involves counselling, a proper diet, a normal amount of exercise, and the reduction of efforts to eliminate food.
Hospitalization is occasionally needed. Medications may be used to help with some of the associated symptoms. At five years about 70% of people with anorexia and 50% of people with bulimia recover. Recovery from binge eating disorder is less clear and estimated at 20% to 60%. Both anorexia and bulimia increase the risk of death.
In the developed world binge eating disorder affects about 1.6% of women and 0.8% of men in a given year. Anorexia affects about 0.4% and bulimia affects about 1.3% of young women in a given year.
Over their lifetimes, up to 4% of women have anorexia, 2% have bulimia, and 2% have binge eating disorder. Anorexia and bulimia occur nearly ten times more often in females than males. Typically they begin in late childhood or early adulthood. Rates of other eating disorders are not clear. Rates of eating disorders appear to be lower in less developed countries.
Click on any of the following blue hyperlinks for more about Eating Disorders:
- Classification
- Signs and symptoms
- Causes
- Mechanisms
- Diagnosis
- Prevention
- Treatment
- Outcomes
- Epidemiology
- Economics
- See also:
Eating disorders and development:
Eating disorders typically peak at specific periods in development, notably sensitive and transitional periods such as puberty.
Feeding and eating disorders in childhood are often the result of a complex interplay of organic and non-organic factors. Medical conditions, developmental problems and temperament are all strongly correlated with feeding disorders, but important contextual features of the environment and parental behavior have also been found to influence the development of childhood eating disorders. Given the complexity of early childhood eating problems, consideration of both biological and behavioral factors is warranted for diagnosis and treatment.
Revisions in the DSM-5 (see next topic) have attempted to improve diagnostic utility for clinicians working with feeding and eating disorder patients. In the DSM-5, diagnostic categories are less defined by age of patient, and guided more by developmental differences in presentation and expression of eating problems.
Click on any of the following blue hyperlinks for more about Eating Disorders & Development:
- Avoidant/Restrictive Food Intake Disorder (ARFID)
- Pica
- Rumination disorder
- Anorexia nervosa
- Bulimia nervosa
- Binge eating disorder
Human Emotions, including Categories of Emotion
YouTube Video of an Emotional Acting Scene by Marlon Brando in the 1951 Movie "A Streetcar Named Desire" (1951 film)
Pictured Below:
(Top) Examples of the type of emotions
(Bottom) Plutchik's wheel of emotions
Click here for a List of Emotions by Category (alphabetical A-Z)
Emotion is any conscious experience characterized by intense mental activity and a high degree of pleasure or displeasure. Scientific discourse has drifted to other meanings and there is no consensus on a definition.
Emotion is often intertwined with mood, temperament, personality, disposition, and motivation. In some theories, cognition is an important aspect of emotion. Those acting primarily on the emotions they are feeling may seem as if they are not thinking, but mental processes are still essential, particularly in the interpretation of events. For example, the realization that we are in a dangerous situation, and the subsequent arousal of the body's nervous system (rapid heartbeat and breathing, sweating, muscle tension), is integral to the experience of feeling afraid.
Other theories, however, claim that emotion is separate from and can precede cognition.
Emotions are complex. According to some theories, they are states of feeling that result in physical and psychological changes that influence our behavior. The physiology of emotion is closely linked to arousal of the nervous system, with various states and strengths of arousal relating, apparently, to particular emotions.
Emotion is also linked to behavioral tendency. Extroverted people are more likely to be social and express their emotions, while introverted people are more likely to be socially withdrawn and conceal their emotions.
Emotion is often the driving force behind motivation, positive or negative. According to other theories, emotions are not causal forces but simply syndromes of components, which might include motivation, feeling, behavior, and physiological changes, but no one of these components is the emotion. Nor is the emotion an entity that causes these components.
Emotions involve different components, such as subjective experience, cognitive processes, expressive behavior, psychophysiological changes, and instrumental behavior. At one time, academics attempted to identify the emotion with one of the components: William James with a subjective experience, behaviorists with instrumental behavior, psychophysiologists with physiological changes, and so on.
More recently, emotion is said to consist of all the components. The different components of emotion are categorized somewhat differently depending on the academic discipline.
In psychology and philosophy, emotion typically includes a subjective, conscious experience characterized primarily by psychophysiological expressions, biological reactions, and mental states.
A similar multi-componential description of emotion is found in sociology.